
The Frailest Thing

Ten Years of Thinking About the Meaning of Technology

L. M. Sacasas
Contents

Preface

History and Philosophy of Technology

1. Kranzberg's Laws Of Technology

2. The Hidden Costs of Technological Shortcuts

3. American Technological Sublime: Our Civil Religion

4. “Does Technology Drive History?”: A Brief Review

5. Christianity and the History of Technology: A Review Essay

6. The World of Tomorrow, Inc.

7. Technology and Perception

8. What Motivates the Tech Critic

9. What Are We Talking About When We Talk About Technology?

10. 10 Points of Unsolicited Advice for Tech Writers

11. Why A Life Made Easier By Technology May Not Be Happier

12. Humanist Technology Criticism

13. The Technological Origins of Protestantism, or the Martin Luther Tech Myth

14. How (Not) to Learn From the History of Technology

15. Cyborg Discourse is Useless

16. Idols of Silicon and Data

17. Technology and the Great War


Technology, Ethics, and the Moral Life

18. Conscience of a Machine

19. Perspectives on Privacy and Human Flourishing

20. When Silence Is Power

21. Troubles We Must Not Refuse

22. Just Livin' To Be Heard

23. The Transhumanist Promise: Happiness You Cannot Refuse

24. Are Human Enhancement and AI Incompatible?

25. Do Artifacts Have Ethics?

26. Lethal Autonomous Weapons and Thoughtlessness

27. Attention and the Moral Life

28. To Act, or Not to Act On Social Media

29. The Ethics of Information Literacy

30. What Do I See When I See My Child?

31. Growing Up With AI

32. Digital Devices and Learning to Grow Up

33. The Ethics of Technological Mediation

34. One Does Not Simply Add Ethics To Technology

35. There Is No "We"

36. Does Technology Evolve More Quickly Than Ethical and Legal Norms?

37. Why We Can’t Have Humane Technology

38. Beyond the Trolley Car: The Moral Pedagogy of Ethical Tools

39. In Defense of Technology Ethics, Properly Understood


Media and the Self

40. The (Un)Naturalness of Privacy

41. Living for the Moment in the Age of the Image

42. Et in Facebook ego

43. Dead and Going to Die

44. Eight Theses Regarding Social Media

45. The Interrupted Self

46. Social Media and Loneliness

47. Early Modern and Digital Reading Practices

48. Spectrum of Attention

49. Landlines, Cell Phones, and Their Social Consequences

50. A Chance to Find Yourself

51. What Do I Like When I "Like" On Facebook?

52. Facebook and Loneliness

53. Bodies in Conversation

54. The Allegory of the Cave for the Digital Age

55. The Exhausting Work of Unremitting Self Presentation

Technology and Society

56. Resisting Disposable Reality

57. Kevin Kelly, God, and Technology

58. The Borg Complex: A Primer

59. Conquering the Night: Technology, Fear, and Anxiety

60. Disconnected Varieties of Augmented Experience


61. Technology, Speed, and Power

62. Our Little Apocalypses

63. Consider the Traffic Light Camera

64. Technology, Moral Discourse, and Political Communities

65. Machines, Work, and the Value of People

66. A Lost World

67. Finding A Place for Thought

68. Resisting the Habits of the Algorithmic Mind

69. Facebook Doesn’t Care About Your Children

70. Democracy and Technology

71. Superfluous People, the Ideology of Silicon Valley, and The Origins of Totalitarianism

72. Algorithms Who Art in Apps, Hallowed Be Thy Code

73. Eight Theses Regarding the Society of the Disciplinary Spectacle

74. Presidential Debates and Social Media, or Neil Postman Was Right

75. The World Will Be Our Skinner Box

76. Digital Media and the Revenge of Politics

77. The Myth of Convenience

78. Nine Theses Regarding the Culture of Digital Media

79. Orality and Literacy Revisited

Time, Memory, and Nostalgia

80. Nostalgia: The Third Wave

81. Keeping Time, Keeping Silent

82. From Memory Scarcity to Memory Abundance


83. If Nostalgia Is A Desire, What Does It Long For?

84. Google Photos and the Ideal of Automated Documentation

85. Digital Media and Our Experience of Time

86. Don’t Romanticize the Present

87. Time, Self, and Remembering Online

Being Online

88. The Treadmill Always Wins

89. Unplugged

90. Vows of Digital Poverty

91. Audience Overload

92. Digital Asceticism and Pascalian Angst

Miscellaneous

93. Shared Sensibilities

94. After Stories

95. Suffering, Joy, and Presence

96. The Tourist and the Pilgrim

97. The Tech-Savvy Amish

98. Freedom From Authenticity

99. What Do We Want, Really?

100. The Wonder of What We Are

Appendix: Writing Elsewhere


This work is licensed under a
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
First edition, December 2019.
“Between us and heaven or hell is only life, which is the frailest thing in the world.”

Blaise Pascal
Preface
I began writing The Frailest Thing in conjunction with the start of my graduate studies in 2009. Ten
years later, it seemed fitting to bring the enterprise to a close. What you have here are 100 dispatches
spanning that decade of thinking and writing about how technology informs, mediates, and conditions
our experience. These are, in my view, the best of what I had to say and what might still be worth say-
ing. They are arranged into seven sections whose borders are porous. Within those sections, the selec-
tions are generally arranged in chronological order. Dates are provided at the end of each piece.
I decided it was best to use the name of the blog as the title of the collection. I’ve chosen the words
of the subtitle rather deliberately. I never thought that I was writing to arrive at abstract, universally ap-
plicable conclusions about the place of technology in our lives, much less to render any binding ver-
dicts about the moral status of technology. I certainly never thought that I had arrived at any answers or
solutions. Throughout the ten years, the best I could say is that I was thinking out loud and in conversa-
tion with my readers about the meaning of technology and its significance for our lives. The exercise
has been useful for me, and I hope it has been helpful for others.
Thank you for reading.
PART I

History and Philosophy of Technology


1. Kranzberg's Laws Of Technology

Dr. Melvin Kranzberg was a professor of the history of technology at the Georgia Institute of Tech-
nology and the founding editor of Technology and Culture. In 1985, he delivered the presiden-
tial address at the annual meeting of the Society for the History of Technology in which he explained
what had already come to be known as Kranzberg’s Laws — “a series of truisms,” according to
Kranzberg, “deriving from a longtime immersion in the study of the development of technology and its
interactions with sociocultural change.”

I’ll list and summarize Kranzberg’s laws below, but first consider this argument by metaphor.
Kranzberg begins his address by explaining the terms of the debate over technological determinism. He
notes that it had become an “intellectual cliché” to speak of technology’s autonomy and to suppose that
“the machines have become the masters of man.” This view, which he associated with Jacques Ellul
and Langdon Winner, yielded the philosophical doctrine of technological determinism, “namely,
that technology is the prime factor in shaping our life-styles, values, institutions, and other elements of
our society.”

He then noted that not all scholars subscribed to “this version of technological omnipotence.” Lynn White, Jr., for example, suggested that technology “merely opens a door, it does not compel one to enter.” This is a compelling metaphor. It captures the view I’ve taken to calling “technological voluntarism,” technological determinism’s opposite: technology merely presents an opportunity; the choice of what to do with it remains ours. Yet, while it contains an element of truth, this view seems ultimately incomplete. And by pursuing the open door metaphor itself, Kranzberg suggests the inadequacy of a view that focuses too narrowly on the initial choice to use or not to use a technology:

Nevertheless, several questions do arise. True, one is not compelled to enter White’s open door, but an open door is an invitation. Besides, who decides which doors to open — and, once one has entered the door, are not one’s future directions guided by the contours of the corridor or chamber into which one has stepped? Equally important, once one has crossed the threshold, can one turn back?
Those are astute and necessary questions, and all the more evocative for the way they play off of
White’s metaphor. These questions, and the answers they imply, lead Kranzberg to the formulation of
his First Law: “Technology is neither good nor bad; nor is it neutral.” By which he means that,

“technology’s interaction with the social ecology is such that technical developments frequently
have environmental, social, and human consequences that go far beyond the immediate pur-
poses of the technical devices and practices themselves, and the same technology can have quite
different results when introduced into different contexts or under different circumstances.”

Here are the remaining laws with brief explanatory notes:

Second Law: Invention is the mother of necessity. “Every technical innovation seems to require ad-
ditional technical advances in order to make it fully effective.”

Third Law: Technology comes in packages, big and small. “The fact is that today’s complex
mechanisms usually involve several processes and components.”

Fourth Law: Although technology might be a prime element in many public issues, nontechnical
factors take precedence in technology-policy decisions. “… many complicated sociocultural factors, es-
pecially human elements, are involved, even in what might seem to be ‘purely technical’ decisions.”
“Technologically ‘sweet’ solutions do not always triumph over political and social forces.”

Fifth Law: All history is relevant, but the history of technology is the most relevant. “Although his-
torians might write loftily of the importance of historical understanding by civilized people and citi-
zens, many of today’s students simply do not see the relevance of history to the present or to their fu-
ture. I suggest that this is because most history, as it is currently taught, ignores the technological ele-
ment.”

Sixth Law: Technology is a very human activity — and so is the history of technology. “Behind every machine, I see a face — indeed, many faces: the engineer, the worker, the businessman or businesswoman, and, sometimes, the general and admiral. Furthermore, the function of the technology is its use by human beings — and sometimes, alas, its abuse and misuse.”
There is a good deal of insight packed into Kranzberg’s Laws and much to think about. I’ll leave
you with one last tidbit. A story recounted by Kranzberg to good effect:

A lady came up to the great violinist Fritz Kreisler after a concert and gushed, “Maestro, your
violin makes such beautiful music.” Kreisler held his violin up to his ear and said, “I don’t hear
any music coming out of it.” You see, the instrument, the hardware, the violin itself, was of no
use without the human element. But then again, without the instrument, Kreisler would not have
been able to make music.

August 25, 2011


2. The Hidden Costs of Technological Shortcuts

I once had a discussion with students about the desirability of instantly acquired knowledge or ex-
pertise. It was a purely hypothetical discussion, and I don’t quite remember how we got around to it.
Somehow, though, we found ourselves discussing a Matrix-like or Google-chip-type scenario in which
it would be possible to instantly download the contents of a book or martial art skills into the brain. The
latter, of course, raises all sorts of questions about the relationship between the mind and the body (and so does the former, for that matter), but let’s set those questions aside for the moment. My argument
at the time, and the one I’d like to briefly articulate here, was that even if we were able to acquire
knowledge through such a transaction, we should not really want to.

It’s not an easy argument to make. As you can imagine, many students were rather keen on the notion of foregoing hours of study and, just to be clear, the appeal is not altogether lost on me as I glance
at the mounting tower of books that looms nearby, Babel-like. And the appeal is not just a function of
the demands of an academic setting either. I am the sort of person who is more than a little pained by
the thought of all that I will never read given the unyielding limitations of a human life. Moreover, who
wouldn’t want to possess all of the knowledge that could be so easily attained?

This discussion came to mind recently because it struck me that the proposition in question — the
desirability of achieving the end while foregoing the means — takes on a certain plausibility within
technological society. In fact, it may be the very heart of the promise held out by technology. Effi-
ciency, ease, speed: this is what technology offers. Get what you’ve always wanted, only get it with
less hassle and get it faster. The ends are relatively fixed, but technology reconfigures the means by
which we achieve them.

This is the story of automation, for example; a machine steps in to do for us what we previously had
to do for ourselves. Consider this recent post from Kevin Kelly in which he outlined “The 7 Stages of
Robot Replacement” as follows:

“A robot/computer cannot possibly do what I do.


OK, it can do a lot, but it can’t do everything I do.

OK, it can do everything I do, except it needs me when it breaks down, which is often.

OK, it operates without failure, but I need to train it for new tasks.

Whew, that was a job that no human was meant to do, but what about me?

My new job is more fun and pays more now that robots/computers are doing my old job.

I am so glad a robot cannot possibly do what I do.”

Kelly, as always, is admirably optimistic. But this seems to me to raise certain questions: What ex-
actly is the end game here? Where does this trajectory culminate? Are there no good reasons to oppose
the outsourcing of human involvement in the means side of our projects and actions?

Let me go back to the matter of reading and knowledge, in part because this is the context in which I
originally formulated my scattered thoughts on this question. There is a certain unspoken assumption
that makes the possibility of instantly acquiring knowledge plausible and seemingly unproblematic:
that knowledge is merely aggregated data and its mode of acquisition does nothing to alter its status.
But what if this were a rather blinkered view of knowledge? And what if the acquisition of knowledge,
however understood, were itself only a means to other, more important ends?

If the work of learning is ultimately subordinate to becoming a certain kind of person, then it matters
very much how we go about learning. In some sense, it may matter more than what we learn. This is
because the manner in which we go about acquiring knowledge constitutes a kind of practice that over
the long haul shapes our character and disposition in non-trivial ways. Acquiring knowledge through
apprenticeship, for example, shapes people in a certain way, acquiring knowledge through extensive
print reading in another, and through web based learning in still another. The practice which constitutes
our learning, if we are to learn by it, will instill certain habits, virtues, and, potentially, vices — it will
shape the kind of person we are becoming.

This is one reason, then, why the means through which knowledge is acquired matters: it can shape
the sort of person you become in the long run. Another has to do with the pleasure that attends the
process. Of course, if one has not learned to take pleasure from reading or, to take another example, the
physical training associated with athletic excellence, then this point will ring rather hollow. Let me just
note that if I could immediately acquire the knowledge of a thousand books, I would know that I had missed
out on a considerable amount of enjoyment along the way. The sort of enjoyment that leads us to pause
as we approach the end of a book we will be rather sad to close.

All of this is also closely related to the undesirability of a frictionless life. When I seek to remove all
effort, all trouble, all resistance that stands between me and some object of desire, my attainment of
that object will be simultaneously rendered meaningless. But finally, it may be mostly about virtue.
What do I desire when I am lured by the promise of instant knowledge? It seems to me that since it is not the pleasure that attends the work and accomplishment of its acquisition, it must be the power or prestige that knowledge may bring. The elimination of the work associated with gaining knowledge or skill,
then, may not be a function of sloth but rather of pride.

And as with knowledge, so with countless other facets of human experience. Technology promises
to reconfigure the means so as to get us the end we desire. If, however, part of what we desire, perhaps
without knowing it, is intimately wrapped up with the means of attainment, then it will always be a
broken promise.

October 10, 2011


3. American Technological Sublime: Our Civil Religion
David Nye is the author of American Technological Sublime (1994), a classic work in the history of
technology. Except that it is not merely a work of history in the strict disciplinary sense. Nye draws
promiscuously from other fields — citing, for example, Burke, Kant, Durkheim, Barthes and Bau-
drillard among others — to present a wide ranging and insightful study of the American character.

The concept of the technological sublime was not original to Nye. It had first been developed by
Perry Miller, a prominent mid-twentieth century scholar of early American history, in his study The
Life of the Mind in America. There Miller noted in passing the almost religious veneration that some-
times attended the experience of new technologies in the early republic.

Miller found that in the early nineteenth century “technological majesty” had found a place along-
side the “starry heavens above and the moral law within to form a peculiarly American trinity of the
Sublime.” Taking the steamboat as an illustration, Miller suggests that technology’s cultural ascen-
dancy was abetted by a decidedly non-utilitarian aspect of awe and wonder bordering on religious rev-
erence. “From the beginning, down to the great scenes of Mark Twain,” Miller explains, “the steam-
boat was chiefly a subject of ecstasy for its sheer majesty and might, especially for its stately progress
at night, blazing with light through the swamps and forests of Nature.”

Leo Marx also employed the concept of the technological sublime, but again in passing. It fell to David Nye, a stu-
dent of Marx’s, to develop a book-length treatment of the concept. Nye looks to Edmund Burke and Immanuel Kant in order to fill out the concept of the sublime, but it is apparent from the start that Nye is
less interested in the philosopher’s solitary experience of the sublime in the presence of natural won-
ders than he is in the popular and often collective experience of the sublime in the presence of techno-
logical marvels.

Nye, with a historian’s eye for interesting and compelling sources, weaves together a series of case
studies that demonstrate the wonder, awe, and not a little trepidation that attended the appearance of the
railroads, the Brooklyn Bridge, the Hoover Dam, the factory, skyscrapers, the electrified cityscape, the
atomic bomb, and the moon landing. Through these case studies Nye demonstrates how Americans
have responded to certain technologies, either because of their scale or their dynamism, in a manner
that can best be described by the category of the sublime. And perhaps more importantly, he argues that
this experience of the technological sublime laced throughout American history has acted as a thread
stitching together the otherwise diverse and divided elements of American society.

If the philosophers provided Nye with the terminology to name the phenomenon, he takes his inter-
pretative framework from the sociologists of religion. Nye’s project is finally indebted more to Emile
Durkheim than to either Burke or Kant. Nye notes early on that “because of its highly emotional nature,
the popular sublime was intimately connected to religious feeling.” Later he observes that the Ameri-
can sublime was “fused with religion, nationalism, and technology” and ceased to be a “philosophical idea”; instead it “became submerged in practice.”

This emphasis on practice is especially important to Nye’s overall thesis and it is on the practices
surrounding the technological sublime that he concentrates his attention. For example, with each new
sublime technology he discusses, Nye explores the public ceremonies that attended its public reception.
The 1939 World’s Fair, to take another example, appears almost liturgical in Nye’s exposition with its
carefully choreographed exhibitions featuring religiously intoned narration and a singular vision for a
utopian future.

This attention to practices and ceremonies was signaled at the outset when Nye cited David
Kertzer’s “Neo-Durkheimian view” that “ritual can produce bonds of solidarity without requiring uni-
formity of belief.” This functionalist view of religious ritual informs Nye’s analysis of the technologi-
cal sublime throughout. In Nye’s story, the particular technologies are almost irrelevant. They are sig-
nificant only to the degree that they gather around themselves a set of practices. And these practices are
important to the degree that they serve to unify the body politic in the absence of shared blood lines or
religion.

All told, Nye has written a book about a secular civil religion focused on sublime technologies and
he has presented a convincing case. Absent the traditional elements that bind a society together, the
technological sublime provided Americans a set of shared experiences and categories around which a
national character could coalesce.
Nye has woven a rich, impressive narrative that draws technology and religion together to help ex-
plain the American national character. There’s a great deal I’ve left out that Nye develops. For exam-
ple: the evolving relationship of reason to nature and technology as mediated through the sublime or
the diminishing active role of citizens, and especially laborers, in the public experience of the techno-
logical sublime. But these, in my view, are minor threads.

The take-away insight is that Americans blended, almost seamlessly, their religious affections with
their veneration for technology until finally the experience of technology took on the unifying role of
religion in traditional societies. Historically, Americans have been divided by region, ethnicity, race, religion, and class. Americans share no bloodlines and they have no ancient history in their land. What they have possessed, however, is a remarkable faith in technological progress that has been periodically
rekindled by one sublime technology after another all the way to the space shuttle program and its final
mission.

The question I’m left with is this: What happens when the technological sublime runs dry? As Nye
points out, it is, unlike the natural sublime, a non-renewable resource. In other words, the sublime re-
sponse wears off and must find another object to draw it out. If Nye is right — and I do think it is pos-
sible to overreach so I want to be careful — there is not much else that serves as well as the technologi-
cal sublime to bind American society together. Perhaps then, part of our recent sense of unraveling, our
heightened sense of disunity, the so-called culture wars — perhaps these are accentuated by the with-
drawal of the technological sublime. Perhaps, but that would take another book to explore.

October 21, 2011


4. “Does Technology Drive History?”: A Brief Review

Having worked through the essays collected in this volume, one may be tempted to conclude that
the title question finally dissolves into a debate about semantics and taxonomies. The dilemma is real
enough, and palpably so. That technological determinism, as more than one author noted, is such a hard
notion to dispel, that it is repeatedly resuscitated, that it can exert such a powerful influence upon the
popular imagination—all of this suggests that “technological determinism” attempts, however unsatis-
factorily, to name a phenomenon that is still in need of adequate description and explanation. The es-
says collected by Merritt Roe Smith and Leo Marx (a historically suggestive pair of surnames) in Does
Technology Drive History?: The Dilemma of Technological Determinism are an effort to understand
this elusive phenomenon.

The first two essays, by Merritt Roe Smith and Michael L. Smith respectively, provide a cultural
frame for the succeeding discussion. They each draw on visual resources, particularly the lithography
of Currier and Ives, to illustrate the deep and abiding faith in technologically abetted progress that ani-
mated American history. The iconography of railroads, steamships, and telegraph lines infused popular
scenes depicting the advance of technology and civilization across the American continent, simultane-
ously dispersing darkness and displacing dissidents. These essays also point to the first of many dis-
tinctions that the reader must bear in mind as they navigate the complexity of the debate. The distinction in this case is between the fact and the idea of technological determinism. The
status of the former is, of course, the question under debate. The latter, however, is a matter of belief
independent of the ontological status of the object of belief, and it is apparent that belief in technologi-
cal determinism functioned as an article of faith in nineteenth century America, typically under the
banner of Progress.

This important distinction between the (debatable) reality of technological determinism and techno-
logical determinism as an idea that is widely accepted recurs in the later essays by Rosalind Williams
and Leo Marx. Williams also links the notion of technological determinism to a faith in progress, but
she finds her link outside of the American context in the work of two French Enlightenment writers,
Turgot and Condorcet. Like their nineteenth century American heirs, Turgot and Condorcet believed
civilization would advance in step with technological and scientific progress. Whether or not this was
objectively true, it was subjectively believed, and more importantly, acted upon. “Ultimately,”
Williams concludes, “not machines but people create technological determinism.” And following
Mumford and Havel, Williams would have us ask whose interests are served by the proliferation of the
idea of technological determinism.

The semantic angle is pressed most forcefully by Bruce Bimber who identifies three accounts of
technological determinism: normative, nomological, and unintended consequences. In good analytic
philosophical style he then precisely defines “technological” and “determinism” in such a way that all
but nomological determinism fail to meet the definitional standard. The nomological account, of which
Heilbroner’s classic essay reprinted in this volume is representative, posits a law-like relationship be-
tween technological causes and social effects. Using the debate over technological determinism in the
thought of Karl Marx as a case study, Bimber concludes that a theory of history that would meet the
criteria of nomological determinism would be implausible. Thus, he urges that we clear the debate over
the social consequences of technology of the obfuscating language of determinism.

In the remaining essays dealing with the question of “technological determinism’s” status as histori-
cal reality, we find more parsing, defining, and categorizing. The usual pattern is to define two ex-
tremes and then offer a third mediating position. The essays by Thomas Hughes and Thomas Misa each
present a variation of this approach. Hughes seeks to stake out a position between technological deter-
minism on the one hand and social constructivism on the other. He finds both accounts ultimately inad-
equate even though each manages to grasp a part of the whole situation. As a mediating position,
Hughes offers the concept of “technological momentum.” By it Hughes seeks to identify the inertia that
complex technological systems develop over time. Hughes’ approach is essentially temporal. He finds
that the social constructivist approach best explains the behavior of young systems and the technologi-
cal determinist approach best explains the behavior of mature systems. “Technological momentum” of-
fers a more flexible model that is responsive to the evolution of systems over time.

If Hughes’ approach is essentially temporal, Misa’s account is in some sense spatial. He positions
his approach between “micro-level” and “macro-level” approaches to technological systems. The
claims made by macro-level analysis, technological determinism’s natural habitat, cannot be sustained
at the micro-level. But large trends and the social consequences of technological change remain invisi-
ble at the micro-level. Misa recommends “meso-level” analysis focused on institutions of mediating
scale situated “between the firm and the market or between the individual and the state.” At this level
Misa believes scholars are most likely to integrate the social shaping of technology with the technologi-
cal shaping of society.

Philip Scranton is likewise sensitive to matters of scale when he proposes that “totalizing deter-
minisms” be replaced by “local determinations.” In his view, the resolution of the dilemma lies in par-
ticularizing the object of analysis. Scranton’s essay is a reflection on matters of historiography. He
finds that master-narratives of progress and technological determinism have clouded the vision of his-
torians of technology to the contingent and particular. His sensibilities are essentially postmodern and
they include the rejection of grand narratives, the embrace of a “plurality of rationalities,” and a focus
on matters of power differentials. He urges an approach which shares a certain affinity with Clifford
Geertz’ “thick descriptions,” some of which might uncover local instances of technological determin-
ism, but many more which will not.

Perhaps the most pertinent consideration that arises from the essays described above, as well as
those that were not mentioned, is that the question of technological determinism is enormously com-
plex. Because of this it might seem pedantic to note areas that were left unexamined, but it is curious
that very little mention was made of what Walter Ong has labeled “technologies of the word.” These
technologies — writing, printing, and later electronic means of communication — influence human be-
ings at the most fundamental level, that of thought and expression. Whether they are finally endorsed or not, it seems negligent to omit mention of the work of Ong, his teacher Marshall McLuhan, or the
German theorist Friedrich Kittler who, among others, have each in their own way drawn attention to
the formative influence of communication technologies on individual consciousness and society.

Interaction with the work of practice theorists such as Pierre Bourdieu and Michel de Certeau might
have also added yet another rich layer of analysis by seeking to locate the nexus of technology and so-
ciety in the embodied rituals of everyday life which structure the lived experience of individuals. A
mere hint in this direction is offered, incidentally it would seem, by J. M. Staudenmaier in his closing
essay when he alludes to the “catechetical extremes of the Disney imagineers” in constructing the EP-
COT experience. In fact, technologically inflected catechetical instruction of habits and beliefs is on of-
fer throughout society, not only at EPCOT. Examining these technological dimensions of lived experi-
ence would seem to be yet another potential source for theorizing the question of technological deter-
minism.

Altogether, however, the essays collected by Marx and Smith offer an invaluable entry into the de-
bate over technological determinism and, through this debate, into the larger question of technology’s role in society, a question that is becoming more, not less, pressing.

December 2, 2011
5. Christianity and the History of Technology: A Review Essay
Introduction

Since the mid-twentieth century, there has been a sustained, if modest, scholarly conversation about
the relationship between technology and religion. Among scholars who have specifically addressed the
nature of this relationship, research has focused on the following set of concerns: religion’s role in de-
termining Western society’s posture toward the natural world, religion’s role in abetting technological
development, religion’s role in shaping Western attitudes toward “labor and labor’s tools,” and, more
recently, the use of religious language and categories to describe technology. The majority of these
studies focus almost exclusively on the European and North American context, and so “religion” amounts
to Christianity. Jacques Ellul, Lynn White, George Ovitt, Susan White, David Noble, and Bronislaw
Szerszynski have been among the more notable contributors to this conversation.

Jacques Ellul’s comments are the earliest, but they do not set the terms of the debate. That honor
falls to Lynn White who in his 1967 essay, “The Historical Roots of Our Ecological Crisis,” first pro-
posed that Europe’s relationship to technology had been distinctly shaped by the Christian worldview.
Scholars have repeatedly returned to further scrutinize this thesis and its ecological frame.

George Ovitt’s The Restoration of Perfection: Labor and Technology in Medieval Culture remains
the most thorough treatment of the nexus of questions generated by White’s essay. Ovitt concludes that
there is good reason to significantly qualify White’s claims.

Susan White’s study, Christian Worship and Technological Change, stands apart as a consideration
of technology’s influence on Christian liturgy.

David Noble builds on the work of White and Ovitt to offer an account of what he terms “the reli-
gion of technology” which amounts to a pervasive intermingling of religious concerns with the project
of technology as well as a tendency to link the quest for transcendence to technology.
Finally, Bronislaw Szerszynski’s Nature, Technology, and the Sacred offers a theoretically sophisti-
cated reconsideration of White’s argument which, with respect to the technological character of con-
temporary society, embraces Ellul’s diagnosis of technological society. Szerszynski also moves the ar-
gument out of the medieval period to argue that the really decisive transformations in the intellectual
and religious context of Europe’s technological history should be located in the Protestant Reforma-
tion.

Jacques Ellul: A-technical Christianity

Ellul’s The Technological Society was first published in French in 1954 and the first English transla-
tion appeared in 1964. Within the brief historical sketch Ellul provides of the evolution of technique, he
examines the relationship between Christianity and technology. In Ellul’s estimation, Christianity as it
was practiced through the late Medieval period was at best ambivalent to the advance of technology.
He recognizes that received opinion contrasts the Eastern religions, which were supposedly “passive,
fatalist, contemptuous of life and action” with Christianity, the religion of the West, which was suppos-
edly “active, conquering, turning nature to profit.”

Although this characterization was widely accepted, Ellul believed it to be in error. It both ignored
the real technical advances of Eastern civilizations and misunderstood the posture of Christianity to
technical development.

According to Ellul, the emergence of Christianity marked the “breakdown of Roman technique in
every area — on the level of organization as well as in the construction of cities, in industry, and in
transport.” In his view, Julian the Apostate, and later Gibbon, were not altogether mistaken in attribut-
ing the withering of the Empire to the rise of the Church. Following the collapse of the Roman Empire in
the west, a society emerged under the tutelage of Christianity, which Ellul characterized as “‘a-capital-
istic’ as well as ‘a-technical.’” In every sphere of Medieval culture save architecture Ellul sees “the
same nearly total absence of technique.”

Ellul goes on to challenge the two historical arguments employed by those who believed that Chris-
tianity “paved the way for technical development.” According to the first, Christianity’s suppression of
slavery gave impetus to the development of technology to relieve the miseries of manual labor. Ac-
cording to the second, Christianity’s disenchantment of the natural world removed metaphysical and
psychological obstacles to its technologically enabled exploitation. The former fails to account for the
impressive technical achievements of slave societies, and the latter, while valid to a certain extent, ig-
nores the other strictures Christian faith placed on technical activity, namely its other-worldly and as-
cetic tendencies. Additionally, Christianity subjected all activity to moral judgment. Accordingly, tech-
nical activity was bounded by non-technical considerations. It is within this “narrow compass” that cer-
tain technical advances were achieved and propagated by the monasteries.

Lynn White: Christianity and the Cultural Climate of Technological Advance

In his classic essay, “The Historical Roots of Our Ecological Crisis,” Lynn White, Jr. staked out a
position that departed from Ellul at almost every conceivable point. White begins by describing the gap
in technical achievement that opened up between Western Europe and both Islamic and Byzantine civi-
lizations to the east. This gap predated the “Scientific Revolution” of the sixteenth century and was
already evident by the late Middle Ages. Consequently, White turns to the Middle Ages to understand
the nature of Western technology.

Although White is at this stage in his career moving away from the single-factor approach to technological change, elements of the approach are still evident in “The Historical Roots of Our Ecological Crisis” in which he points to the introduction of the heavy plow as a catalyst for changing attitudes about
humanity’s relationship to nature. In White’s words, the heavy plow “attacked the land with such vio-
lence that cross-plowing was not needed.” White also notes that this new attitude of domination was
soon given pictorial expression in Frankish calendars which depicted man and nature in opposition
with man as master.

At this juncture in the essay, however, White shifts from a single technological factor analysis of so-
cial change to a consideration of the cultural influences that conditioned the development and deploy-
ment of technology in adversarial relationship to nature. White finds the Christian religion, as practiced
in Western Europe, to be the chief culprit. “Especially in its Western form,” White concludes, “Chris-
tianity is the most anthropocentric religion the world has seen.” After briefly recalling the well known
plot points and language of the creation narrative in the opening chapter of the book of Genesis, White
contrasts Christianity to ancient paganism and the Eastern religions and finds that Christianity “not
only established a dualism of man and nature but also insisted that it is God’s will that man exploit na-
ture for his proper ends.” Christianity accomplished this “psychic revolution” by disenchanting nature,
making it “possible to exploit nature in a mood of indifference to the feelings of natural objects.”

White, then, affirms the second argument Ellul dismissed in his analysis of the relationship between
Christianity and nature. He surmounts one of Ellul’s criticisms — that eastern branches of Christianity
did not yield the same relationship to nature and thus religion is not the key factor — by pointing to the
significant differences in theological outlook that characterized the activist Latin churches in the West
and the contemplative Greek churches of the East. Ellul had noted the difference, taking the Russian
Orthodox Church as his case in point, but he concluded that the difference must be cultural and not reli-
gious. While the cultural shaping of ancient Christianity should not be overlooked, the fact remains that
by the Middle Ages both Eastern and Western Christianity had taken on their distinct shape and were
now, as culturally inflected variations of the same religion, shaping the intellectual climate of their re-
spective societies.

Furthermore, White also strengthens the argument by pointing to the sacramental vision of eastern
Christianity. Nature existed as a system of signs to be read and through which God spoke to humanity.
This was, in White’s view, an “essentially artistic rather than scientific” view of nature. While the West
initially shared this sacramental vision, by the late Medieval period it had given way to a natural theol-
ogy more inclined to “read” nature by understanding the workings of nature rather than merely contem-
plating its appearance. (Bronislaw Szerszynski will later take up this semiotic argument in depth.)

The rhetorical frame of White’s article concerns itself with the sources of “the present ecological
crisis,” but the body of his argument addresses itself to another question: What accounts for the ad-
vance of Western technology beyond its civilizational rivals? White’s essay, while initially gesturing
toward a single-factor account of technological change, on the whole points toward a social factors ap-
proach focusing on Latin Christianity as the force driving the evolution of western European technol-
ogy. By so doing, it set the terms and became a key point of departure for subsequent discussion of reli-
gion’s relationship to technology. Most notably, it anchored the debate in the Middle Ages, it pointed
to the cultural significance of seemingly arcane theological distinctions, it identified Christianity as the
most important cultural factor driving technological activity in the West, and it linked the historical
question to environmental concerns.
Lynn White further developed his thesis in a long 1971 article, “Cultural Climates and
Technological Advance in the Middle Ages,” in which he directly set out to identify the sources of the
unprecedented “technological thrust of the medieval West.”

White begins by discussing medieval Europe’s propensity for borrowing and elaborating on tech-
nologies initially developed in other societies. The culture of medieval Europe “was unique in the re-
ceptivity of its climate to transplants” and this accounts in part for the vigor of medieval technological
output. However, this receptivity is itself in need of explanation and in response White reaffirms the
logic of “The Historical Roots of Our Ecological Crisis”:

“What a society does about technology is influenced by casual borrowings from other cultures, al-
though the extent and uses of these borrowings are reciprocally affected by attitudes toward technologi-
cal change. Fundamentally, however, such attitudes depend upon what people in a society think
about their personal relation to nature, their destiny, and how it is good to act. These are religious ques-
tions.”

In what follows, White amplifies and augments the lines of argument adumbrated in the earlier es-
say while also drawing out additional lines of evidence in support of his thesis. The seminal work of
medieval historian Ernst Benz, whose writing on the subject had appeared in Italian in 1964 and was
presented in English in 1966, is introduced into White’s argument for the first time. Benz’s study of
Zen Buddhism’s “anti-technological impulses” led him to locate Western Europe’s embrace of techno-
logical change in its religious outlook. He pointed to Christianity’s linear conception of history, its pre-
sentation of God as architect and potter, its theological affirmation of the goodness of material creation,
and its presupposition of the “intelligent craftsmanship” of the created order — all in his view unique
to Christianity — as the components of what White terms a “cultural climate” remarkably hospitable to
technological advance.

White affirms the contours of Benz’s analysis, but he finds room for improvement. Drawing on two
articles independently published in 1956, White once again points to the disenchantment of nature sup-
posedly accomplished by Christianity’s cultural triumph over ancient paganism. He also reaffirms and
further develops the importance of the distinction between Latin and Eastern Christianity. Here White
strengthens his earlier observations by drawing on iconographic and textual evidence.
Beginning shortly after the turn of the first Christian millennium, Western iconography depicts God
in the act of creation as a builder, master craftsman, and later a mechanic — it was a visual tradition
never adopted in the Eastern churches. Exegetically, White points to the Western and Eastern interpre-
tations of the story of Martha and Mary in Luke’s gospel. While a surface reading suggests an endorse-
ment of contemplation over activism, Latin interpreters, beginning with Augustine, go out of their way
to soften and even reverse the apparent critique of activism and labor.

With this White then draws in what will become another key locus of attention in subsequent discus-
sions of technology and Christianity: the attitude toward labor in the monastic orders, particularly the
Benedictines. White notes that in the Byzantine world, which, unlike the West, did not suffer a general
collapse of culture, the religious orders were not forced to bear the burden of sustaining all aspects of
civilization, secular and religious. In the West, however, following the collapse of Roman authority, the
religious orders, notably the Benedictines, found themselves in the position of performing both reli-
gious and secular duties, of uniting worship and labor. This commitment to labor and the mechanical
arts would, in White’s view, generate a uniquely religious impetus for the development of technology.

White supports his contention by drawing on the work of the pseudonymous Theophilus and Hugh
of Saint Victor. Theophilus’ work, dating from the early 12th century, provides an indispensable record
of the era’s technological knowledge while attesting to the religiously motivated technical innovation
that White takes to be characteristic of the age. Meanwhile, Theophilus’ contemporary, Hugh of Saint
Victor incorporated the mechanical arts into his influential classification of knowledge and the arts.
While the mechanical arts were accorded the lowest place in the hierarchical ranking of the arts, they
were nonetheless included and this was no small thing. Together, the work of Theophilus and Hugh
supported White’s thesis regarding Western Christianity’s role in shaping Europe’s technological surge
in the Middle Ages.

White concludes with one more corroborating piece of iconographic evidence. An illustration of
Psalm 63 in the Utrecht Psalter dating from the mid-ninth century features a confrontation between
King David and the Righteous and a much larger force of the ungodly. While the ungodly use a whet-
stone to sharpen their sword, the godly employ “the first crank recorded outside China to rotate the first
grindstone known anywhere.” Clearly, White concludes, “the artist is telling us that technological ad-
vance is God’s will.”
While “The Historical Roots of Our Ecological Crisis” is cited more often and is more frequently
taken as a point of departure, it is in “Cultural Climates and Technological Advance in the Middle
Ages” that White most persuasively argues the case for Christianity’s formative influence on the his-
tory of technology in Western society. With its inclusion of Ernst Benz’ research and its discussion of
Benedictine spirituality, this essay framed the agenda for subsequent research and discussion.

George Ovitt: Challenging the Lynn White Thesis

George Ovitt’s The Restoration of Perfection: Labor and Technology in Medieval Culture, pub-
lished in 1987, provides the first book length treatment of the major themes advanced in White’s thesis.
Ovitt, who in his preface pays appropriate academic homage to White, sets out to explore more fully
the claims made on behalf of Christian faith and practice in relation to medieval attitudes toward tech-
nology. Ovitt’s analysis is also deeply influenced by Max Weber and Lewis Mumford’s application of
Weberian insights to Benedictine monasticism. Finally, Ovitt also tests the alternative claims made by
Jacques Le Goff in his 1980 evaluation of medieval attitudes toward labor and technological develop-
ment. Against White’s thesis, Le Goff argued that shifting theological attitudes toward labor and tech-
nology followed upon rather than instigated transformations in economic and material conditions.

In addition to his evaluation of prior scholarship, Ovitt also contributes a chapter on the medieval
and Christian roots of the notion of progress. The notion of progress becomes another important se-
mantic net that catches significant elements of the relationship of religion and technology. Ovitt con-
cludes that early Christian writers, and their medieval heirs, acknowledged material progress and hu-
man sovereignty over nature, but these were linked inextricably to moral progress and “the striving for
sovereignty over the more intractable self.”

Notions of progress, in other words, were not yet, as they later would be, reduced to or considered
equivalent to technical progress. In this, as in countless other ways, the Western medieval church took
its cue from Augustine whose view of the mechanical arts can best be labeled as ambiguous.

Turning specifically to the claims advanced by White and Benz, Ovitt chooses to test them by con-
ducting an extensive review of the medieval period’s hexaemeral literature, i.e., commentaries on the
six days of creation. His review of this literature leads Ovitt to substantially qualify both Benz’s claims
regarding the image of God as craftsman and White’s suggestion that Christian theology sanctioned a
rapacious posture toward nature.

Ovitt finds that while there is ample evidence of the portrayals of God as craftsman in the hexae-
meral literature of the Latin Church, it is balanced by the continued portrayal of God as a detached and
transcendent Creator — a portrayal shared with the Eastern Church — which appears just as frequently
as the image of God as craftsman. Additionally, Ovitt suggests that when the image of God as crafts-
man is employed, it often suggests that as the finished products of the craftsman’s hand are often put to
uses contrary to the craftsman’s intentions, so too the work of God’s hands, namely humanity, often
rebels against the intentions of the Creator.

Regarding White’s thesis, Ovitt follows earlier critics in questioning whether, on the one hand,
White has adequately described the biblical data and its theological elaboration, and, on the other,
whether such a direct, causal connection could in any case be reasonably drawn between a religious
perspective and the cultural consensus. Ovitt’s research additionally suggests that the more common
depiction of humanity’s relationship to creation is better understood as one of custodianship or stew-
ardship rather than unbridled exploitation. Moreover, Ovitt points out that the idea that nature exists to
be used by human beings does not by itself explain the uses to which it is put. Ovitt approvingly cites
Mumford’s suggestion that it was not until Francis Bacon’s introduction of “‘a power-hungry technical
mentality’ that the West took off on the self-destructive course it now follows.”

Moreover, if we look for the “direct result of the medieval Christian ethic of domination” we find “a
long-lived tradition of craftsmanship and subsistence farming” — “democratic polytechnics,” in Mum-
ford’s terms, rather than “authoritarian megatechnics.”

The “really significant question for the histoiry [sic] of technology” in Ovitt’s view, “is not what
Christianity taught about the domination of nature, but what Christianity taught about the domination
of human beings by other human beings.”

And with this Ovitt turns to consider medieval social and education systems, specifically the prac-
tice of labor in the monastic orders and the place of the mechanical arts in medieval classifications of
knowledge. Regarding the former, Ovitt indeed finds that the monastic tradition, in both its eremitic
and cenobitic manifestations, upheld the fundamental dignity and spiritual usefulness of labor. More-
over, Ovitt finds the Rule of St. Benedict to be the most “effective reconciliation” of the spiritual needs
of the individual and the economic needs of the community. Most importantly, he concludes that con-
trary to the later logic of capitalism, the significance of labor did not reside in “its products or uses of
technology and invention quantitatively as a means of enhancing productivity,” rather it resided in the
“process of labor” which was aimed at personal holiness and communal self-sufficiency.

Ovitt reasonably concludes that a Weberian analysis of monasticism’s role in incubating incipient
capitalism as well as fostering a peculiarly Western enthusiasm for technology should be balanced by
the spirit of Augustine’s prayer — “O Lord, I am working hard in this field, and the field of my labors
is my own self” — which expressed the dominant medieval priorities.

In his survey of medieval classifications of knowledge, particularly that of Hugh of Saint Victor in
the twelfth century and Thomas Aquinas’ in the thirteenth, Ovitt finds a generally positive estimation
of the mechanical arts even while they retain the lowest rank among the various arts. In his estimation,
more than their admission into the theologians’ classifications of knowledge was needed to generate
the particular enthusiasm for technology that would come to distinguish Western society. What was
needed was the decoupling of “labor, and labor’s tools from the realm of the sacred and the control of
theologians” and a renewed interest in judging technology by its products rather than its effects on the
spiritual lives of its users.

This would be achieved, ironically, by the Gregorian Reform movement originating within the
church which sought to clarify the respective roles of the church and the state. In so doing, the church
sanctioned a three-fold division of society including those who ruled and fought, those who labored,
and those who prayed. This division tacitly endorsed the separation of labor from the purview of the
church and created the space for technology to evolve apart from moral and spiritual constraints or con-
siderations. Contrary to White, then, Ovitt suggests that Christianity’s chief contribution to the evolu-
tion of distinctly Western attitudes to technology may have been its stepping out of the way, as it were.

Ovitt’s study significantly complicates the historical arguments made by Benz and White as well as
Ellul’s earlier summary of Christianity’s relationship to technique. The total picture is on the whole
more ambiguous than the work of any of these scholars would lead readers to believe. Medieval atti-
tudes toward nature, labor, and the mechanical arts were on the whole positive but hardly enthusiastic.
More importantly, Ovitt convincingly showed that technical considerations were, as one might expect,
consistently subordinated to spiritual ends, as demonstrated by the Benedictines’ willingness to lay
aside labor when it became possible to commission a lesser order of lay brothers or even paid laborers
to perform the work necessitated by the community. Ovitt’s work has appropriately taken its place be-
side that of White as a major contribution to the scholarly exploration of the relationship of medieval
Christianity to Western technology. If we are to find the sources of western enthusiasm for technology
it is necessary to begin with medieval Christianity, but after Ovitt’s thorough work it is no longer possi-
ble to end there.

Susan White: Technology and Liturgy

In 1994, Cambridge scholar Susan J. White published a study entitled Christian Worship and Tech-
nological Change. White’s study reverses the direction of the inquiry by asking not about the effect of
Christianity on the evolution of technology, but about the consequences of technology for Christian
worship. In providing the rationale for her study, White cited the earlier work of Ellul, Lynn White,
Mumford, and Ovitt exploring the complex relationship between Christianity and technology. In her
most compelling chapter, White discusses the role of astronomical technology — the astrolabe, the al-
bion, and the rectangulus — in refining the Christian liturgical calendar, which had been in acknowl-
edged disarray as well as the role of the mechanical clock in recalibrating the rhythms of the liturgy.
On these points, however, White is mostly following the earlier work of Mumford and Lynn White.
Hence her observation of the contrast between Eastern Orthodoxy’s longstanding rejection of the mechanical clock and its quick embrace in the Western churches, a point also registered by Lynn White.

In a subsequent chapter, “Liturgy and Mechanization,” White takes an approach that resonates with
Ellul’s articulation of technique, noting how transformations in Christian liturgical practice during the
nineteenth and twentieth centuries were often inspired by the goals and logic of machine technology. In
her analogy, liturgy became a form of mechanistic technology.

Susan White’s work is useful for its reframing of the question regarding technology and Christianity
in light of technology’s influence on the religion. Her work is especially helpful for those who have
never considered the consequences of technological change on liturgical practice. She does not, how-
ever, advance the debate initiated by Lynn White regarding the relationship between Christianity and
western technology. In 1997, however, historian David F. Noble’s The Religion of Technology raised
the question again with renewed vigor.

David Noble: The Religion of Technology

Noble describes “a thousand-year old Western tradition in which the advance of the useful arts was
inspired by and grounded upon religious expectation.” Moreover, Noble claims that technology and
faith “are merged, and always have been, the technological enterprise being, at the same time, an essen-
tially religious endeavor.” Noble insists that this is not a metaphorical claim, but rather literally true:
“modern technology and religion have evolved together …, and as a result, the technological enterprise
has been and remains suffused with religious belief.”

In the end, however, it is difficult to isolate what exactly Noble has identified through his discussion
of monasticism, millenarianism, Baconian enthusiasm, Free Masonry, and the twentieth century tech-
nologies of transcendence including the atomic bomb, space flight, artificial intelligence, and biotech-
nology. Noble has, from one angle, uncovered the history of an alternative religion, the religion of
technology, which might be best understood as a Christian heresy with an immanentized eschatology.
He builds on the work of White, Benz, and Ovitt to argue for a major transformation in Christianity’s
view of the mechanical arts around the eleventh century, and, in his estimation, it was during this pe-
riod that technological progress came to be equated with progress toward the material restoration of the
created order. It was this faith in technological progress that would henceforth characterize the core of
the “religion of technology.”

From another angle, though, Noble has only shown that religious language and religious aspirations
are frequently associated with technology, particularly by the elite practitioners that are driving the de-
velopment and deployment of new technology. Any effort to arrive at a more clearly defined conclu-
sion would falter under close examination since Noble has brought together a remarkably diverse col-
lection of evidence and has not drawn very precise causal relationships. Instead, his method is to drive
home a general sense of technology and religion’s entanglement by citing as many plausible instances
of the claim as possible. Noble’s slippage between the use of “faith” and “religion” is indicative of the general nature of his claims.

It is also problematic that Noble’s borrowings from Benz, White, and Ovitt give no indication that
there is any underlying tension in their respective accounts of Christianity’s relationship with technol-
ogy. For example, although he is clearly familiar with Ovitt’s work, he approvingly cites White’s thesis
in “The Historical Roots of Our Ecological Crisis” without qualification.

Moreover, his deployment of theological terminology in the opening chapter does not inspire confi-
dence in his grasp of nuance in the relevant theological literature. But Noble is less interested in joining
that debate than he is in registering a more general complaint. Noble is clearly concerned about the reli-
gious hopes for transcendence that have been repeatedly attached to new technologies throughout the
history of Western society. In his view, this tendency clouds our ability to think clearly about technol-
ogy, a problem that grows all the more acute as our technologies evolve into more complex and potentially dangerous realities. In the end, we may say that Noble has established the presence of a religious
veneer that frequently appears on the surface of technology to influence its development and adoption
without clarifying or systematizing the varieties, sources, and consequences of that veneer.

Bronislaw Szerszynski: Nature, Technology, and the Sacred

In his 2005 book, Nature, Technology, and the Sacred, Bronislaw Szerszynski traces the evolving
understandings of nature and the sacred in order to provide an alternative account of secularization and
disenchantment, two processes in which technology has traditionally been assigned a critical role.

To begin with, he argues that the best way to understand the evolution of ideas about nature from
pre-modern to modern societies is, in fact, not as a process of secularization or disenchantment, but
rather as a process of depersonalization. The advent of Christianity did not, as Lynn White (among oth-
ers) suggested, simply empty nature of its metaphysical content, leaving it vulnerable to exploitation.
Rather, Christianity removed personal agencies that inhabited nature and replaced them with a semiotic
layer of meaning. Nature became alive, not with minor deities and spirits, but with meaning that could
be allegorically interpreted. This conception of nature as a religiously meaningful system of signs acted
as a brake on the potential exploitation of nature that White and others imagined following on the
Christian triumph over the pagan worldview.

However, the Protestant Reformation, in Szerszynski’s view, did lead to a situation like that imag-
ined by White when it collapsed all meanings and interpretations to the level of the literal and effec-
tively contained them in the text of the Bible. In this way, the semiotic cloak that had protected nature
was pulled back leaving it defenseless against the advances of science and technology.

The Reformation was decisive, but it did not act alone in reconfiguring the economy of the sacred in
relation to nature. In the late twelfth century, conceptions of nature were already beginning to shift ow-
ing to the growing influence of voluntarist theology that privileged the unrestrained will and power of
God over the potential intelligibility of the created order. Nonetheless, Szerszynski follows Ellul and
Ovitt against White in taking medieval Christianity’s attitude toward technology and nature to be
mostly ambivalent and not by itself the cause of the Western technological mindset. Even the Reforma-
tion, in Szerszynski’s view, did not entirely free technology from moral and religious constraints. The
beginnings of this rupture are best traced to Francis Bacon’s privileging of techne over episteme. From this point Szerszynski traces a straight line to the evolution of Ellul’s technological society.

At this juncture, Szerszynski introduces David Nye’s exploration of the technological sublime to re-
inforce his conclusion that by the nineteenth century technology commanded near religious reverence.
This religious reverence, however, “echoes” the earlier voluntarist transformation in the theological
conception of God: technology, like the voluntarist God, was to be “loved for itself, apart from its fitness for human life and purpose.”

Combined with the rise of bio-power documented by Michel Foucault, which focused narrowly on the preservation and governance of biological life, this meant that all transcendent frames of reference
that might have limited technology’s influence were now effectively displaced. As Noble noted, tech-
nology itself became the object of transcendent aspirations.

Closing Thoughts

Following Lynn White, the examination of Christianity’s relationship to technology has been firmly
anchored in the Medieval period. With the exception of Szerszynski, none of the authors under consid-
eration extensively explored the significance of the Reformation to the history of technology, and Szer-
szynski’s work was thin on historical sources. More work is needed on the Reformation and its rela-
tionship to technology. Of course, much work has already been done on the Reformation and the his-
tory of one particular technology, the printing press.

Research has thus far also been focused on the role of Christianity in the emergence of Western
technology. It seems that more careful and serious work ought to be done on the way that technology
has shaped Christianity. Susan White gestured in this direction, but her work was largely derivative of
the studies undertaken by Lynn White and Lewis Mumford and was focused on only a handful of illus-
trative cases.

Joseph Corn, whose work was not discussed above, has shown the importance of exploring reli-
gion’s role in the social construction of particular technologies. Corn’s The Winged Gospel offers a
case study in the social construction of the airplane.

Drawing on a wide array of sources including newspapers, popular and professional journals, mem-
oirs, and an assortment of varied cultural ephemera, Corn makes his case for the religious dimensions
of the airplane’s social construction. The airplane was initially greeted by Thomas-like doubt that de-
manded sight in order to believe, and was thereafter frequently described with language that appealed
to the supernatural — flight was a miracle. Additionally, the use of the airplane and its celebration was
often marked by ritual and ceremony that bore a striking resemblance to elements of Christian worship
and practice. Finally, attitudes toward the airplane took on the character of a “technological messian-
ism,” a faith in the power of the airplane to bring about a nearly eschatological reordering of society. In
the end, of course, these hopes were reined in by reality.

Similar narrow studies focused on particular technologies and the role religious belief played in their
social construction would strengthen our understanding of the relationship between technology and re-
ligion which thus far has been treated in mostly broad strokes.

Finally, religious studies can also provide a helpful set of categories for understanding the role of
technology in society, particularly given the pervasive interrelationship of technology and religion as
documented by Noble. Ritual studies, for example, may usefully augment our understanding of the in-
tersections of technology use and embodied practice. David Nye’s incorporation of Durkheimian cate-
gories in his study of the technological sublime provides a helpful model for what this interdisciplinary
exchange of methods can accomplish.

March 1, 2012

6. The World of Tomorrow, Inc.

“Man’s temples typify his concepts. I cherish the thought that America stands on the threshold of a
great awakening. The impulse which this Phantom City will give to American culture cannot be over-
estimated. The fact that such a wonder could rise in our midst is proof that the spirit is with us.”
– Journalist Fredrick F. Cook, writing of the 1893 Columbian Exposition in Chicago

The religion of technology was represented exceptionally well at the 1939 New York World’s Fair.
In fact, in the 1939 fair with its “World of Tomorrow” theme, the techno-utopian message of the reli-
gion of technology may have found its most compelling medium. Prior to 1939, the American world’s
fairs had always been characterized by what Astrid Böger aptly called a “bifocal nature,” that is they
“served as patriotic commemorations of central events in American history even as they envisioned the
nation’s bright future.” Janus-faced, they looked back on a glorified past and forward toward an ideal-
ized future. The fairs of the 1930s, however, consciously focused their vision on the future. It is true that a glance was still cast backwards – the ’39 fair, for instance, commemorated the 150th anniversary of George Washington’s inauguration – but the emphasis was clearly on the wonders that lay ahead.

The ’39 New York fair, in particular, was explicitly eschatological. Its most popular exhibits fea-
tured Cities of Tomorrow, Zions that were to be realized through technological expertise deployed by
corporate power supported by benign government planning. And little wonder: the nation had been
through a decade of economic depression and rumors of war swept across the Atlantic. “To catch the
public imagination,” historian David Nye explains, “the fair had to address this uneasiness. It could not
do so by mere appeals to patriotism, by displays of goods that many people had no money to buy, or by
the nostalgic evocation of golden yesterdays. It had to offer temporary transcendence.” And by the late
1930s, technology appeared to be on the verge of delivering on this promise. “Earlier world’s fairs, in
which science had not played so great a role, had also been conceived in utopian spirit,” noted Folke T.
Kihlstedt, “but not until the 1930s did science and technology seem to possess the potential for the ac-
tualization of a utopian vision.”

While engineers had achieved a place among the clergy of the religion of technology during the late
nineteenth century, by the 1930s they had been displaced by the industrial designer who, in Kihlstedt’s
phrasing, “quickly became the chief promoter of a utopian future served by the products of technol-
ogy.” The industrial designer “looked not with the pragmatic eye of the engineer but with the visionary
gaze of the utopian.” This “visionary gaze” and the attention to the affective dimension of technology
made the industrial designer the ideal prophet of the religion of technology.

The planners of the 1939 New York fair instructed the industrial designers to weave technology
throughout the fabric of the whole fair. In previous expositions, science had occupied a prominent but
localized place among the multiple exhibits. The 1939 fair intentionally broke with this tradition. “In-
stead of building a central shrine to house scientific displays,” Robert Rydell explains, “they decided to
saturate the fair with the gospel of scientific idealism by highlighting the importance of industrial labo-
ratories in exhibit buildings devoted to specific industries.”

With nearly a decade of economic depression behind them and a looming international conflagration
before them, the fair planners remained committed to the religion of technology and they were intent
on creating a fair that would rekindle America’s waning faith. It may not be entirely inappropriate,
then, to see the 1939 New York World’s Fair as a revival meeting calling the faithful to repentance and
renewed hope in the religion of technology. But the call to renewed faith in 1939 also contained varia-
tions on the theme. The presentation of the religion of technology took a liturgical turn and it was al-
loyed with the spirit of the American corporation.

Ritual Fairs

Historians and critics of the world’s fair have mostly focused their attention on the intention of the
fair designers. They have studied the fairs as texts laid out for analysis. But it’s debatable whether this tells us much about the experience of fairgoers. Warren Susman, writing of the 1939 New York World’s Fair, concluded: “The Fair was not open for long before the people showed both the planners and the commercial interests how perverse they could be about following the arrangements so
carefully made for them.” Despite the best efforts of planners, “the people proceeded on its own way.”

Yet for all of this, the fairs were making an impression on fairgoers and Astrid Böger suggests a
way of understanding that impression: “world’s fairs are performative events in that they present a vi-
sion of national culture in the form of spectacle, which visitors are invited to participate in and, thus,
help create.” Writing of the Ferris Wheel at the Columbian Exposition in Chicago, Böger explains that
it was the “striking example of the sensual – primarily visual – experience of the fair, which seems to
precede both understanding of the exhibit’s technology and, more importantly, appreciation of it as an
American achievement.”

What Böger homes in on in these observations is the distinction between the intellectual content of
the fairs as intended by the fair planners and the actual experience of the fairs by those who attended. It
is the difference between reading the fairs as a “text” with an explicit message and constructing a
meaning through the experience of “taking in” the fair. The planners intended an intellectualized,
chiefly cognitive experience. Fairgoers processed the fair in an embodied and mostly affective manner.
It is this distinction that leads to the observation that the religion of technology, as it appeared at the
fairs, was a liturgical religion. In his articulation of the religion of technology, Noble emphasized the
explicit and the propositional. His focus was on belief and theology. But the fairs suggest other dimen-
sions of the religion of technology, practice and ritual.

The particular genius of the 1939 New York World’s Fair lay in the manner in which the two most
popular exhibits blended their explicit message with a ritual experience. Democracity, housed inside
the Perisphere, and General Motors’ Futurama both solved the problem of the impertinent walkers by
miniaturizing the idealized world and carefully controlling the fairgoer’s experience of the miniaturized
environment. Earlier fairs sought to present themselves as idealized cities, but this risked the diffusion
of the message as fairgoers crafted their own fair itineraries or otherwise remained oblivious to the im-
plicit messages. Democracity and the Futurama mitigated this risk by crafting not only the world, but
the experience itself – by providing a liturgy for the ritual. And the ritual was decidedly aimed at the
cultivation of hope in a future techno-utopian society, which is to say it gave ritual expression to the re-
ligion of technology.

As David Nye observed, “the most successful [exhibits] were those that took the form of dramas
with covertly religious overtones.” In fact, Nye describes the fair as a whole as “a quasi-religious expe-
rience of escape into an ideal future equally accessible to all … The fair was a shrine of
modernity.” Nowhere was the “quasi-religious” aspect of the fair more clearly evident than in Democ-
racity, the miniature city of the future housed within the fair’s iconic Perisphere.

Fairgoers filed into the sphere and were able to gaze down upon the city of the future from two balconies. When the five-and-a-half-minute show began, a narrator described the features of this idealized landscape with the city of the future at its center. Emanating outward from the central
city were towns and farm country. The towns would each be devoted to specific industries and they
would be home to both workers and management. As the show progressed and the narrator extolled the
virtues of central planning, the lighting in the sphere simulated the passage of day and night. Nye sum-
marizes what followed:

“Once the visitors had contemplated this future world, they were presented with a powerful vision
that one commentator compared to ‘a secular apocalypse.’ Now the lights of the city dimmed. To cre-
ate a devotional mood, a thousand-voice choir sang on a recording that André Kostelanetz had prepared
for the display. Movies projected on the upper walls of the globe showed representatives of various
professions working, marching, and singing together. The authoritative voice of the radio announcer H.
V. Kaltenborn announced: ‘This march of men and women, singing their triumph, is the true symbol of
the World of Tomorrow.’”

What they sang was the theme song of the fair that proclaimed:

“We’re the rising tide coming from far and wide
Marching side by side on our way,
For a brave new world,
That we shall build today.”

Kihlstedt suggests Democracity’s designer, Henry Dreyfuss, modeled this culminating scene on
Flemish painter Jan van Eyck’s Ghent Altarpiece, featuring “a great multitude … of all na-
tions and kindreds, and people” as described in the book of Revelation. “In this well-known painting,”
Kihlstedt explains, “the saints converge toward the altar of the Lamb from the four corners of the
world. As they reveal the unity and the ‘ultimate beatitude of all believing souls,’ these saints define by
their presence a heaven on earth.” Ritual and interpretation were thus fused together in one visceral, af-
fective liturgy. Each visitor experienced a nearly identical presentation, and many did so repeatedly.
The message was both explicit and memorable.

Corporate Liturgies

Earlier fairs were driven by a variety of ideologies. Rydell in particular has emphasized the imperial
and racial ideologies driving the design of the Victorian Era fairs. These fairs also promoted political
ideals and patriotism. Additionally, they sought to educate the public in the latest scientific trends (dubious as some were, as in the case of Social Darwinism). But in the 1930s the emphasis shifted decisively. Böger notes, for example, “the early American expositions have to be placed in the context of na-
tionalism and imperialism, whereas the world’s fairs after 1915 went in the direction of globalism and
the ensuing competition of opposing ideological systems rather than of individual nation states.” More
specifically the fairs of the 1930s, and the 1939 fair especially, aimed to buttress the legitimacy of
democracy and the free market in the face of totalitarian and socialist alternatives.

“From the beginning,” Rydell observes, “the century-of-progress expositions were conceived as fes-
tivals of American corporate power that would put breathtaking amounts of surplus capital to work in
the field of cultural production and ideological representation.” Kihlstedt likewise notes, “whereas
most nineteenth-century utopias were socialist, based on cooperative production and distribution of
goods, the twentieth-century fairs suggested that utopia would be attained through corporate capitalism
and the individual freedom associated with it.” He added, “the organizers of the NYWF were making
quasi-propagandistic use of utopian ideas and imagery to equate utopia with capitalism.” For his part,
Nye drew on Roland Marchand to connect the evolution of the world’s fairs with the development of
corporate marketing strategies: “corporations first tried only to sell products, then tried to educate the
public about their business, and finally turned to marketing visions of the future.” Interestingly, Nye
also tied the ritual nature of the fairs with the corporate turn: “Such exhibits might be compared to the
sacred places of tribal societies … Each inscribed cultural meanings in ritual … And who but the cor-
porations took the role of the ritual elders in making possible such a reassuring future, in exchange for
submission.”

In this way the religion of technology was effectively incorporated. American corporations pre-
sented themselves as the builders of the techno-utopian city. With the cooperation of government agen-
cies, the corporations would wield the breathtaking power of technology to create a perfect, rationally
planned and yet democratic consumer society. Thus was the religion of technology enlisted by the mar-
keting departments of American corporations.

The major American world’s fairs functioned as microcosms of American society. At the fairs, the
ideals of cultural, political, and economic elites were put on display. These ideals were anchored in a
mythic past and projected in an equally mythic future. The fairs not only reflected the ideals of Ameri-
can elites, they also registered an indelible impression on the millions of Americans who attended. The
precise influence of the fairs on American society, however, remains difficult to measure. Yet framing the 1939 New York World’s Fair within the larger story of the religion of technology
reveals the emergence of a powerful alliance of technology, religious aspirations, and corporate power.
This alliance was certainly taking shape before 1939, but at the New York fair it announced itself in
memorable and decisive fashion. Through the careful deployment of an imaginative liturgical experi-
ence, the fair instilled the virtues of this alliance in a generation of Americans. This generation would
go on to build a society that, for better and for worse, reflected the triumph of the incorporated religion
of technology.

May 11, 2012


7. Technology and Perception

Ubiquitous realities tend to fade from view. They are, paradoxically, too pervasive to be noticed. It
is these very realities, hiding in front of our noses, as the cliché has it, which most profoundly shape our
experience. And very often these ever-present, unnoticed realities are technological realities.

With a little help from Maurice Merleau-Ponty, we can unpack at least one of the ways in which
certain technologies fade from view while simultaneously shaping our perception, and Philip Brey’s
“Technology and Embodiment in Ihde and Merleau-Ponty” will guide our approach to Merleau-Ponty.

The purpose of Brey’s article is to supplement and shore up certain categories developed by the
philosopher of technology, Don Ihde. To do so, Brey traces certain illustrations used by Ihde back to
their source in Merleau-Ponty’s Phenomenology of Perception.

Ihde sought to create a taxonomy that categorized a limited set of ways humans interacted with tech-
nology, and among his categories was one he termed “embodiment relations.” Ihde defined embodi-
ment relations as those in which a technology mediates an individual’s perception of the world and
gives a series of examples including glasses, telescopes, hearing aids, and a blind man’s cane. An inter-
esting feature of each of these technologies is that they “withdraw” from view when their use becomes
habitual. Ihde lists other examples, however, which Brey finds problematic as exemplars of the cate-
gory. These include the hammer and a feathered hat.

(The example of the feathered hat is drawn from Merleau-Ponty. As a lady wearing a feathered hat
makes her way about, she interacts with her surroundings in light of this feature that amounts to an ex-
tension of her body.)

In both cases, Brey believes the example is less about perception (although it can be involved) and
more about action. Consequently, Brey offers some further distinctions to better get at the kinds of rela-
tions Ihde was attempting to classify. He begins by dividing embodiment relations into relations that
mediate perception and those that mediate motor skills.

Brey goes on to make further distinctions among the kinds of embodiment relations that mediate
motor skills. Some of these involve navigational skills and tend to be of the sort that “enlarge” one’s
body. The feathered hat fits into this category, as do other items, such as a worn backpack, that require users to incorporate the object into their body schema in such a way that they pre-consciously navigate as if the object were a part of their body. Then there are embodiment relations that mediate motor
skills in interaction with the environment. The hammer fits into this category. These objects become
part of our body schema in order to extend our action in the world.

These clarifications and distinctions are helpful, and Brey is right to distinguish between embodi-
ment relations geared toward perception and those geared toward action. But he is also right to point
out that even those tools that are geared toward action involve perception to some degree. While a
hammer is used primarily to mediate action, it also mediates perception. For example, a hammer strike
reveals something about the surface struck.

Yet, Brey believes that in this class of embodiment relations the perceptual function is “subordinate”
to the motor function. This is probably a sound conclusion, but it does not seem to take into account a
more subtle way in which perception comes into play. Elsewhere, I’ve written about the manner in
which technology in-hand affects our perception of the world not only by offering sensory feedback,
but also by shaping our interpretive acts of perception, our seeing-as. I won’t rehash that argument
here; instead I want to focus on the withdrawing character of technologies through which we perceive.

The sorts of tools that mediate perception ordinarily do so while they themselves recede from view.
Summarizing Ihde’s discussion of embodiment relations, Brey offers the following description of the
phenomenon:

“In embodiment relations, the embodied technology does not, or hardly, become itself an object
of perception. Rather, it “withdraws” and serves as a (partially) transparent means through
which one perceives one’s environment, thus engendering a partial symbiosis of oneself and it.”

Consider the eye a paradigmatic example. We see all things through it, but we never see it (unless,
of course, in a mirror). This is a function of what Michael Polanyi has called the “from-to” character of
perception. Our intentionality is directed from our body outward to the world. “The bodily processes
hide,” Mark Johnson explains, “in order to make possible our fluid, automatic experiencing of the
world.”

The technologies that we take into an embodied relation do not ordinarily achieve quite so complete
a withdrawal, but they do ordinarily fade from our awareness as objects in themselves. Contact lenses,
for example, or the blind man’s cane. In fact, almost any tool of which we become expert users tends to
withdraw as an object in its own right in order to facilitate our perception or our action.

In a short essay titled “Meditation in a Toolshed,” C. S. Lewis offers an excellent illustration of this dynamic. Granted, he was offering an illustration of a different phenomenon, but I think it fits here as
well. Lewis described entering into a dark toolshed and seeing before him a shaft of light coming in
through a crack above the door. At that moment Lewis “was seeing the beam, not seeing things by it.”
But then he stepped into the beam:

“Instantly the whole previous picture vanished. I saw no toolshed, and (above all) no beam. In-
stead I saw, framed in the irregular cranny at the top of the door, green leaves moving on the
branches of a tree outside and beyond that, 90 odd million miles away, the sun. Looking along
the beam, and looking at the beam are very different experiences.”

Notice his emphasis on the manner in which the beam itself disappears from view when one sees
through it. That through which we perceive ceases to be an object that we perceive. Returning to where
we began, then, we might say that one manner in which a technology becomes too pervasive to be no-
ticed is by becoming that by which we perceive the world or some aspect of it.

It is easiest to recognize the dynamic at work in objects that are specifically designed to enhance our
physical senses — eyeglasses, for example. But the principle may be expanded further (even if the me-
chanics shift) to include other less obvious ways we perceive through technology. The whole of Mar-
shall McLuhan’s work, in fact, could be seen as an attempt to understand how all technology is media
technology that alters perception. In other words, all technology mediates reality in some fashion, but
the mediating function withdraws from view because it is that through which we perceive the content.
It is the beam of light into which we step to perceive some other thing and, as with the beam, it remains
unseen even while it enables and shapes our seeing.

June 24, 2012

8. What Motivates the Tech Critic

Some time ago, I confessed my deeply rooted Arcadian disposition. I added, “The Arcadian is the
critic of technology, the one whose first instinct is to mourn what is lost rather than celebrate what is
gained.” This phrase prompted a reader to suggest that the critic of technology is preferably neither an
Arcadian nor a Utopian. This better sort of critic, he wrote, “doesn’t ‘mourn what is lost’ but rather
seeks an understanding of how the present arrived from the past and what it means for the future.” The
reader also referenced an essay by the philosopher of technology Don Ihde in which Ihde reflected on
the role of the critic of technology by analogy to the literary critic or the art critic. The comment trig-
gered a series of questions in my mind: What exactly makes for a good critic of technology? What
stance, if any, is appropriate to the critic of technology toward technology? Can the good critic mourn?

First, let me reiterate what I’ve written elsewhere: Neither unbridled optimism nor thoughtless pes-
simism regarding technology fosters the sort of critical distance required to live wisely with technology.
I stand by that.

Secondly, it is worth asking, what exactly does a critic of technology criticize? The objects of criti-
cism are rather straightforward when we think of the food critic, the art critic, the music critic, the film
critic, and so on. But what about the critic of technology? The trouble here, of course, stems from the
challenge of defining technology. More often than not the word suggests the gadgets with which we
surround ourselves. A little more reflection brings to mind a variety of different sorts of technologies:
communication, military, transportation, energy, medical, agricultural, etc. The wheel, the factory, the
power grid, the pen, the iPhone, the hammer, the space station, the water wheel, the plow, the sword,
the ICBM, the film projector – it is a Procrustean concept indeed that can accommodate all of this.
What does it mean to be a critic of a field that includes such a diverse set of artifacts and systems?

I’m not entirely sure; let’s say, for present purposes, that critics of technology find their niche within
certain subsets of the set that includes all of the above. The more interesting question, to me, is this:
What does the critic love?*

If we think of all of the other sorts of critics, it seems reasonable to suppose that they are driven by a
love for the objects and practices they criticize. The music critic loves music. The film critic loves film.
The food critic loves food. (We might also grant that a certain variety of critic probably loves nothing
so much as the sound of their own writing.) But does the technology critic love technology? Some of
the best critics of technology have seemed to love technology not at all. What do we make of that?

What does the critic of technology love that is analogous to the love of the music critic for music,
the food critic for food, etc.? Or does the critic of technology love anything at all in this way? Ihde
seems not to think so when he writes that, unlike other sorts of critics, the critic of technology does not
become so because they are “passionate” about the object of criticism.

Perhaps there is something about the instrumental character of technology that makes it difficult to
complete the analogy. Music, art, literature, food, film – each of these requires technology of some
sort. There are exceptions: dance and voice, for example. But for the most part, technology is involved
in the creation of the works that are the objects of criticism. The pen, the flute, the camera – these tools
are essential, but they are also subordinate to the finished works that they yield. The musician loves the
instrument for the sake of the music that it allows them to play. It would be odd indeed if a musician were
to tell us that he loves his instrument, but is rather indifferent to the music itself. And this is our clue.
The critic of technology is a critic of artifacts and systems that are always for the sake of something
else. The critic of technology does not love technology because technology rarely exists for its own
sake. Ihde is right in saying that the critic of technology is not, and in fact should not be, passionate about
the object of their criticism. But it doesn’t necessarily follow that no passion at all motivates their
work.

So what does the critic of technology love? Perhaps it is the environment. Perhaps it is an ideal of
community or friendship. Perhaps it is an ideal civil society. Perhaps it is health and vitality. Perhaps it
is sound education. Perhaps liberty. Perhaps joy. Perhaps a particular vision of human flourishing. The
critic of technology is animated by a love for something other than the technology itself. Returning to
where we began, I would suggest that the critic may indeed mourn just as they may celebrate. They
may do either to the degree that their critical work reveals technology’s complicity in either the de-
struction or promotion of that which they love.

––
Criticism of technology, if it moves beyond something like mere description and analysis, implies
making what amount to moral and ethical judgments. The critic of technology, if they reach conclu-
sions about the consequences of technology for the lives of individual persons and the health of institu-
tions and communities, will be doing work that rests on ethical principles and carries ethical implica-
tions.

In this they are not altogether unlike the music critic or the literary critic, who is expected to make judgments about the merits of a work of art given the established standards of their field. These standards
take shape within an institutionalized tradition of criticism. Likewise, the critic of technology — if they
move beyond questions such as “Does this technology work?” or “How does this technology work?” to
questions such as “What are the social consequences of this technology?” — is implicated in judgments
of value and worth.

But according to what standards and from within which tradition? Not the standards of
“technology,” if such could even be delineated, because these would merely consider efficiency and
functionality (although even these are not exactly “value neutral”). It was, for example, a refusal to
evaluate technology on its own terms that characterized the vigorous critical work of the late Jacques
Ellul. As Ellul saw it, technology had achieved its nearly autonomous position in society because it was
shielded from substantive criticism — criticism, that is, which refused to evaluate technology by its
own standards. The critic of technology, then, proceeds with an evaluative framework that is indepen-
dent of the logic of “technoscience,” as philosopher Don Ihde called it, and so they become an outsider
to the field.

__________________

“The contrast between art and literary criticism and what I shall call ‘technoscience criticism’ is
marked. Few would call art or literary critics ‘anti-art’ or ‘anti-literature’ in the working out, however
critically, of their products. And while it may indeed be true that given works of art or given texts are
excoriated, demeaned, or severely dealt with, one does not usually think of the critic as generically
‘anti-art’ or ‘anti-literature.’ Rather, it is precisely because the critic is passionate about his or her
subject matter that he or she becomes a ‘critic.’ That is simply not the case with science or techno-
science criticism …. The critic—as I shall show below—is either regarded as an outsider, or if the crit-
icism arises from the inside, is soon made to be a quasi-outsider.”
___________________

The libertarian critic, the Marxist critic, the Roman Catholic critic, the posthumanist critic, and so
on — each advances their criticism of technology informed by their ethical commitments. Their criti-
cism of technology flows from their loves. Each criticizes technology according to the larger moral and
ethical framework implied by the movements, philosophies, and institutions that have shaped their
identity. And, of course, so it must be. We are limited beings whose knowledge is always situated
within particular contexts. There is no avoiding this, and there is nothing particularly undesirable about
this state of affairs. The best critics will be aware of their own commitments and work hard to sympa-
thetically entertain divergent perspectives. They will also work patiently and diligently to understand a
given technology before reaching conclusions about its moral and ethical consequences. But I suspect
this work of understanding, precisely because it can be demanding, is typically driven by some deeper
commitment that lends urgency and passion to the critic’s work.

Such underlying commitments are often veiled within certain rhetorical contexts that demand as
much, the academy for example. But debates about the merits of technology might be more fruitful if
the participants acknowledged the tacit ethical frameworks that underlie the positions they stake out.
This is because, in such cases, the technology in question is only a proxy for something else — the ob-
ject of the critic’s love.

March 20, 2013


9. What Are We Talking About When We Talk About
Technology?

I’ve been of two minds with regard to the usefulness of the word technology. One of those two
minds has been more or less persuaded that the term is of limited value and, worse still, that it is posi-
tively detrimental to our understanding of the reality it ostensibly labels. The most thorough case for
this position is laid out in a 2010 article by the historian of technology Leo Marx, “Technology: The
Emergence of a Hazardous Concept.”

Marx worried that the term technology was “peculiarly susceptible to reification.” The problem with
a reified phenomenon is that it acquires “a ‘phantom-objectivity,’ an autonomy that seems so strictly ra-
tional and all-embracing as to conceal every trace of its fundamental nature: the relation between peo-
ple.” This false aura of autonomy leads in turn to “hackneyed vignettes of technologically activated so-
cial change—pithy accounts of ‘the direction technology is taking us’ or ‘changing our lives.’” Accord-
ing to Marx, such accounts are not only misleading, they are also irresponsible. By investing “technol-
ogy” with causal power, they distract us from “the human (especially socioeconomic and political) re-
lations responsible for precipitating this social upheaval.” It is these relations, after all, that “largely de-
termine who uses [technologies] and for what purposes.” And, it is the human use of technology that
makes all the difference, because, as Marx puts it, “Technology, as such, makes nothing happen.”[1]

As you might imagine, I find that Marx’s point complements a critique of what I’ve called Borg
Complex rhetoric. It’s easier to refuse responsibility for technological change when we can attribute it
to some fuzzy, inchoate idea of technology, or worse, what technology wants. The latter phrase is the
title of a book by Kevin Kelly, and it may be the best example on offer of the problem Marx was com-
batting in his article.

But … I don’t necessarily find that term altogether useless or hazardous. For instance, some time
ago I wrote the following:

“Speaking of online and offline and also the Internet or technology – definitions can be elusive. A
lot of time and effort has been and continues to be spent trying to delineate the precise referent for
these terms. But what if we took a lesson from Wittgenstein? Crudely speaking, Wittgenstein came to
believe that meaning was a function of use (in many, but not all cases). Instead of trying to fix an exter-
nal referent for these terms and then call out those who do not use the term as we have decided it must
be used or not used, perhaps we should, as Wittgenstein put it, ‘look and see’ the diversity of uses to
which the words are meaningfully put in ordinary conversation. I understand the impulse to demystify
terms, such as technology, whose elasticity allows for a great deal of confusion and obfuscation. But
perhaps we ought also to allow that even when these terms are being used without analytic precision,
they are still conveying sense.”

As you know from previous posts, I’ve been working through Langdon Winner’s Autonomous
Technology (1977). It was with a modicum of smug satisfaction, because I’m not above such things,
that I read the following in Winner’s Introduction:

“There is, of course, nothing unusual in the discovery that an important term is ambiguous or
imprecise or that it covers a wide diversity of situation. Wittgenstein’s discussion of ‘language
games’ and ‘family resemblances’ in Philosophical Investigations illustrates how frequently this
occurs in ordinary language. For many of our most important concepts, it is futile to look for a
common element in the phenomena to which the concept refers. ‘Look and see and whether
there is anything common to all.–For if you look at them you will not see something that is
common to all, but similarities, relationships, and a whole series of them at that.’”

Writing in the late ’70s, Winner claimed, “Technology is a word whose time has come.” After a pe-
riod of relative neglect or disinterest, “Social scientists, politicians, bureaucrats, corporate managers,
radical students, as well as natural scientists and engineers, are now united in the conclusion that some-
thing we call ‘technology’ lies at the core of what is most troublesome in the condition of our world.”

To illustrate, Winner cites Allen Ginsberg — “Ourselves caught in the giant machine are condi-
tioned to its terms, only holy vision or technological catastrophe or revolution break ‘the mind-forg’d
manacles.’” — and the Black Panthers: “The spirit of the people is greater than the man’s technology.”

For starters, this is a good reminder to us that we are not the first generation to wrestle with the
place of technology in our personal lives and in society at large. Winner was writing almost forty years
ago, after all. And Winner rightly points out that his generation was not the first to worry about such
matters either: “We are now faced with an odd situation in which one observer after another ‘discovers’
technology and announces it to the world as something new. The fact is, of course, that there is nothing
novel about technics, technological change, or advanced technological societies.”

While he thinks that technology is a word “whose time has come,” he is not unaware of the sorts of
criticisms articulated by Leo Marx. These criticisms had then been made of the manner in which
Jacques Ellul defined technology, or, more precisely, la technique: “the totality of methods rationally
arrived at and having absolute efficiency (for a given stage of development) in every field of human ac-
tivity.”

Against Ellul’s critics, Winner writes, “While Ellul’s addition of ‘absolute efficiency’ may cause us
difficulties, his notion of technique as the totality of rational methods closely corresponds to the
term technology as now used in everyday English. Ellul’s la technique and our technology both point to
a vast, diverse, ubiquitous totality that stands at the center of modern culture.”

It is at this point that Winner references Wittgenstein in the paragraph cited above. He then ac-
knowledges that the way in which technology tends to be used leads to the conclusion that “technology
is everything and everything is technology.” In other words, it “threatens to mean nothing.”

But Winner sees in this situation something of interest, and here is where I’m particularly inclined to
agree with him against critics like Leo Marx. Rather than seek to impose a fixed definition or banish
the term altogether, we should see in this situation “an interesting sign.” It should lead us to ask, “What
does the chaotic use of the term technology indicate to us?”

Here is how Winner answers that question: “[…] the confusion surrounding the concept ‘technol-
ogy’ is an indication of a kind of lag in public language, that is, a failure of both ordinary speech and
social scientific discourse to keep pace with the reality that needs to be discussed.” There may be a bet-
ter way, but “at present our concepts fail us.”

Winner follows with a brief discussion of the unthinking polarity into which discussions of technol-
ogy consequently fall: “discussion of the political implications of advanced technology have a tendency
to slide into a polarity of good versus evil.” We might add that this is not only a problem with discus-
sion of the political implications of advanced technology, it is also a problem with discussions of the
personal implications of advanced technology.

Winner adds that there “is no middle ground to discuss such things”; we encounter either “total af-
firmation” or “total denial.” In Winner’s experience, ambiguity and nuance are hard to come by and
any criticism, that is anything short of total embrace, meets with predictable responses: “You’re just us-
ing technology as a whipping boy,” or “You just want to stop progress and send us back to the Middle
Ages with peasants dancing on the green.”[2]

While it may not be as difficult to find more nuanced positions today, in part because of the sheer
quantity of easily accessible commentary, it still seems generally true that most popular discussions of
technology tend to fall into either the “love it” or “hate it” category.

In the end, it may be that Winner and Marx are not so far apart after all. While Winner is more toler-
ant of the use of technology and finds that, in fact, its use tells us something important about the not un-
reasonable anxieties of modern society, he also concludes that we need a better vocabulary with which
to discuss all that gets lumped under the idea of technology.

I’m reminded of Alan Jacobs’ oft-repeated invocation of Bernard Williams’s adage, “We suffer
from a poverty of concepts.” Indeed, indeed. It is this poverty of concepts that, in part, explains the
ease with which discussions of technology become mired in volatile love it or hate it exchanges. A
poverty of concepts short-circuits more reasonable discussion. Debate quickly morphs into acrimony because, in the absence of categories that might give reason a modest grip on the realities under consideration, the competing positions resolve into seemingly subjective expressions of personal preference
and, thus, criticism becomes offensive.[3]

So where does this leave us? For my part, I’m not quite prepared to abandon the word technology. If
nothing else it serves as a potentially useful Socratic point of entry: “So, what exactly do you mean
by technology?” It does, to be sure, possess a hazardous tendency. But let’s be honest, what alterna-
tives do we have left to us? Are we to name every leaf because speaking of leaves obscures the multi-
plicity and complexity of the phenomena?

That said, we cannot make do with technology alone. We should seek to remedy that poverty of our
concepts. Much depends on it.

Of course, the same conditions that led to the emergence of the more recent expansive and sometimes hazardous use of the word technology are those that make it so difficult to arrive at a richer, more useful vocabulary. Those conditions include, but are not limited to, the ever-expanding ubiquity and
complexity of our material apparatus and of the technological systems and networks in which we are
enmeshed. The force of these conditions was first felt in the wake of the industrial revolution and in the
ensuing 200 years it has only intensified.

To the scale and complexity of steam-powered industrial machinery was added the scale and com-
plexity of electrical systems, global networks of transportation, nuclear power, computers, digital de-
vices, the Internet, global financial markets, etc. To borrow a concept from political science, technical
innovation functions like a sort of ratchet effect. Scale and complexity are always torqued up, never re-
leased or diminished. And this makes it hard to understand this pervasive thing that we call technology.

For some time, through the early to mid-twentieth century, we outsourced this sort of understanding
to the expert and managerial class. The post-war period witnessed a loss of confidence in the experts
and managers, hence it yielded the heightened anxiety about technology that Winner registers in the
’70s. Nearly four decades later, we are still waiting for new and better forms of understanding.

February 14, 2014


10. Ten Points of Unsolicited Advice for Tech Writers

1. Don’t be a Borg. The development, deployment, and adoption of any given technology does not
unfold independently of human action.

2. Do not cite apparent historical parallels to contemporary concerns about technology as if they in-
validated those concerns. That people before us experienced similar problems does not mean that they
magically cease being problems today.

3. Do not deify technology or assign salvific powers to Technology.

4. When someone criticizes a specific technology without renouncing all other forms of technology,
they are not being hypocritical–they are thinking.

“I believe that you must appreciate technology just like art. You wouldn’t tell an art
connoisseur that he can’t prefer abstractionism to expressionism. To love is to choose. And today,
we’re losing this. Love has become an obligation.” (Paul Virilio)

5. Relatedly, the observation that human beings have always used technology is not a cogent
response to the criticism of particular technologies. The use of a pencil does not entail my endorsement
of genetic engineering.

6. Don’t grant technology independent or sufficient causal force. Consequences follow from the use
of technology, but causality is usually complex and distributed.

7. If you begin by claiming, hyperbolically, that a given technology is revolutionary, thereafter re-
sponding to critics by assuring them that nothing has changed is disingenuous at best. If something is
completely different, it can’t also be exactly the same.

8. It is banal to observe that a given technology may be used for both good and evil; this does not mean that the technology in question is neutral.

9. Use the word technology circumspectly. It can function as an abstraction harboring all sorts
of false assumptions and logical fallacies.

10. That people eventually acclimate to changes precipitated by the advent of a new technology does
not prove that the changes were inconsequential or benign.

March 30, 2014


11. Why A Life Made Easier By Technology May Not
Be Happier
Tim Wu, of the Columbia Law School, has been writing a series of reflections on technological evo-
lution for Elements, the New Yorker’s science and technology blog. In the first of these, “If a Time
Traveller Saw a Smartphone,” Wu offers what he calls a modified Turing test as a way of thinking
about the debate between advocates and critics of digital technology (perhaps, though, it’s more like
Searle’s Chinese room).

Imagine a time-traveller from 1914 (a fateful year) encountering a woman behind a veil. This
woman answers all sorts of questions about history and literature, understands a number of languages,
performs mathematical calculations with amazing rapidity, etc. To the time-traveller, the woman seems
to possess a nearly divine intelligence. Of course, as you’ve already figured out, she is simply consult-
ing a smartphone with an Internet connection.

Wu uses this hypothetical anecdote to conclude, “The time-traveller scenario demonstrates that how
you answer the question of whether we are getting smarter depends on how you classify ‘we.’ This is
why [Clive] Thompson and [Nicholas] Carr reach different results: Thompson is judging the cyborg,
while Carr is judging the man underneath.” And that’s not a bad way of characterizing the debate.

Wu closes his first piece by suggesting that our technological augmentation has not been secured
without incurring certain costs. In the second post in the series, Wu gives us a rather drastic case study
of the kind of costs that sometimes come with technological augmentation. He tells the story of the Oji-
Cree people, who until recently lived a rugged, austere life in northern Canada … then modern tech-
nologies showed up:

“Since the arrival of new technologies, the population has suffered a massive increase in morbid
obesity, heart disease, and Type 2 diabetes. Social problems are rampant: idleness, alcoholism,
drug addiction, and suicide have reached some of the highest levels on earth. Diabetes, in par-
ticular, has become so common (affecting forty per cent of the population) that researchers
think that many children, after exposure in the womb, are born with an increased predisposition
to the disease. Childhood obesity is widespread, and ten-year-olds sometimes appear mid-
dle-aged. Recently, the Chief of a small Oji-Cree community estimated that half of his adult
population was addicted to OxyContin or other painkillers.

Technology is not the only cause of these changes, but scientists have made clear that it is a
driving factor.”

Wu understands that this is an extreme case. Some may find that cause to dismiss the Oji-Cree as
outliers whose experience tells us very little about the way societies ordinarily adapt to the evolution of
technology. On the other hand, the story of the Oji-Cree may be like a time-lapse video which reveals
aspects of reality ordinarily veiled by their gradual unfolding. In any case, Wu takes the story as a
warning about the nature of technological evolution.

“Technological evolution” is, of course, a metaphor based on the processes of biological evolution.
Not everyone, however, sees it as a metaphor. Kevin Kelly, whom Wu cites in this second post, argues
that technological evolution is not a metaphor at all. Technology, in Kelly’s view, evolves precisely as
organisms do. Wu rightly recognizes that there are important differences between the two, however:

“Technological evolution has a different motive force. It is self-evolution, and it is therefore
driven by what we want as opposed to what is adaptive. In a market economy, it is even more
complex: for most of us, our technological identities are determined by what companies decide
to sell based on what they believe we, as consumers, will pay for.”

And this leads Wu to conclude, “Our will-to-comfort, combined with our technological powers, cre-
ates a stark possibility.” That possibility is a “future defined not by an evolution toward superintelli-
gence but by the absence of discomforts.” A future, Wu notes, that was neatly captured by the animated
film WALL•E.

Wu’s conclusion echoes some of the concerns I raised in an earlier reflection about the future envi-
sioned by the transhumanist project. It also anticipates the third post in the series, “The Problem With
Easy Technology.” In this latest post, Wu suggests that “the use of demanding technologies may actu-
ally be important to the future of the human race.”

Wu goes on to draw a distinction between demanding technologies and technologies of conve-
nience. Demanding technologies are characterized by the following: “technology that takes time to
master, whose usage is highly occupying, and whose operation includes some real risk of failure.” Con-
venience technologies, on the other hand, “require little concentrated effort and yield predictable re-
sults.”

Of course, convenience technologies don’t even deliver on their fundamental promise. Channelling
Ruth Cowan’s More Work for Mother: The Ironies of Household Technology from the Open Hearth
to the Microwave, Wu writes,

“The problem is that, as every individual task becomes easier, we demand much more of both
ourselves and others. Instead of fewer difficult tasks (writing several long letters) we are left
with a larger volume of small tasks (writing hundreds of e-mails). We have become plagued by
a tyranny of tiny tasks, individually simple but collectively oppressive.”

But, more importantly, Wu worries that technologies of convenience may rob our action “of the sat-
isfaction we hoped it might contain.” Toward the end of his post, he urges readers to “take seriously
our biological need to be challenged, or face the danger of evolving into creatures whose lives are more
productive but also less satisfying.”

I trust that I’ve done a decent job of faithfully capturing the crux of Wu’s argument in these three
pieces, but I encourage you to read all three in their entirety.

I also encourage you to read the work of Albert Borgmann. I’m not sure if Wu has read Borgmann
or not, but his discussion of demanding technologies was anticipated by Borgmann nearly 30 years ago
in Technology and the Character of Contemporary Life: A Philosophical Inquiry. What Wu calls de-
manding technology, Borgmann called focal things, and these entailed accompanying focal practices.
Wu’s technologies of convenience are instances of what Borgmann called the device paradigm.

In his work, Borgmann sought to reveal the underlying pattern that modern technologies exhibited–
Borgmann is thinking of technologies dating back roughly to the Industrial Revolution. The device par-
adigm was his name for the pattern that he discerned.

Borgmann arrived at the device paradigm by first formulating the notion of availability. Availability
is a characteristic of technology which answers to technology’s promise of liberation and enrichment.
Something is technologically available, Borgmann explains, “if it has been rendered instantaneous, ubiq-
uitous, safe, and easy.” At the heart of the device paradigm is the promise of increasing availability.

Borgmann goes on to distinguish between things and devices. While devices tend toward technolog-
ical availability, what things provide tend not to be instantaneous, ubiquitous, safe, or easy. The differ-
ence between a thing and a device is a matter of the sort of engagement that is required of the user. The
difference is such that user might not even be the best word to describe the person who interacts with a
thing. In another context, I’ve suggested that practitioner might be a better way of putting it, but that
does not always yield elegant phrasing.

A thing, Borgmann writes, “is inseparable from its context, namely, its world, and from our com-
merce with the thing and its world, namely, engagement.” And immediately thereafter, Borgmann
adds, “The experience of a thing is always and also a bodily and social engagement with the thing’s
world.” Bodily and social engagement–we’ll come back to that point later. But first, a concrete exam-
ple to help us better understand the distinctions and categories Borgmann is employing.

Borgmann invites us to consider how warmth might be made available to a home. Before central
heating, warmth might be provided by a stove or fireplace. This older way of providing warmth,
Borgmann reminds us, “was not instantaneous because in the morning a fire first had to be built in the
stove or fireplace. And before it could be built, trees had to be felled, logs had to be sawed and split,
the wood had to be hauled and stacked.” Borgmann continues:

“Warmth was not ubiquitous because some rooms remained unheated, and none was heated
evenly …. It was not entirely safe because one could get burned or set the house on fire. It was
not easy because work, some skills, and attention were constantly required to build and sustain
a fire.”

The contrasts at each of these points with central heating are obvious. Central heating illustrates the
device paradigm by the manner in which it secures the technological availability of warmth. It conceals
the machinery, the means we might say, while perfecting what Borgmann calls the commodity, the end.
Commodity is Borgmann’s word for “what a device is there for”; it is the end that the means are in-
tended to secure.

The device paradigm, remember, is a pattern that Borgmann sees unfolding across the modern tech-
nological spectrum. The evolution of modern technology is characterized by the progressive conceal-
ment of the machinery and the increasingly available commodity. “A commodity is truly available,”
Borgmann writes, “when it can be enjoyed as a mere end, unencumbered by means.” Flipping a switch
on a thermostat clearly illustrates this sort of commodious availability, particularly when contrasted
with earlier methods of providing warmth.

It’s important to note, too, what Borgmann is not doing. He is not distinguishing between the tech-
nological and the natural. Things can be technological. The stove is a kind of technology, after all, as is
the fireplace. Borgmann is distinguishing among technologies of various sorts, their operational logic,
and the sort of engagement that they require or invite. Nor, while we’re at it, is Borgmann suggesting
that modern technology has not improved the quality of life. There can be no human flourishing where
people are starving or dying of disease.

But, like Tim Wu, Borgmann does believe that the greater comfort and ease promised by technology
does not necessarily translate into greater satisfaction or happiness. There is a point at which the gains
made by technology stop yielding meaningful satisfaction. Wu believes this is so because of “our bio-
logical need to be challenged.” There’s certainly something to that. I made a similar argument some
time ago in opposing the idea of a frictionless life. Borgmann’s analysis, however, adds two more im-
portant considerations: bodily and social engagement.

“Physical engagement is not simply physical contact,” Borgmann explains, “but the experience of
the world through the manifold sensibility of the body.” He then adds, “sensibility is sharpened and
strengthened in skill … Skill, in turn, is bound up with social engagement.”

Consider again the example of the wood-burning stove or fireplace as a means of warmth. The more
intense physical engagement may be obvious, but Borgmann invites us to consider the social dimen-
sions as well:

“It was a focus, a hearth, a place that gathered the work and leisure of a family and gave the
house its center. Its coldness marked the morning, and the spreading of its warmth the begin-
ning of the day. It assigned to the different family members tasks that defined their place in the
household. The mother built the fire, the children kept the firebox filled, and the father cut the
firewood. It provided for the entire family a regular and bodily engagement with the rhythm of
the seasons that was woven together of the threat of cold and the solace of warmth, the smell of
wood smoke, the exertion of sawing and of carrying, the teaching of skills, and the fidelity to
daily tasks.”

Borgmann’s vision of a richer, more fulfilling life secures its greater depth by taking seriously both
our embodied and social status. This vision goes against the grain of modernity’s account of the human
person, which is grounded in a Cartesian dismissal of the body and a Lockean conception of autono-
mous individuality. To the degree that this is an inadequate account of the human person, a technologi-
cal order that is premised upon it will always undermine the possibility of human flourishing.

Wu and Borgmann have drawn our attention to what may be an important source of our discontent
with the regime of contemporary technology. As Wu points out in his third piece, the answer is not
necessarily an embrace of all things that are hard and arduous or a refusal of all the advantages that
modern technology has secured for us. Borgmann, too, is concerned with distinguishing between dif-
ferent kinds of troubles: those that we rightly seek to ameliorate in practice and in principle and those
we do well to accept in practice and in principle. Making that distinction will help us recognize and ap-
preciate what may be gained by engaging with what Borgmann has called the commanding presence of
focal things and what Wu calls demanding technologies.

Admittedly, that can be a challenging distinction to make, but learning to make that distinction may
be the better part of wisdom given the technological contours of contemporary life, at least for those
who have been privileged to enjoy the benefits of modern technology in affluent societies. And I’m of
the opinion that the work of Albert Borgmann is one of the more valuable resources available to us as
we seek to make sense of the challenges posed by the character of contemporary technology.

April 7, 2014
12. Humanist Technology Criticism

“Who are the humanists, and why do they dislike technology so much?”

That’s what Andrew McAfee wants to know. McAfee, formerly of Harvard Business School, is now
a researcher at MIT and the author, with Erik Brynjolfsson, of The Second Machine Age: Work,
Progress, and Prosperity in a Time of Brilliant Technologies. At his blog, hosted by the Financial
Times, McAfee expressed his curiosity about the use of the terms humanism or humanist in “critiques
of technological progress.” “I’m honestly not sure what they mean in this context,” McAfee admitted.

Humanism is a rather vague and contested term with a convoluted history, so McAfee asks a fair
question–even if his framing is rather slanted. I suspect that most of the critics he has in mind would
take issue with the second half of McAfee’s compound query. One of the examples he cites, after all, is
Jaron Lanier, who, whatever else we might say of him, can hardly be described as someone who “dis-
likes technology.”

That said, what response can we offer McAfee? It would be helpful to sketch a history of the net-
work of ideas that have been linked to the family of words that include humanism, humanist, and
the humanities. The journey would take us from the Greeks and the Romans, through (not excluding)
the medieval period to the Renaissance and beyond. But that would be a much larger project, and I
wouldn’t be your best guide. Suffice it to say that near the end of such a journey, we would come to
find the idea of humanism splintered and in retreat; indeed, in some quarters, we would find it rejected
and despised.

But if we forego the more detailed history of the concept, can we not, nonetheless, offer some clari-
fying comments regarding the more limited usage that has perplexed McAfee? Perhaps.

I’ll start with an observation made by Wilfred McClay in a 2008 essay in the Wilson Quar-
terly, “The Burden of the Humanities.” McClay suggested that we define the humanities as “the study
of human things in human ways.”¹ If so, McClay continues, “then it follows that they function in cul-
ture as a kind of corrective or regulative mechanism, forcing upon our attention those features of our
complex humanity that the given age may be neglecting or missing.” Consequently, we have a hard
time defining the humanities–and, I would add, humanism–because “they have always defined them-
selves in opposition.”

McClay provides a brief historical sketch showing that the humanities have, at different historical
junctures, defined themselves by articulating a vision of human distinctiveness in opposition to the ani-
mal, the divine, and the rational-mechanical. “What we are as humans,” McClay adds, “is, in some re-
spects, best defined by what we are not: not gods, not angels, not devils, not machines, not merely ani-
mals.”

In McClay’s historical sketch, humanism and the humanities have lately sought to articulate an un-
derstanding of the human in opposition to the “rational-mechanical,” or, in other words, in opposition
to the technological, broadly speaking. In McClay’s telling, this phase of humanist discourse emerges
in early nineteenth century responses to the Enlightenment and industrialization. Here we have the be-
ginnings of a response to McAfee’s query. The deployment of humanist discourse in the context of
technology criticism is not exactly a recent development.

There may have been earlier voices of which I am unaware, but we may point to Thomas Carlyle’s
1829 essay, “Signs of the Times,” as an ur-text of the genre.² Carlyle dubbed his era the “Mechanical
Age.” “Men are grown mechanical in head and heart, as well as in hand,” Carlyle complained. “Not for
internal perfection,” he added, “but for external combinations and arrangements for institutions, consti-
tutions, for Mechanism of one sort or another, do they hope and struggle.”

Talk of humanism in relation to technology also flourished in the early and mid-twentieth century.
Alan Jacobs, for instance, is currently working on a book project that examines the response of a set of
early 20th century Christian humanists, including W.H. Auden, Simone Weil, and Jacques Maritain, to
total war and the rise of technocracy. “On some level each of these figures,” Jacobs explains, “intuited
or explicitly argued that if the Allies won the war simply because of their technological superiority —
and then, precisely because of that success, allowed their societies to become purely technocratic, ruled
by the military-industrial complex — their victory would become largely a hollow one. Each of them
sees the creative renewal of some form of Christian humanism as a necessary counterbalance to tech-
nocracy.”

In a more secular vein, Paul Goodman asked in 1969, “Can Technology Be Humane?” In his article
(h/t Nicholas Carr), Goodman observed that popular attitudes toward technology had shifted in the
post-war world. Science and technology could no longer claim the “unblemished and justified reputa-
tion as a wonderful adventure” they had enjoyed for the previous three centuries. “The immediate rea-
sons for this shattering reversal of values,” in Goodman’s view, “are fairly obvious.

Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied
subsequent developments, the deterioration of the physical environment and the destruction of
the biosphere, the catastrophes impending over the cities because of technological failures and
psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield dimin-
ishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction
that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely
only increase human woe.”

For his part, Goodman advocated a more prudential and, yes, humane approach to technology.
“Whether or not it draws on new scientific research,” Goodman argued, “technology is a branch of
moral philosophy, not of science.” “As a moral philosopher,” Goodman continued in a remarkable pas-
sage, “a technician should be able to criticize the programs given him to implement. As a professional
in a community of learned professionals, a technologist must have a different kind of training and de-
velop a different character than we see at present among technicians and engineers. He should know
something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.”

The whole essay is well worth your time. I bring it up merely as another instance of the genre of hu-
manistic technology criticism.

More recently, in an interview cited by McAfee, Jaron Lanier has advocated the revival of human-
ism in relation to the present technological milieu. “I’m trying to revive or, if you like, resuscitate, or
rehabilitate the term humanism,” Lanier explained before being interrupted by a bellboy cum Kantian,
who broke into the interview to say, “Humanism is humanity’s adulthood. Just thought I’d throw that
in.” When he resumed, Lanier expanded on what he meant by humanism:

“And pragmatically, if you don’t treat people as special, if you don’t create some sort of a spe-
cial zone for humans—especially when you’re designing technology—you’ll end up dehuman-
ising the world. You’ll turn people into some giant, stupid information system, which is what I
think we’re doing. I agree that humanism is humanity’s adulthood, but only because adults
learn to behave in ways that are pragmatic. We have to start thinking of humans as being these
special, magical entities—we have to mystify ourselves because it’s the only way to look after
ourselves given how good we’re getting at technology.”

In McAfee’s defense, this is an admittedly murky vision. I couldn’t tell you what exactly Lanier is
proposing when he says that we have to “mystify ourselves.” Earlier in the interview, however, he gave
an example that might help us understand his concerns. Discussing Google Translate, he observes the
following: “What people don’t understand is that the translation is really just a mashup of pre-existing
translations by real people. The current set up of the internet trains us to ignore the real people who did
the first translations, in order to create the illusion that there is an electronic brain. This idea is terribly
damaging. It does dehumanise people; it does reduce people.”

So Lanier’s complaint here seems to be that this particular configuration of technology obscures
an essential human element. Furthermore, Lanier is concerned that people are reduced in this process.
This is, again, a murky concept, but I take it to mean that some important element of what constitutes
the human is being ignored or marginalized or suppressed. Like the humanities in McClay’s analysis,
Lanier’s humanism draws our attention to “those features of our complex humanity that the given age
may be neglecting or missing.”

One last example. Some years ago, historian of science George Dyson wondered if the cost of ma-
chines that think will be people who don’t. Dyson’s quip suggests the problem that Evan Selinger has
dubbed the outsourcing of our humanity. We outsource our humanity when we allow an app or device
to do for us what we ought to be doing for ourselves (naturally, that ought needs to be established).
Selinger has developed his critique in response to a variety of apps but especially those that outsource
what we may call our emotional labor.

I think it fair to include the outsourcing critique within the broader genre of humanist technology
criticism because it assumes something about the nature of our humanity and finds that certain tech-
nologies are complicit in its erosion. Not surprisingly, in a tweet of McAfee’s post, Selinger indicated
that he and Brett Frischmann had plans to co-author a book analyzing the concept of dehumanizing
technology in order to bring clarity to its application. I have no doubt that Selinger and Frischmann’s
work will advance the discussion.

While McAfee was puzzled by humanist discourse with regards to technology criticism, others have
been overtly critical. Evgeny Morozov recently complained that most technology critics default to hu-
manist/anti-humanist rhetoric in their critiques in order to evade more challenging questions about poli-
tics and economics. For my part, I don’t see why the two approaches cannot each contribute to a broader
understanding of technology and its consequences while also informing our personal and collective re-
sponses.

Of course, while Morozov is critical of the humanizing/dehumanizing approach to technology on more
or less pragmatic grounds–it is ultimately ineffective in his view–others oppose it on ideological or the-
oretical grounds. For these critics, humanism is part of the problem not the solution. Technology has
been all too humanistic, or anthropocentric, and has consequently wreaked havoc on the global envi-
ronment. Or, they may argue that any deployment of humanism as an evaluative category also implies
a policing of the boundaries of the human with discriminatory consequences. Others will argue that it is
impossible to make a hard ontological distinction among the natural, the human, and the technological.
We have always been cyborgs in their view. Still others argue that there is no compelling reason to
privilege the existing configuration of what we call the human. Humanity is a work in progress and
technology will usher in a brave new post-human world.

Already, I’ve gone on longer than a blog post should, so I won’t comment on each of those objec-
tions to humanist discourse. Instead, I’ll leave you with a few considerations about what humanist tech-
nology criticism might entail. I’ll do so while acknowledging that these considerations undoubtedly im-
ply a series of assumptions about what it means to be a human being and what constitutes human flour-
ishing.

That said, I would suggest that a humanist critique of technology entails a preference for technology
that (1) operates at a humane scale, (2) works toward humane ends, (3) allows for the fullest possible
flourishing of a person’s capabilities, (4) does not obfuscate moral responsibility, and (5) acknowledges
certain limitations to what we might quaintly call the human condition.

I realize these all need substantial elaboration and support–the fifth point is especially contentious–
but I’ll leave it at that for now. Take that as a preliminary sketch. I’ll close, finally, with a parting ob-
servation.

A not insubstantial element within the culture that drives technological development is animated by
what can only be described as a thoroughgoing disgust with the human condition, particularly its em-
bodied nature. Whether we credit the wildest dreams of the Singularitarians, Extropians, and Post-hu-
manists or not, their disdain as it finds expression in a posture toward technological power is reason
enough for technology critics to strive for a humanist critique that acknowledges and celebrates the
limitations inherent in our frail, yet wondrous humanity.

This gratitude and reverence for the human as it is presently constituted, in all its wild and glorious
diversity, may strike some as an unpalatably religious stance to assume. And, indeed, for many of us it
stems from a deeply religious understanding of the world we inhabit, a world that is, as Pope Fran-
cis recently put it, “our common home.” Perhaps, though, even the secular citizen may be troubled by,
as Hannah Arendt has put it, such a “rebellion against human existence as it has been given, a free gift
from nowhere (secularly speaking).”

June 9, 2015
13. The Technological Origins of Protestantism, or
the Martin Luther Tech Myth
This year marks the 500th anniversary of the start of the Protestant Reformation. The traditional date
marking the beginning of the Reformation is October 31, 1517. It was on that day, All Hallows’ (or
Saints’) Eve, that Martin Luther posted his famous Ninety-five Theses on a church door in Wittenberg.
It is fair to say that no one then present, including Luther, had any idea of the magnitude of what was to
follow.

Owing to the anniversary, you might encounter a slew of new books about Luther, the
Reformation(s), and its consequences. You might stumble across a commemorative Martin Luther
Playmobil set. You might even learn about a church in Wittenberg which has deployed a robot named
… wait for it … BlessU-2 to dispense blessings and Bible verses to visitors (free of charge, Luther
would have been glad to know).

Then, of course, there are the essays and articles in popular journals and websites, and, inevitably,
the clever takes that link Luther to contemporary events. Take, for example, this piece in Foreign Pol-
icy arguing that Luther was the Donald Trump of 1517. The subtitle goes on to claim that “if the leader
of the reformation could have tweeted the 95 theses, he would have.” I’ll get back to the subtitle in just
a moment, but, let’s be clear, the comparison is ultimately absurd. Sure, there are some very superficial
parallels one might draw, but even the author of the article understands their superficiality. Throughout
the essay, he walks back and qualifies his claims. “But in the end Luther was a man of conscience,” he
concedes, and that pretty much undermines the whole case.

But back to that line about tweeting the 95 theses. It is perhaps the most plausible claim in the whole
piece, but, oddly, the author never elaborates further. I say that it is plausible not only because the the-
ses are relatively short statements – roughly half of them could actually be tweeted (in their Eng-
lish translation, anyway) – and one might say that in their day they went viral, but also because it trades
on an influential myth that continues to inform how many Protestants view technology.

The myth, briefly stated in intentionally anachronistic terms, runs something like this. Martin
Luther’s success was owed to his visionary embrace of a cutting edge media technology, the printing
press. While the Catholic church reacted with a moral panic about the religious and social conse-
quences of easily accessible information and their inability to control it, Luther and his followers un-
derstood that information wanted to be free and institutions needed to be disrupted. And history testi-
fies to the rightness of Luther’s attitude toward new technology.

In calling this story a myth, I don’t mean to suggest that it is altogether untrue. While the full ac-
count is more complicated, it is nonetheless true that Luther did embrace printing and appears to have
understood its power. Indeed, under Luther’s auspices Wittenberg, an otherwise unremarkable univer-
sity town, became one of the leading centers of printing in Europe. A good account of these matters can
be found in Andrew Pettegree’s Brand Luther. “After Luther, print and public communication would
never be the same again,” Pettegree rightly concludes. And it is probably safe to also conclude that
apart from printing the Reformation does not happen.

Instead, I use the word myth to mean a story, particularly a story of origins, which takes on a
powerful explanatory and normative role in the life of a tradition or community. It is in this sense that
we might speak of the Luther Tech Myth.

The problem with this myth is simple: it sanctions, indeed it encourages, uncritical and unreflective
adoption of technology. I might add that it also heightens the plausibility of Borg Complex claims:
“churches* that do not adapt to and adopt new media will not survive,” etc.

For those who subscribe to the myth, intentionally or tacitly, this is not really a problem because the
myth sustains and is sustained by certain unspoken premises regarding the nature of technology, partic-
ularly media technology: chiefly, that it is fundamentally neutral. They imagine that new media merely
propagate the same message only more effectively. It rarely occurs to them that new media may trans-
form the message in a subtle but not inconsequential manner and that new media may smuggle another
sort of message (or effect) with it, and that these may reconfigure the nature of the community, the
practices of piety, and the content of the faith in ways they did not anticipate.

Let’s get back to Luther for a moment and take a closer look at the relationship between printing and
Protestantism.

In The Reformation: A History, Oxford historian Diarmaid MacCulloch makes some instructive ob-
servations about printing. What is most notable about MacCulloch’s discussion is that it deals with the
preparatory effect of printing in the years leading up to 1517. For example, citing historian Bernard
Cottret, MacCulloch notes that “the increase in Bibles [in the half century prior to 1517] created the
Reformation rather than being created by it.” A thesis that will certainly surprise many Protestants to-
day, if there are any left. (More on that last, seemingly absurd clause shortly.)

A little further on, MacCulloch correctly observes that the “effect of printing was more profound
than simply making more books available more quickly.” For one thing, it “affected western Europe’s
assumptions about knowledge and originality of thought.” Manuscript culture is “conscious of the
fragility of knowledge, and the need to preserve it,” fostering “an attitude that guards rather than spreads
knowledge.” Manuscript culture is thus cautious, conservative, and pessimistic. On the other hand, the
propensity toward decay is “much less obvious in the print medium: Optimism may be the mood rather
than pessimism.” (A point on which MacCulloch cites the pioneering work of Elizabeth Eisenstein.) In
other words, printing fostered a more daring cultural spirit that was conducive to the outbreak of a rev-
olutionary movement of reform.

Finally, printing had already made it possible for reading to become “a more prominent part of reli-
gion for the laity.” Again, MacCulloch is not talking about the consequences of the Reformation; he is
talking about the half century or so leading up to Luther’s break with Rome. Where reading became a
more prominent feature of personal piety, “a more inward-looking, personalized devotion,” which is to
say, anachronistically, a more characteristically Protestant devotion, emerged. “For someone who re-
ally delighted in reading,” MacCulloch adds, “religion might retreat out of the sphere of public ritual
into the world of the mind and the imagination.”

“So,” MacCulloch concludes, “without any hint of doctrinal deviation, a new style of piety arose in
that increasingly large section of society that valued book learning for both profit and pleasure.” This
increasingly large section of the population “would form a ready audience for the Protestant message,
with its contempt for so much of the old ritual of worship and devotion.”

All of this, then, is to say that Protestantism is as much an effect of the technology of printing as it is
a movement that seized upon the new technology to spread its message. (I suspect, as an aside, that this
story, which is obviously more complicated than the sketch I’m providing here, would be an important
element in Alan Jacobs’ project of uncovering the technological history of modernity.)

A few more thoughts before we wrap up; bear with me. Let’s consider the idea of “a new style of
piety,” which preceded and sustained momentous doctrinal and ecclesial developments. This phrase is
useful insofar as it pairs nicely with the old maxim: Lex orandi, lex credendi (the law of prayer is
the law of belief). The idea is that as the church worships so it believes, or that in some sense worship pre-
cedes and constitutes belief. To put it another way, we might say that the worship of the church consti-
tutes the plausibility structures of its faith. To speak of a “new style of piety,” then, is to speak of a set
of practices for worship, both in its communal forms and in its private forms. These new practices are, ac-
cordingly, a new form of worship that may potentially reconfigure the church’s faith. This is important
to our discussion insofar as practices of worship have a critical material/technological dimension. Bot-
tom line: shifts in the material/technological artifacts and conditions of worship potentially restructure
the form and practices of worship, which in turn may potentially reconfigure what is believed.

Of course, it is not only a matter of how print prepares the ground for Protestantism; it is also a mat-
ter of how Protestantism evolves in tandem with print. Protestantism is a religion of the book. Its piety
is centered on the book: the sacred text, of course, but also the tide of books that become aids to spiri-
tuality, displacing icons, crucifixes, statues, relics, and the panoply of ritual gestures that enlisted the
body in the service of spiritual formation. The pastor-scholar becomes the model minister. Faith be-
comes both a more individual affair and a more private matter. On the whole, it takes on a more intel-
lectualist cast. Its devotion is centered more on correct belief than on veneration. Its instruction is
traditionally catechetical. Etc.

This brings us back to the Luther Tech Myth and whether or not there are any Protestants left. The
myth is misleading because it oversimplifies a more complicated history, and the oversimplification
obscures the degree to which new media technology is not neutral but rather formative.

Henry Jenkins has made an observation that I come back to frequently: “I often tell students that the
history of new media has been shaped again and again by four key innovative groups — evangelists,
pornographers, advertisers, and politicians, each of whom is constantly looking for new ways to inter-
face with their public.”

The evangelists Jenkins refers to are evangelical Christians in the United States, who are descended
from Luther and his fellow reformers. Jenkins is right. Evangelicals have been, as a rule, quick to adopt
and adapt new media technologies to spread their message. In doing so, however, they have also been
transformed by the tools they have implemented and deployed, from radio to television to the Internet.
The reason for this is simple: new styles of piety that arise from new media generate new assumptions
about community and authority and charisma (in the theological and sociological sense), and they alter
the status and content of belief.

And for this reason traditional Protestantism is an endangered species. Even within theologically
conservative branches of American Protestantism, it is rare to find the practice of traditional forms of
Protestant piety. Naturally, this should not necessarily be read as a lament. It is, rather, an argument
about the consequences of technological change and an encouragement to think more carefully about
the adoption and implementation of new technology.

June 2, 2017
14. How (Not) to Learn From the History of Technology
One of the first things I wrote on this blog nearly eight years ago, on an “About” page that has since
been significantly abbreviated, was that we should aim at neither unbridled enthusiasm for technology
nor thoughtless pessimism. Obviously no one is going to accuse me of unbridled enthusiasm for tech-
nology. While I may more justly be accused of a measure of pessimism, which in my view is not alto-
gether unwarranted, I trust it does not come across as thoughtless.

So what I would like to know, from those who tend to be on the other side of the divide, is what
constitutes, in their view, legitimate expressions of concern or worry that will not be dismissed with
handwaving rhetorical gestures about how people have always worried about new technologies, etc.,
etc. (For starters, let us set aside the words “worry” and “concern” altogether. They too readily evoke
the image of fainting couches and give off the odor of smelling salts.)

Consider Zachary Karabell’s piece for Wired, “Demonized Smartphones Are Just Our Latest
Technological Scapegoat.” Karabell is responding to a series of recent articles exploring the fraught re-
lationship between children and smartphones. Most notably, two groups of Apple investors publicly
called on the company to take action to combat smartphone addiction among children.¹ Karabell cites a
handful of other examples.

I have tried to read Karabell carefully and sympathetically, but I am not entirely clear what I am to
take from his piece. Chiefly, it seems he simply felt the need to bring some calm to what he perceived
to be a panic about technology. (Given the title of the article, however, I’m not sure it’s the critics who
are panicking: smartphones aren’t being criticized, they are being demonized!)

The first move in this direction is to remind us that “Alarm at the corrosive effects of new technolo-
gies is not new,” followed by obligatory references to Plato’s warning² about writing and the Catholic
Church’s response to the printing press, after which we get brief mention of similar warnings about a
series of other technologies, from the telegraph to Grand Theft Auto.

This is an all-too-familiar litany, and my question is always the same: What’s the point?

This question is not meant to be dismissive. It is an honest question. What is the point of the litany?
It cannot be, of course, that I should therefore discount the present warnings because this would be
a non sequitur, as Karabell himself acknowledges. “Just because these themes have played out benignly
time and again,” he writes, “does not, of course, mean that all will turn out fine this time.”

Indeed not. And this is so for a reason that is easy to grasp: each technology is different. This is es-
pecially the case when we consider the capacities and scale of more recent technologies when com-
pared to earlier examples. Early in his piece, Karabell asked, “Is today’s concern about smartphones
any different than other generations’ anxieties about new technology?” The answer seems straightfor-
ward to me: Yes, obviously so … because we’re talking about different technologies.

But let’s go a bit further. How sure are we that things “have played out benignly time and again”?
How would we know? Is mere survival the bar? If not, what is? By what standard are we to conclude
that the impact of these more recent technologies has been altogether benign? As Karabell acknowl-
edges, these earlier complaints are often not wrong; we just don’t care anymore. Should we? Does this
say more about our insensibilities than it does about their anxieties? Frankly, I’m increasingly con-
vinced that we must be prepared to ask such questions and consider them with care and imagination.

After we’ve been reminded that we are not the first generation to express a measure of concern
about new technologies, we are presented with a brief catalogue of the problems attributed to smart-
phones. Karabell seems both to believe that these are genuine concerns that should not be ignored and
that we are not in a position to give them much weight. It’s an intriguing tension within this piece. It is
as if the author understands that he is dealing with valid criticisms but cannot quite bring himself to
take them too seriously.

Chiefly, it would seem that Karabell wants us to be open-minded about new technologies. The jury
is still out in his view, and we don’t yet know with certainty what the long-term effects will be. This
paragraph is representative:

Some might say that until we know more, it’s prudent, especially with children, to err on the
side of caution and concern. There certainly are risks. Maybe we’re rewiring our brains for the
worse; maybe we’re creating a generation of detached drones. But there also may be benefits of
the technology that we can’t (yet) measure.

It’s hard for me to read that and draw any firm conclusions as to what Karabell thinks we ought to
do. Which is fine. I don’t always know what to do regarding the stuff I write about. But this piece os-
tensibly aims at relieving concerns and dismissing warnings; I’m not sure it succeeds. At least, I don’t
see that it gives us any grounds to be relieved or to set warnings aside.

What’s more, if things do in fact play out benignly (assuming that everyone affected could agree on
what that might mean), it would seem to me that the warnings and criticisms would be at least part of
the reason why. Writers like Karabell assume that whatever early turbulence a new technology causes
for a society, in time the society will right itself and cruise along smoothly. You would think, then, that
such writers would enthusiastically welcome criticisms of new technologies in order to figure out how
to steer through the turbulent period as quickly as possible. But this is rarely the case; they are merely
annoyed.

It’s rarely the case because these technologies are often proxies for something much larger, some-
thing more like a worldview, an ideology, or a moral framework. Technology is code for Modernity or
Progress or Reason, so to call a technology into question is to call these deeper values and commit-
ments into question. Karabell’s closing paragraphs reveal as much.

“More often than not,” he writes, “the innovations we call ‘technology’ have transformed and ameliorated
the human condition. There may have been some loss of community, connection to the land, and be-
longing; even here, we tend to forget that belonging almost meant exclusion for those who didn’t fit or
didn’t believe what their neighbors did.”

It is difficult to overestimate the degree to which those sentences unwittingly betray a host of moral
judgments the author seems unable to perceive as such. “[L]oss of community, connection to the land,
and belonging”—they are casually listed off as if the author has only heard rumors of people that care
about such things and can’t quite fathom such attachments.

“The smartphone is today’s emblem of whether one believes in progress or decline,” Karabell writes
in his last paragraph.

Maybe that’s just too much of a burden to put on a technology, any technology. Maybe progress and
decline shouldn’t be measured exclusively by technological innovation. Maybe it is not the critic who
needs to be admonished to consider new technologies with an open mind.

January 27, 2018


15. Cyborg Discourse is Useless

In “Why Silicon Valley Can’t Fix Itself,” Ben Tarnoff and Moira Weigel critically engage with one
response to the tech backlash, the emergence of the Center for Humane Technology.

The piece begins with an overview of the wave of bad press the tech industry has received, focusing
especially on criticism that has emerged within Silicon Valley from former and current industry execu-
tives, investors, and workers. Weigel and Tarnoff then describe the work of the center and its emphasis
on more humane design as the key to redressing the ills caused by Silicon Valley. They make the inter-
esting observation that humanizing technology is a message the tech industry can get behind because it
is, in at least one manifestation, native to Silicon Valley. They are thinking chiefly of the work of Stew-
art Brand and, later, Steve Jobs.

They then turn to their critique of what they call the tech humanist response to the problems gener-
ated by Silicon Valley, a response embodied by the Center for Humane Technology. It is to that cri-
tique that I want to give my attention. Weigel and Tarnoff’s argument targets humanist technology crit-
icism more broadly, and it is this broader argument that I want to consider more closely.

Clarifications: I should say before moving forward that back in 2015 I wrote briefly in defense of
what I then called humanist tech criticism. I did so initially in response to Evgeny Morozov’s review of
Nick Carr’s work on automation, a review which was also a broadside against what he called humanist
technology criticism. Shortly thereafter I returned to the theme in response to Andrew McAfee’s
query, “Who are the humanists, and why do they dislike technology so much?”

More recently, in a discussion of the tech backlash, I’ve expressed some reservations about the
project of humanist technology criticism. My reservations, however, stem from different sources than
either Morozov’s critique or that of Weigel and Tarnoff, although there is some overlap with both.

One more prefatory note before I get on with a discussion of Weigel and Tarnoff’s critique of hu-
manist technology criticism. I’ve been using the phrase “what X called humanist technology criticism,”
and I’ve done so because the phrase is being used with a measure of imprecision or without a great deal
of critical rigor. I think that’s important to note and keep in mind. Finally, then, on to what I’m actually
writing this post to discuss.

Tarnoff and Weigel’s critique of tech-humanist discourse is two-fold. First, they find that tech hu-
manist criticism, as it is deployed by the Center for Humane Technology, is too narrowly focused
either on how individuals qua consumers use digital devices or on the design decisions made by engineers
or programmers. This focus ignores the larger economic context in which such decisions are made. In
this respect, their critique reiterates Morozov’s 2015 critique of humanist technology criticism.

They argue, for example, that individual design decisions “are only a symptom of a larger issue:

the fact that the digital infrastructures that increasingly shape our personal, social and civic lives
are owned and controlled by a few billionaires. Because it ignores the question of power, the
tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful re-
form. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superfi-
cial changes. These changes may soothe some of the popular anger directed towards the tech in-
dustry, but they will not address the origin of that anger. If anything, they will make Silicon
Valley even more powerful.

About this, they are almost certainly right. As I wrote in my earlier post on the center, “Tinkering
with the apparatus to make it more humane does not go far enough if the apparatus itself is intrinsically
inhumane.” That tech companies are poised to appropriate and absorb the tech humanist critique, as it
now manifests itself, and strengthen their hand as a result seems obvious enough.

The second aspect of Tarnoff and Weigel’s critique is more philosophical in nature. “Tech human-
ists say they want to align humanity and technology,” they write, “But this project is based on a deep
misunderstanding of the relationship between humanity and technology: namely, the fantasy that these
two entities could ever exist in separation.”

This misunderstanding, in their view, generates a number of problems. For example, it yields mis-
guided anxieties about the loss of essential human qualities as a consequence of technological change.
(Perfunctory mention of the Phaedrus? Check.) “Holding humanity and technology separate,” they
also argue, “clears the way for a small group of humans to determine the proper alignment between
them.” And, fundamentally, because human nature changes it cannot “serve as a stable basis for evalu-
ating the impact of technology.”

“Fortunately,” the authors tell us, “there is another way of thinking about how to live with technol-
ogy – one that is both truer to the history of our species and useful for building a more democratic fu-
ture.” “This tradition,” they add, “does not address ‘humanity’ in the abstract, but as distinct human be-
ings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine –
as ‘cyborgs’, to quote the biologist and philosopher of science Donna Haraway.”

Somewhat provocatively I want to suggest that cyborg discourse is useless. This is a little different
than claiming that it is entirely erroneous or wholly without merit. Nor is it really a claim about Har-
away’s work. It seems to me that cyborg discourse as it is most often deployed in discussions of tech-
nology today is only superficially connected with Haraway’s arguments. They are dealt with about as
deeply as Plato’s, which is to say not very deeply at all.

Historical note: What is most striking about the cyborg argument is how very Victorian it turns out
to be. Writing in the mid-sixties, Lewis Mumford observed in “Technics and the Nature of Man” that
for “more than a century man has been habitually defined as a tool-using animal.” Mumford targets the
Victorian reduction of the human being to homo faber, the toolmaker, and the view that human beings
owe their unique capacities to their use of tools. It is a view that is, in his analysis, wrong on the facts
and also a projection into the past of “modern man’s own overwhelming interest in tools, machines,
technical mastery.”

It is this understanding that Mumford challenges precisely because, in his view, it has abetted the
rise of authoritarian technics controlled by a very few. In other words, Mumford’s far more radical po-
litical and economic critique of modern technology is grounded in an understanding of human nature
that is decidedly at odds with cyborg discourse. Cyborg discourse turns out to be rhetorical steampunk.

Rather, what I am claiming is that cyborg discourse, as it is popularly deployed in discussions about
the impact of technology, is useless because it gets us nowhere. By itself it offers no practical wisdom.
It offers no critical tools to help us judge, weigh, or evaluate. We’ve always been cyborgs, you say?
Fine. How does this help me think about any given technology? How does this help me evaluate its
consequences?

Indeed, it is worse than useless because, more often than not, it abets the unchecked growth of the
tech industry by blunting critique and dampening intuitive reservations. Indeed, the most consistent
application of cyborg rhetoric lies in the eschatological fantasies of the transhumanists. The tech indus-
try, in other words, is as adept at appropriating and absorbing cyborg discourse as it is humanist dis-
course.

Consider, to begin with, the claim that because human nature changes it cannot serve as a stable ba-
sis for evaluating the impact of technology.

At what rate exactly does human nature change? Does it change so quickly that it cannot guide our
reflections on the relative merits of new technologies? As evidence for the claim that humans and tech-
nology “constantly change together,” the authors cite a journal article that they say “suggests that the
human hand evolved to manipulate the stone tools that our ancestors used.” The conclusion of the arti-
cle, however, is less than definitive: while certain strands of evidence point in this direction, “it cannot
be directly determined that hominin hands evolved by natural selection in adaptation to tool making.”
Moreover, the time scale cited by the author is, in any case, “many millennia.”

It seems to me that very little follows from this piece of evidence. The relevance of this thesis to
how we think about and evaluate technology today needs to be established. We are no longer talking
about primitive stone tools, nor are we helped by taking into consideration processes that played out
over the course of many millennia. If someone claims that a certain technology is dehumanizing, telling
them that our human ancestors evolved in conjunction with their use of stone tools is a fine bit of petty
sophistry.

And why should it be the case that holding humanity and technology separate paves the way for an
elite class to determine the nature of the relationship? Is this a necessary development? How so? Are
there no counter-examples? Blurring the distinction has, in fact, had the effect that the authors attribute
to maintaining the distinction. “We have always been cyborgs” is just as much a case for thoughtless
assimilation to whatever new technology we’re being sold.

The cyborg tradition, the authors claim, does not address the abstraction “humanity” but distinct hu-
man beings. This is fine, but, again, I’m not sure it gets us very far. For one thing, are we back to indi-
viduals making decisions? And on what basis, exactly, are these distinct human beings making their de-
cisions?

“To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should
embrace every new invention,” the authors grant. Okay, so how are we to judge and discern? Can these
individuals not be guided by some particular understanding of what constitutes human flourishing? If
I’m going to act collectively will it not be on the basis of some understanding of what is good not just
for me personally but for me and others as human beings?

“But it does suggest,” they immediately add, “that living well with technology can’t be a matter of
making technology more ‘human’.” But again, why should we not judge technology based upon some
understanding of what is fitting for the sorts of creatures we presently are? Because in five millennia
we will have a marginally different skeletal configuration? If not all technologies are good for us, is it
not because some technologies erode something that could be claimed as fundamental to human dignity
or because they undermine some essential component of human flourishing?

Interestingly, we are then told that the “cyborg way of thinking, by contrast, tells us that our species
is essentially technological.” Have we not just substituted one essentialist account of human nature for
another? Cyborg discourse, as it turns out, aims to tell us exactly the sort of creatures we are. It’s not
that we are doing away with all accounts of human nature; we are just privileging one account over oth-
ers.

In this way it parallels the liberal democratic pretense to neutrality regarding competing visions of
the good life. And, in the same way and for the same reason, it thus promotes a context in which tech-
nology can flourish independently of any specifically human ends.

The anti-tech-humanist position staked out by the authors also ignores the possibility that some tech-
nologies are fundamentally disordering of individual and collective human experience. In many re-
spects, they are subject to the same critique that they leveled against the Center for Humane Technol-
ogy. What they want is simply a better version, by their lights, of existing technology. Chiefly, this en-
tails some version of public ownership. But what will constitute this public if not some shared under-
standing of what is good for people given their peculiarly human nature?
“But even though our continuous co-evolution with our machines is inevitable,” Tarnoff and Weigel
write, “the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a
question of power.” A little further on they invite us to envision “a worker-owned Uber, a user-owned
Facebook or a socially owned ‘smart city’ of the kind being developed in Barcelona.” But what of
those who, for reasons grounded in a particular understanding of the human condition, don’t care to
live in any iteration of a smart city? Or what if a publicly owned version of Facebook is judged to be
socially and politically disordering on the same grounds? Cyborg rhetoric tends to dismiss such criticism because it is grounded in an account of human nature that is at odds with the cyborg vision.

“Rather than trying to humanise technology, then, we should be trying to democratize it,” Tarnoff
and Weigel insist. “We should be demanding that society as a whole gets to decide how we live with
technology – rather than the small group of people who have captured society’s wealth.” But herein lies
the problem. Society as a whole is too fractured a unit to undertake the kind of collective action the au-
thor’s desire. It is an abstraction, just like Humanity. The authors seem to imagine that society as a
whole shares their concerns. But what if most people are perfectly content trading their data for conve-
nience?

When it comes down to it, everyone is a humanist technology critic; there are simply competing understandings of the human in play. If the use of a given technology is to be regulated or resisted or otherwise curtailed, it’s because someone deems it bad for people given some understanding, tacit as it may be, of what people are for.

None of this is to say that humanist discourse does not have its own set of problems, theoretical and
practical. Or that the critical questions I’ve raised may not have satisfactory answers from a cyborg dis-
course perspective. Mostly it is to say that more often than not cyborg discourse is facile and superfi-
cial and, by itself, does very little to enlighten our situation or point a way forward.

May 28, 2018


16. Idols of Silicon and Data

In 2015, the former Google and Uber engineer Anthony Levandowski founded a nonprofit called Way of the Future in order to develop an AI god and promote its worship. The mission statement reads as
follows: “To develop and promote the realization of a Godhead based on artificial intelligence and
through understanding and worship of the Godhead contribute to the betterment of society.”

A few loosely interconnected observations follow.

First, I would suggest that Levandowski’s mission only makes explicit what is often our implicit re-
lationship to technology. Technology is a god to us, albeit a “god that limps,” in Colin Norman’s ar-
resting image drawn from Greek mythology’s lame, metal-working god, Hephaestus. We trust our-
selves to it, assign to it salvific powers, uncritically heed its directives, and hang our hopes on it. But
in its role as functional deity it inevitably disappoints.

Second, it seems to me that the project should be taken seriously only as an explicit signifier of im-
plicit and tacit realities. However, as Levandowski’s project makes clear, quasi-religious techno-fan-
tasies are often embraced by well-placed and influential technologists. To some degree, then, it would
seem that the development of technology, its funding and direction, is driven by these motives and they
should not be altogether ignored.

Third, the article discussing Levandowski’s project also leans on some comments from the historian Yuval Noah Harari, who has developed something of a reputation for grand claims about the future of humanity. According to the article, “history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.” The article then quotes Harari:

That is why agricultural deities were different from hunter-gatherer spirits, why factory hands
and peasants fantasised about different paradises, and why the revolutionary technologies of the
21st century are far more likely to spawn unprecedented religious movements than to revive
medieval creeds.

Both miss the reciprocal relationship between society and technology, and specifically between religion in the West and technology. It is not only a matter of technology impacting and affecting religion; it is also a matter of religion infusing and informing the development of technology. I’ve cited it numerous times before, but it’s worth mentioning again that David Noble’s The Religion of Technology is a wonderful place to start in order to understand this dynamic. According to Noble, “literally and
historically,” modern technology and religion have co-evolved and, consequently, “the technological
enterprise has been and remains suffused with religious belief.” We fail to understand technology in the
West if we do not understand this socio-religious dimension.

Fourth, the author goes on to cite advocates of transhumanism. Transhumanism is usefully con-
strued as a Christian heresy, or, if you prefer, (post-)Christian fan-fiction. I elaborate on that claim to-
ward the end of this post.

Fifth, some of our most incisive tech critics have been people of religious conviction. Jacques Ellul,
Marshall McLuhan, Ivan Illich, Walter Ong, Albert Borgmann, Wendell Berry, and Bill McKibben
come to mind. Neil Postman, who was not a religious person as far as I know, nonetheless attributed
his critical interest in media to a reading of the second commandment. The tech critic of religious faith
has at least one thing going for them: their conviction that there is one God and technology is not it.
Valuing technology for more than it is, then, appears as a species of idolatry.

Sixth, all god-talk in relation to technology takes for granted some understanding of God, religion,
faith, etc. It is often worth excavating the nature of these assumptions because they are usually doing
important conceptual work in whatever argument or claim they are embedded in.

Seventh, in the article the transhumanist philosopher Zoltan Istvan claims of the potential AI deity that “this God will actually exist and hopefully will do things for us.” The religion of technology is ultimately about power: human power over nature and, finally, the power of some humans over others.

October 1, 2017


17. Technology and the Great War

Today marks the 100th anniversary of Armistice Day, which brought the Great War to an end. The
Great War, of course, would later come to be known as the First World War because an even greater
war would follow within twenty years.

Although it has been eclipsed in the public imagination by the Second World War, a case can be
made that the first was indeed the more significant. This is true in a superficial sense, of course: it’s
hard to see how you get the second war without the first. But it is true even apart from that rather obvi-
ous claim. It was the first war that dealt a mortal blow to much of what we associate with the culture of
modernity: its confidence, its commitment to reason, its belief in progress, and its faith in technology.

There are few better summaries of the losses that can be attributed to the First World War than the
following lines from Jacques Barzun:

“Varying estimates have been made of the losses that must be credited to the great illusion.
Some say 10 million lives were snuffed out in the 52 months and double that number wounded.
Others propose higher or lower figures. The exercise is pointless, because loss is a far wider
category than death alone. The maimed, the tubercular, the incurables, the shell-shocked, the
sorrowing, the driven mad, the suicides, the broken spirits, the destroyed careers, the budding
geniuses plowed under, the missing births were losses, and they are incommensurable … One
cannot pour all human and material resources into a fiery cauldron year after year and expect to
resume normal life at the end of the prodigal enterprise.”

Barzun was right, of course, and he knew of what he spoke; he lived through it all. Barzun turned eleven just a few days after the war came to an end. He was 104 when he died, not that long ago, in 2012.

In Machines as the Measure of Men, historian Michael Adas devotes a chapter to “The Great War
and the Assault on Scientific and Technological Measures of Human Worth.” In it, he makes a number
of observations about the impact of the Great War on the place of technology in the public imagination.
In the years leading up to the war, Adas writes, “little serious discussion was devoted to the horrific
potential of the new weapons that had been spawned by the union of science and technology in the ever
changing industrial order.”

“The failure of most Europeans to fathom the potential for devastation of the new weapons they
were constantly devising,” Adas added, “owed much to their conviction that no matter how rapid the
advances, Western men were in control of the machines they were creating.”

The failure had not been total, however. A few military specialists had taken important lessons from
the closing campaigns of the American Civil War and the Russo-Japanese War, and they foresaw the
great destructive potential of industrialized warfare. Adas cites Baron von der Goltz, for example, who,
in the 1880s, concluded, “All advances made by modern science and technical art are immediately ap-
plied to the abominable art of annihilating mankind.”

Adas continued:

“Numerous writers [of the time] lamented the extent to which scientific research, formerly seen
as overwhelmingly beneficial to humanity, had been channeled into the search for ever more
lethal weapons. Some of the most brilliant minds of a civilization ‘devoured by geometry’ had
labored for generations to ensure that death could be dealt on a mass scale ‘with exactitude, log-
arithmic, dial-timed, millesimal-calculated velocity.’ Many of those who watched their compa-
triots die cursed not the enemy, who was equally a victim, but the ‘mean chemist’s contrivance’
and the ‘stinking physicist’s destroying toy.’”

The following is worth quoting at length:

“The theme of humanity betrayed and consumed by the technology that Europeans had long
considered the surest proof of their civilization’s superiority runs throughout the accounts of
those engaged in the trench madness. The enemy is usually hidden in fortresses of concrete,
barbed wire, and earth. The battlefield is seen as a ‘huge, sleeping machine with innumerable
eyes and ears and arms.’ Death is delivered by ‘impersonal shells’ from distant machines; one is
spared or obliterated by chance alone. The ‘engines of war’ grind on relentlessly; the ‘massacre mécanique’ knows no limits, gives no quarter. Men are reduced to ‘slaves of machines’ or ‘wheels [or cogs] in the great machinery of war.’ Their bodies become machines; they respond to one’s questions mechanically; they ‘sing the praises’ of the machines that crush them. War has become ‘an industry of professionalized human slaughter,’ and technology is equated with tyranny. Western civilization is suffocating as a result of overproduction; it is being destroyed by the wheels of great machines or has been lost in a labyrinth of machines. Its very future is
threatened by the machines it has created. Like David Jones, many of those who fought on the
Western Front and lived long enough to write about their encounter with war in the industrial
age began to ‘doubt the decency of [their] own inventions, and [were] certainly in terror of their
possibilities.’ To have any chance of survival, all who entered the battle zone were forced to ‘do
gas-drill, be attuned to many newfangled technicalities, respond to increasingly exacting me-
chanical devices; some fascinating and compelling, others sinister in the extreme; all requiring a
new and strange direction of the mind, a new sensitivity certainly, but at a considerable cost.’”

One is reminded of how one survivor of the horror of the trenches came to characterize “the Ma-
chine” in his most famous work, The Lord of the Rings. Readers will remember many of the obvious
ways in which Tolkien linked the apparatus of industrialization with the forces of darkness. Saruman,
to take one memorable example, is described by another character in these terms: “He has a mind of
metal and wheels; and he does not care for growing things, except as far as they serve him for the mo-
ment.” Saruman harnessed the power of mechanized magic and wielded this power to destroy, among
other things, the forests surrounding Isengard, which it is clear we are to understand as a great offense
and one for which he pays dearly. It would be a mistake to dismiss Tolkien as an opponent of technol-
ogy per se pining for a romanticized pre-modern world. Tolkien’s views are not, as I understand them,
romantic; quite the contrary. They were forged in the fires of the war as he experienced firsthand the
fruits of industrialized violence. “One has indeed personally to come under the shadow of war to feel
fully its oppression,” Tolkien once wrote, “but as the years go by it seems now often forgotten that to
be caught in youth by 1914 was no less hideous an experience than to be involved in 1939 and the fol-
lowing years. By 1918 all but one of my close friends were dead.”

In a letter to a friend, Tolkien wrote of Lord of the Rings, “Anyway all this stuff is mainly con-
cerned with Fall, Mortality, and the Machine.” He went on to explain what he meant by “the Machine”
in this way:

“By the last I intend all use of external plans or devices (apparatus) instead of development of
the inherent inner powers or talents — or even the use of these talents with the corrupted motive
of dominating: bulldozing the real world, or coercing other wills. The Machine is our more ob-
vious modern form though more closely related to Magic than is usually recognised …. The En-
emy in successive forms is always ‘naturally’ concerned with sheer Domination, and so the
Lord of magic and machines.”

It is likely that Tolkien’s depiction of Saruman’s destruction of the trees owed something to the dev-
astation he witnessed on the front lines. According to Adas, “‘Uprooted, smashed’ trees, ‘pitted,
rownsepyked out of nature, cut off in their sap rising,’ were also central images in participants’ descrip-
tions of the war zone wasteland.” “As Georges Duhamel observed in 1919,” Adas went on to write,
“the Western obsession with inventing new tools and discovering new ways to force nature to support
material advancement for its own sake had inevitably led to the trench wasteland in which ‘man had
achieved this sad miracle of denaturing nature, of rendering it ignoble and criminal.’”

“As a number of intellectuals noted after the war,” Adas concluded, “the Europeans’ prewar associa-
tion of the future with progress and improvement was also badly shaken by the mechanization of
slaughter in the trenches.” He went on:

“Henry James’s poignant expression of the sense of betrayal that Europeans felt in the early
months of the war, when they realized that technical advance could lead to massive slaughter as
readily as to social betterment, was elaborated upon in the years after the war by such thinkers
as William Inge, who declared that the conflict had exposed the ‘law of inevitable progress’ as a
mere superstition. The Victorian mold, Inge declared, had been smashed by the war, and ‘the
gains of that age now seem to some of us to have been purchased too high, or even to be them-
selves of doubtful value.’ Science had produced perhaps the ugliest of civilizations; technologi-
cal marvels had been responsible for unimaginable destruction. Never again, he concluded,
would there be ‘an opportunity for gloating over this kind of improvement.’ The belief in
progress, the ‘working faith’ of the West for 150 years, had been forever discredited.”

What is most striking about all of this may be the fact that the effect was so short-lived. Narratives
of technological progress, as it turned out, were not altogether discredited. The 1933 World’s Fair in
Chicago, celebrating “a century of progress,” took as one of its mottos the line “Science Finds, Industry
Applies, Man Conforms.” Even after similar fears spiked once again following the Second World War,
especially in light of the development of atomic weapons, it would be only a matter of time before con-
cerns subsided and faith in technological progress reemerged. It’s hard to live without a myth, and it
seems that the myth of technological progress is the only one still kicking around at this late hour. It is also true, though, that ever since the Great War the embrace has never again been quite so earnest.

November 11, 2018


PART II

Technology, Ethics, and the Moral Life


18. Conscience of a Machine

Gary Marcus has predicted that within the next two to three decades we will enter an era “in which it will no longer be optional for machines to have ethical systems.” Marcus invites us to imagine the following driverless-car scenario: “Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?”

In this scenario, a variation of the trolley car problem, the computer operating the car would need to
make a decision (although I suspect putting it that way is an anthropomorphism). Were a human being
called upon to make such a decision, it would be considered a choice of moral consequence. Conse-
quently, writing about Marcus’ piece, Nicholas Carr concluded, “We don’t even really know what a
conscience is, but somebody’s going to have to program one nonetheless.”

Of course, there is a sense in which autonomous machines of this sort are not really ethical agents.
To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of
their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or
even moral intuition among human beings. They will do as they are programmed to do. The question
is, What will they be programmed to do in such circumstances? What ethical system will animate the
programming decisions? Will driverless cars be Kantians, obeying one rule invariably; or will they be
Benthamites, calculating the greatest good for the greatest number?

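To make the contrast concrete, here is a deliberately crude sketch, in Python, of what those two programming postures might look like. It is my own toy illustration, not anything drawn from Marcus or from any actual vehicle software; the function names, the probabilities, and the two-policy framing are all hypothetical.

    # A hypothetical illustration only; no real driving stack looks like this.

    def kantian_policy(scenario):
        # One rule, obeyed invariably, whatever the consequences:
        # the car never deliberately swerves.
        return "keep course"

    def benthamite_policy(scenario):
        # The greatest good for the greatest number: compare expected
        # harms and choose whichever action minimizes the total.
        harm_if_swerve = scenario["occupants"] * scenario["p_harm_if_swerve"]
        harm_if_keep = scenario["bystanders"] * scenario["p_harm_if_keep"]
        return "swerve" if harm_if_swerve < harm_if_keep else "keep course"

    # Marcus's bridge: one occupant, forty children on the bus (made-up risks).
    bridge = {"occupants": 1, "bystanders": 40,
              "p_harm_if_swerve": 0.9, "p_harm_if_keep": 0.5}

    print(kantian_policy(bridge))     # keep course
    print(benthamite_policy(bridge))  # swerve, since 1 x 0.9 < 40 x 0.5

The toy makes one thing plain: the Benthamite car’s answer flips whenever the numbers do, while the Kantian car’s never does, and nothing inside either function can adjudicate between the two. That dispute is settled, by someone, before any code runs.
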
There is an interesting sense, though, in which an autonomous machine of the sort envisioned in
these scenarios is an agent, even if we might hesitate to call it an ethical agent. What’s interesting is not
that a machine may cause harm or even death. We’ve been accustomed to this for generations. But in
such cases, a machine has ordinarily malfunctioned, or else some human action was at fault. In the scenarios proposed by Marcus, an action that causes harm would be the result of a properly functioning machine, and it would not have been the result of direct human action. The machine would have decided to take an
action that resulted in harm, even if it was in some sense the lesser harm. In fact, such machines might rightly be called the first machines whose harms could not be chalked up to malfunction.

There is little chance that our world will not one day be widely populated by autonomous machines
of the sort that will require a “conscience” or “ethical systems.” Determining what moral calculus should inform such “moral machines” is problematic enough. But there is another, more subtle danger
that should concern us.

Such machines seem to enter into the world of morally consequential action that until now has been occupied exclusively by human beings, but they do so without a capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any sort of meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.

We have an unfortunate tendency to adapt, under the spell of metaphor, our understanding of human
experience to the characteristics of our machines. Take memory for example. Having first decided, by
analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of
the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere
storage. So now we casually talk of offloading the work of memory or of Google being a better substi-
tute for human memory without any thought for how human memory is related to perception,
understanding, creativity, identity, and more.

I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by
which machines are programmed to make ethically significant decisions the machine’s “conscience,”
and then turn around, reverse the direction of the metaphor, and come to understand human conscience
by analogy to what the machine does. This would result in an impoverishment of the moral life.

Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral
decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral
machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the
right to be unhappy, to be troubled by fully realized human conscience?

This is, of course, not merely a matter of making the “right” decisions. Part of what makes program-
ming “ethical systems” troublesome is precisely our inability to arrive at a consensus about what is the
right decision in such cases. But even if we could arrive at some sort of consensus, the risk I’m envi-
sioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond the moment to a life burdened by the consequences of that action. It is
precisely this “living with” our decisions that a machine conscience cannot know.

In his Tragic Sense of Life, Miguel de Unamuno relates the following anecdote: “A pedant who beheld Solon weeping for the death of a son said to him, ‘Why do you weep thus, if weeping avails nothing?’ And the sage answered him, ‘Precisely for that reason–because it does not avail.’”

Were we to conform our conscience to the “conscience” of our future machines, we would cease to
shed such tears, and our humanity lies in Solon’s tears.

December 7, 2012


19. Perspectives on Privacy and Human Flourishing

I’ve not been able to track down the source, but somewhere Marshall McLuhan wrote, “Publication
is a self-invasion of privacy. The more the data banks record about each one of us, the less we exist.”

The unfolding NSA scandal has brought privacy front and center. A great deal is being written right
now about the ideal of privacy, the threats facing it from government activities, and how it might best
be defended. Conor Friedersdorf, for instance, worries that our government has built “all the infrastruc-
ture a tyrant would need.” At this juncture, the concerns seem to me neither exaggerated nor conspira-
torial.

Interestingly, there also seems to be a current of opinion that fails to see what all the fuss is about.
Part of this current stems from the idea that if you’ve got nothing to hide, there’s nothing to worry
about. There’s an excerpt from Daniel J. Solove’s 2011 book on just this line of reasoning in the
Chronicle of Higher Ed that is worth reading.

Others are simply willing to trade privacy for security. In a short suggestive post on creative ambi-
guity with regards to privacy and government surveillance, Tyler Cowen concedes, “People may even
be fine with that level of spying, if they think it means fewer successful terror attacks.” “But,” he im-
mediately adds, “if they acquiesce to the previous level of spying too openly, the level of spying on
them will get worse. Which they do not want.”

Maybe.

I wonder whether we are not witnessing the long foretold end of western modernity’s ideal of pri-
vacy. That sort of claim always comes off as a bit hyperbolic, but it’s not altogether misguided. If we
grant that the notion of individual privacy as we’ve known it is not a naturally given value but rather a
historically situated concept, then it’s worth considering both what factors gave rise to the concept and
how changing sociological conditions might undermine its plausibility.
Media ecologists have been addressing these questions for quite a while. They’ve argued that privacy, as we understand (understood?) it, emerged as a consequence of the kind of reading facilitated by
print. Privacy, in their view, is the concern of a certain type of individual consciousness that arises as a
by-product of the interiority fostered by reading. Print, in these accounts, is sometimes credited with an
unwieldy set of effects which include the emergence of Protestantism, modern democracy, the Enlight-
enment, and the modern idea of the individual. That print literacy is the sole cause of these develop-
ments is almost certainly not the case; that it is implicated in each is almost certainly true.

This was the view, for example, advanced by Walter Ong in Orality and Literacy. “[W]riting makes
possible increasingly articulate introspectivity,” Ong explains, “opening the psyche as never before not
only to the external objective world quite distinct from itself but also to the interior self against whom
the objective world is set.” Further on he wrote,

Print was also a major factor in the development of the sense of personal privacy that marks
modern society. It produced books smaller and more portable than those common in a manu-
script culture, setting the stage psychologically for solo reading in a quiet corner, and eventually
for completely silent reading. In manuscript culture and hence in early print culture, reading had
tended to be a social activity, one person reading to others in a group. As Steiner … has sug-
gested, private reading demands a home spacious enough to provide for individual isolation and
quiet.

This last point draws architecture into the discussion, as Aaron Bady noted in his 2011 essay for MIT Technology Review, “World Without Walls”:

Brandeis and Warren were concerned with the kind of privacy that could be afforded by walls:
even where no actual walls protected activities from being seen or heard, the idea of walls in-
formed the legal concept of a reasonable expectation of privacy. It still does … But contempo-
rary threats to privacy increasingly come from a kind of information flow for which the para-
digm of walls is not merely insufficient but beside the point.

This argument was also made by Marshall McLuhan who, like his student Ong, linked it to the
“coming of the book.” For his part, Ong concluded “print encouraged human beings to think of their
own interior conscious and unconscious resources as more and more thing-like, impersonal and reli-
giously neutral. Print encouraged the mind to sense that its possessions were held in some sort of inert
mental space.” Presumably, then, the accompanying assumption is that this thing-like inert mental
space is something to be guarded and shielded from intrusion.

While it is a letter, not a book that she reads, Vermeer’s Woman in Blue has always seemed to me a
fitting visual illustration of this media ecological perspective on the idea of privacy. The question all of this raises is obvious: What does the decline of the age of print entail for the idea of privacy? What happens when we enter what McLuhan called the “electric age” and Ong called the age of “secondary orality,” or what we might now call the “digital age”?

McLuhan and Ong seemed to think that the notion of privacy would be radically reconfigured, if not
abandoned altogether. One could easily read the rise of social media as further evidence in defense of
their conclusion. The public/private divide has been endlessly blurred. Sharing and disclosure are expected. So much so that those who do not acquiesce to the regime of voluntary and pervasive self-disclosure raise suspicions and may be judged sociopathic.

Perhaps, then, privacy is a habit of thought we may have fallen out of. This possibility was explored
in an extreme fashion by Josh Harris, the dot-com era Internet pioneer who subjected himself, and will-
ing others, to unblinking surveillance. The experiment in prophetic sociology was documented by di-
rector Ondi Timoner in the film We Live in Public.

The film is offered as a cautionary tale. Harris suffered an emotional and mental breakdown as a consequence of his experimental life. On the film’s website, Timoner added this about Harris’ girlfriend, who had enthusiastically signed up for the project: “She just couldn’t be intimate in public. And
I think that’s one of the important lessons in life; the Internet, as wonderful as it is, is not an intimate
medium. It’s just not. If you want to keep something intimate and if you want to keep something sa-
cred, you probably shouldn’t post it.”

This caught my attention because it introduced the idea of intimacy rather than, or in addition to,
that of privacy. As Solove argued in the piece mentioned above, we eliminate the rich complexity of all
that is gathered under the idea of privacy when we reduce it to secrecy or the ability to conceal socially
marginalized behaviors. Privacy, as Timoner suggests, can also be understood as the pre-condition of
intimacy, and, just to be clear, this should be understood as more than mere sexual intimacy.

The reduction of intimacy to sexuality recalls the popular mis-reading of the Fall narrative in the
Hebrew Bible. The description of the Edenic paradise concludes – unexpectedly until familiarity has
taught you to expect it – with the narrator’s passing observation that the primordial pair were naked and unashamed. A comment on sexual innocence, perhaps, but much more I think. It spoke to a radical
and fearless transparency born of pure guilelessness. The innocence was total and so, then, was the
openness and intimacy.

Of course, the point of the story is to set up the next tragic scene in which innocence is lost and the
immediate instinct is to cover their nakedness. Total transparency is now experienced as total vulnera-
bility, and this is the world in which we live. Intimacy of every kind is no longer a given. It must
emerge alongside hard-earned trust, heroic acts of forgiveness, and self-sacrificing love. And perhaps
with this realization we run up against the challenge of our digital self-publicity and the risks posed by
perpetual surveillance. The space for a full-fledged flourishing of the human person is being both sur-
rendered and withdrawn. The voluntarily and involuntarily public self is a self that operates under conditions which undermine the possibility of its own well-being.

But, this is also why I believe Bady is on to something when he writes, “Privacy has a surprising re-
silience: always being killed, it never quite dies.” It is why I’m not convinced that we could entirely re-
duce all that is entailed in the notion of privacy to a function of print literacy. If something that answers
to the name of privacy is a condition of our human flourishing in our decidedly un-Edenic condition,
then one hopes we will not relinquish it entirely to either the imperatives of digital culture or the
machinations of the state. It is, admittedly, a tempered hope.

June 9, 2013


20. When Silence Is Power

In The Human Condition, Hannah Arendt wrote, “What first undermines and then kills political
communities is loss of power and final impotence.” She went on to add, “Power is actualized only
where word and deed have not parted company, where words are not empty and deeds not brutal,
where words are not used to veil intentions but to disclose realities, and deeds are not used to violate
and destroy but to establish relations and create new realities.”

In our present media environment, the opposite of this formula may be closer to the truth, at least in
certain situations. In these cases, the refusal to speak is action. Silence is power.

The particular situation I have in view is the hijacking of public discourse (and consequently the po-
litical order) by the endless proliferation of manufactured news and fabricated controversy.

These pseudo-events are hyperreal. They are media events that exist as such only in so far as they
are spoken about. “To go viral” is just another way of describing the achievement of hyperreality. To
be “spoken about” is to be addressed within our communication networks. In a networked society, we
are the relays and hyperreality is an emergent property of our networked acts of communication.

Every interest that constitutes our media environment and media economy is invested in the perpetu-
ation of hyperreality.

Daily, these pseudo-events consume our attention and our mental and emotional energy. They feed
off of and inspire frustration, rage, despair, paranoia, revenge, and, ultimately, cynicism. It is a daily
boom/bust cycle of the soul.

Because they are constituted by speech, the pseudo-events are immune to critical speech. Speaking
of them, even to criticize them, strengthens them.

When speaking is the only perceived form of action–it is, after all, the only way of existing on our
social media networks–then that which thrives by being spoken about will persist.

How does one protest when acts of protest are consistently swallowed up by that which is being
protested? When the act of protest has the perverse effect of empowering that which is being protested?

Silence.

Silence is the only effective form of boycott. Traditional boycotts, the refusal to purchase goods or
patronize establishments, are ineffective against hyperreality. They are sucked up into the pseu-
do-events.

Finally, the practice of silence must be silent about itself.

Here the practice of subversive silence threatens to fray against the edge of our media environment.
When the self is itself constituted by acts of speech within the same network, then refusal to speak feels
like self-deprivation. And it is. Silence under these conditions is an ascetic practice, a denial of the self
that requires considerable discipline.

But if we are relays in the network, then self-sabotage becomes a powerful act of protest.

Perhaps the practice of this kind of self-imposed, unacknowledged silence may be the power that
helps resuscitate public discourse.

December 21, 2013


21. Troubles We Must Not Refuse

In an editorial at Wired, “Today’s Apps Are Turning Us Into Sociopaths,” Evan Selinger provides
an incisive critique of an app that promises to automate aspects of interpersonal relationships.

Selinger approached his piece by interviewing the app designers in order to understand the rationale
behind their product. This leads into an interesting and broad discussion about technological determin-
ism, technology’s relationship to society, and ethics.

I was particularly intrigued by how assumptions of technological inevitability were deployed. Take
the following, for example:

“Embracing this inevitability, the makers of BroApp argue that ‘The pace of technological
change is past the point where it’s possible for us to reject it!’”

And:

“’If there is a niche to be filled: i.e. automated relationship helpers, then entrepreneurs will act
to fill that niche. The combinatorial explosion of millions of entrepreneurs working with acces-
sible technologies ensures this outcome. Regardless of moral ambiguity or societal push-back,
if people find a technology useful, it will be developed and adopted.’”

It seems that these designers have a pretty bad case of the Borg Complex, my name for the rhetoric
of technological determinism. Recourse to the language of inevitability is the defining symptom of a
Borg Complex, but it is not the only one exhibited in this case.

According to Selinger, they also deploy another recurring trope: the dismissal of what are derisively
called “moral panics” based on the conclusion that they amount to so many cases of Chicken Little, and
the sky never falls. This is an example of another Borg Complex symptom: “Refers to historical an-
tecedents solely to dismiss present concerns.”

Selinger has identified an important area of concern, the increasing ease with which we may out-
source ethical and emotional labor to our digital devices, and he is helping us think clearly and wisely
about it.

About a year ago, Evgeny Morozov raised related concerns that prompted me to write about the
inhumanity of smart technology. A touch of hyperbole, perhaps, but I do think the stakes are high. I’ll
leave you with two observations I drew then that apply to this case as well.

The first:

“Out of the crooked timber of humanity no straight thing was ever made,” Kant observed.
Corollary to keep in mind: If a straight thing is made, it will be because humanity has been
stripped out of it.

The second relates to a distinction Albert Borgmann drew some time ago between troubles we ac-
cept in practice and those we accept in principle. Those we accept in practice are troubles we need to
cope with but which we should seek to eradicate; take cancer, for instance. Troubles we accept in prin-
ciple are those that we should not, even if we were able, seek to abolish. These troubles are somehow
essential to the full experience of our humanity and they are an irreducible component of those prac-
tices which bring us deep joy and satisfaction.

That’s a very short summary of a very substantial theory. I think Borgmann’s point is critical. It ap-
plies neatly to the apps Selinger has been analyzing. It also speaks to the temptations of smart technol-
ogy highlighted by Morozov, who rightly noted,

“There are many contexts in which smart technologies are unambiguously useful and even life-
saving. Smart belts that monitor the balance of the elderly and smart carpets that detect falls
seem to fall in this category. The problem with many smart technologies is that their designers,
in the quest to root out the imperfections of the human condition, seldom stop to ask how much
frustration, failure and regret is required for happiness and achievement to retain any meaning.”

From another angle, we can understand the problem as a misconstrual of the relationship between
means and ends. Technology, when it becomes something more than an assortment of tools, when it
becomes a way of looking at the world, technique in Jacques Ellul’s sense, fixates on means at the ex-
pense of ends. Technology is about how things get done, not what ought to get done or why. Conse-
quently, we are tempted to misconstrue means as ends in themselves, and we are also encouraged to
think of means as essentially interchangeable. We simply pursue the most efficient, effective means.
Period.

But means are not always interchangeable. Some means are integrally related to the ends that they
aim for. Altering the means undermines the end. The apps under consideration, and many of our digital
tools more generally, proceed on the assumption that means are, in fact, interchangeable. It doesn’t
matter whether you took the time to write out a message to your loved one or whether it was an auto-
mated app that only presents itself as you. So long as the end of getting your loved one a message is ac-
complished, the means matter not.

This logic is flawed precisely because it mistakes a means for an end and sees means as interchange-
able. The real end, of course, in this case anyway, is a loving relationship not simply getting a message
that fosters the appearance of a loving relationship. And the means toward that end are not easily inter-
changeable. The labor, or, to use Borgmann’s phrasing, the trouble required by the fitting means cannot be outsourced or eliminated without fatally undermining the goal of a loving relationship.

That same logic plays out across countless cases where a device promises to save us or unburden us
from moral and emotional troubles. It is a dehumanizing logic.

February 28, 2014


22. Just Livin' To Be Heard

I don’t ordinarily take my cues from John Mellencamp’s lyrics, but … consider:

“A million young poets
Screamin’ out their words
To a world full of people
Just livin’ to be heard”

The story of modernity could be neatly arranged around the theme of voice. As modernity unfolded,
more and more people found a way to be heard; they found their voice, often at great cost and sacrifice.
Protestantism gave a voice to the laity. Democratic movements gave a voice to the citizen. Labor
movements gave a voice to the worker. The women’s movement gave a voice to women. Further ex-
amples come readily to mind, but you get the point. Along the way certain technologies played a criti-
cal role in this expansion and proliferation of voice. One need only consider print’s relationship to the
Reformation and the Enlightenment.

Of course, this way of telling the story will strike some as rather whiggish, and perhaps rightly so. It
is simplistic, certainly. And yet it does seem plain enough that more people, and a greater diversity of
people, now have, not only the freedom to speak, but the means to do so as well.

Much of the rhetoric that surrounded the advent of the World Wide Web, particularly in its 2.0 itera-
tion, triumphantly characterized the Internet as the consummation of this trajectory of empowerment. It
was wildly utopian rhetoric; and although the utopian hope has not yet been realized, it is nonetheless
true that blogs and social media have at least made it possible for a very great number of people to
speak publicly, and sometimes with great (if ephemeral) consequence.

More significantly, it seems to me, the use of social media nurtures the impulse to speak. The plat-
forms by their very design encourage users to speak often and continually. They feature mechanisms of
response that act not unlike little Pavlovian pellets of affirmation to keep us speaking in the hopes that
Likes and retweets and comments will follow.

We are all learning to “speak in memes,” as Nathan Jurgenson has recently put it. What is most sig-
nificant about this may be the assumption that we will speak. We are conditioning our speech to fit the
medium, but that we will speak is no longer in question.

Fine. Well and good. Speak, and speak truthfully and boldly — at least interestingly.

But what about hearing and listening? While we have been vigorously enlarging our voices, we ap-
pear to have neglected the art of listening. We want to be heard, of course, but are we as intent on lis-
tening? Do we desire to understand as ardently as we desire to be understood?

There is an art to listening, one might even say that it is a kind of virtue. At the very least it requires
certain virtues. Patience certainly, and humility as well. One might even say courage, for what you
learn when you listen may very well threaten beliefs and convictions that are very dear and defining. In
any case, listening is not easy or even natural. It is a discipline. It must be cultivated with great care and
it requires, to some degree, a willingness to still the impulse to speak.

It requires as well a wanton disregard for the pace of Internet time. Internet time demands near in-
stantaneous responses; but listening sometimes takes more time than that afforded by the meme cycle.

Often listening depends on silence and deep, unbroken attentiveness.

Listening, honest listening happens when there is tacit permission to be silent in response. Other-
wise, listening is overwhelmed by the pressure to formulate a response. And, of course, if, while I am
ostensibly listening, I am only thinking about what to say in response, I’m probably not really listening
— more like reloading.

“We’re living to be heard” — I suspect there is something rather profound about that observation.
Perhaps Mellencamp spoke better than he knew. It seems to me, though, that those who would be heard ought also to hear. We have our voices, but only if we learn to listen in equal measure will this ever
mean a thing.

November 4, 2012


23. The Transhumanist Promise: Happiness You Cannot Refuse
Transhumanism, a diverse movement aimed at transcending our present human limitations, contin-
ues to gravitate away from the fringes of public discussion toward the mainstream. It is an idea that, to
many people, is starting to sound less like a wildly unrealistic science-fiction concept and more like a
vaguely plausible future. I imagine that as the prospect of a transhumanist future begins to take on the
air of plausibility, it will both exhilarate and mortify in roughly equal measure.

Recently, Jamie Bartlett wrote a short profile of the transhumanist project near the conclusion of
which he observed, “Sometimes Tranhumanism [sic] does feel a bit like modern religion for an individ-
ualistic, technology-obsessed age.” As I read that line, I thought to myself, “Sometimes?”

To be fair, many transhumanists would be quick to flash their secular bona fides, but it is not too much of a stretch to say that the transhumanist movement traffics in the religious, quasi-religious, and
mystical. Peruse, for example, the list of speakers at last year’s Global Future 2045 conference. The
year 2045, of course, is the predicted dawn of the Singularity, the point at which machines and humans
become practically indistinguishable.

In its aspirations for transcendence of bodily limitations, its pursuit of immortality, and its promise
of perpetual well-being and the elimination of suffering, Transhumanism undeniably incorporates tradi-
tionally religious ambitions and desires. It is, in other words, functionally analogous to traditional reli-
gions, particularly the Western, monotheistic faiths. If you’re unfamiliar with the movement and are
wondering whether I might have exaggerated their claims, I invite you to watch a video introduction to Transhumanism put together by the British Institute of Posthuman Studies (BIOPS).

This clip amounts to a particularly robust instance of what the historian David Noble called “the religion of technology.” Noble’s work highlighted the long-standing entanglement of religious aspirations with the development of the Western technological project. The manifestation of the religion of
technology apparent in the Transhumanist movement betrays a distinctly gnostic pedigree. Transhu-
manist rhetoric is laced with a palpable contempt for humanity in its actual state, and the contempt is
directed with striking animus at the human body. Referring to the human body derisively as a “meat
sack” or “meat bag” is a common trope among the more excitable transhumanists. As Katherine Hayles
has put it, in Transhumanism bodies are “fashion accessories rather than the ground of being.”

In any case, the BIOPS video not too subtly suggests that Christianity has been one of the persistent
distractions keeping us from viewing aging as we should, not as a “natural” aspect of the human condi-
tion, but as a disease to be combatted. This framing may convey an anti-religious posture, but what
emerges on balance is not a dismissal of the religious aims, but rather the claim that they may be better
realized through other, more effective means. The Posthumanist promise, then, is the promise of what
the political philosopher Eric Voegelin called the immanentized eschaton. The traditional religious cat-
egory for this is idolatry with a healthy sprinkling of classical Greek hubris for good measure.

After discussing “super-longevity” and “super-intelligence,” the BIOPS video goes on to discuss
“super well-being.” This part of the video begins at the seven-minute mark, and it expresses some of
the more troubling aspects of the Transhumanist vision, at least as embraced by this particular group.
This third prong of the Transhumanist project seeks to “phase out suffering.” The segment begins by
asking viewers to imagine that as parents they had the opportunity to opt their child out of “chronic de-
pression,” a “low pain threshold,” and “anxiety.” Who would choose these for their own children? Of
course, the implicit answer is that no well-meaning, responsible parent would. We all remember
Gattaca, right?

A robust challenge to the Transhumanist vision is well beyond the scope of this blog post, but it is a challenge that needs to be carefully and thoughtfully articulated. For the present, I’ll leave you with a
few observations.

First, the nature of the risks posed by the technologies Posthumanists are banking on is not that of a
single, clearly destructive cataclysmic accident. Rather, the risk is incremental and not ever obviously
destructive. It takes on the character of the temptation experienced by the main character, Pahom, in
Leo Tolstoy’s short story, “How Much Land Does a Man Need?” If you’ve never read the story, you
should. In the story Pahom is presented with the temptation to acquire more and more land, but Tolstoy
never paints Pahom as a greedy Ebenezer Scrooge type. Instead, at the point of each temptation, it ap-
pears perfectly rational, safe, and good to seize an opportunity to acquire more land. The end of all of
these individual choices, however, is finally destructive.

Second, these risks are a good illustration of the ethical challenges posed by innovation. These
risks would be socially distributed, but unevenly and possibly even unjustly so. In other words, tech-
nologies of radical human enhancement (we’ll allow that loaded descriptor to slide for now) would
carry consequences for both those who chose such enhancements and also for those who did not or
could not. This problem is not, however, unique to these sorts of technologies. We generally lack ade-
quate mechanisms for adjudicating the socially distributed risks of technological innovation. (To be
clear, I don’t pretend to have any solutions to this problem.) We tolerate this because we generally tend
to assume that, on balance, the advance of technology is a tide that lifts all ships even if not evenly so.
Additionally, given our anthropological and political assumptions, we have a hard time imagining a no-
tion of the common good that might curtail individual freedom of action.

Lastly, the Transhumanist vision assumes a certain understanding of happiness when it speaks of the
promise of “super well-being.” Happiness here seems to be narrowly equated with the absence of suffering.
But it is not altogether obvious that this is the only or best way of understanding the perennially elusive
state of affairs that we call happiness. The committed Transhumanist seems to lack the imagination to
conceive of alternative pursuits of happiness, particularly those that encompass and incorporate certain
forms of suffering and tribulation. But that will not matter.

In the Transhumanist future one path to happiness will be prescribed. It will be objected that this path will be offered, not prescribed, but, of course, this is disingenuous because in this vision the tech-
nologies of enhancement confer not only happiness narrowly defined but power as well. As Gary Mar-
cus and Christof Koch recently noted in their discussion of brain implants, “The augmented among us
—those who are willing to avail themselves of the benefits of brain prosthetics and to live with the at-
tendant risks—will outperform others in the everyday contest for jobs and mates, in science, on the ath-
letic field and in armed conflict.” Those who opt out will be choosing to be disadvantaged and
marginalized. This may be a choice, but not one without a pernicious strain of tacit coercion.

Years ago, just over seventy years ago in fact, C.S. Lewis anticipated what he called the abolition of
man. The abolition of man would come about when science and technology found that the last frontier
in the conquest of nature was humanity itself. “Human nature will be the last part of Nature to surren-
der to Man,” Lewis warned, and when it did a caste of Conditioners would be in the position to “cut out
posterity in what shape they please.” Humanity, in other words, would become the unwilling subject of
these Last Men and their final decisive exercise of the will to power over nature, the power to shape
humanity in their own image.

Even as I write this, there is part of me that thinks this all sounds so outlandish, and that even to
warn of it is an unseemly alarmism. After all, while some of the touted technologies appear to be
within reach, many others seem to be well out of reach, perhaps forever so. But, then, I consider that
many terrible things once seemed impossible and it may have been their seeming impossibility that
abetted their eventual realization. Or, from a more positive perspective, perhaps it is sometimes the ar-
ticulation of the seemingly far-fetched dangers and risks that ultimately helps us steer clear of them.

April 5, 2014


24. Are Human Enhancement and AI Incompatible?

An essay adapted from Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, ap-
peared on Slate’s “Future Tense” blog with the cheerfully straightforward title, “You Should Be
Terrified of Super Intelligent Machines.”

I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to
Bostrom’s article, “A History of Transhumanist Thought.” For his part, Wolfe sought to articulate a
more persistently posthumanist vision for posthumanism, one which dispensed with humanist assump-
tions about human nature altogether. Wolfe argued that Bostrom was guilty of building his transhu-
manist vision on a thoroughly humanist understanding of the human being. The humanism in view
here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlighten-
ment, one which highlights autonomous individuality, agency, and rationality. It is also one which as-
sumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds
this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside
the facile assumption that super-intelligent machines will be super-intelligent in a predictably human
way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that su-
per-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinc-
tion. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we
might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or
limiting the work being done to create super-intelligent machines. In fact, judging from the chapter ti-
tles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill ap-
propriate values in super-intelligent machines. This brings us back to the line of criticism articulated by
Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underly-
ing reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human en-
hancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate
indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform
for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and de-
cided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward
the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and
artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration,
and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most
ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it
might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would even-
tually amount to the human pursuit of mastery over Humanity, and what this would really mean is the
mastery of some humans over others. This argument is all the more compelling now, some 70 or so
years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of
that argument would need to include the further possibility that the tools we develop to gain mastery
over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable
juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The
posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision
that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of some-
one who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly
Western religious project insofar as it envisions and longs for the immortality of the individuated
self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-
rationality if you prefer, at the heart of our most rational project. Technology and technical systems as-
sume rationality in their construction and their operation. Thinking about their potential risks and trying
to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this ra-
tional work there is a colossal unspoken absence: there is a black hole of knowledge that, beginning
with the simple fact of our inability to foresee the full ramifications of anything that we do or make,
subsequently sucks into its darkness our ability to expertly anticipate and plan and manage with any-
thing like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools
and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex techni-
cal systems (automotive transportation, power grids, etc.). It is altogether something else when we are
talking about technical systems that may fundamentally alter our humanity or else eventuate in its anni-
hilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot
comfortably distinguish between what is still science fiction and what will, in fact, materialize in our
lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or inef-
fectual because it is starting from the wrong place; or, to put it another way, it is already proceeding
from assumptions grounded in the dynamics of technology and technical systems, so it bends back to-
ward the technological solution. If we already tacitly value efficiency, for example, if efficiency is al-
ready an assumed good that no longer needs to be argued for, then we will tend to pursue it by what-
ever possible means under all possible circumstances. Whenever new technologies appear, we will
judge them in light of this governing preference for efficiency. If the new technology affords us a more
efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the
answer seems commonsensical, then I’d humbly suggest that we need to examine it all the more criti-
cally. Perhaps we will find that we value efficiency because this virtue native to the working of techni-
cal and instrumental systems has spilled over into what had previously been non-technical and non-in-
strumental realms of human experience. Our thinking is thus already shaped (to put it in the most neu-
tral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in
large measure on our ability to extricate our thinking from the criteria and logic native to technological
systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of
thought myself, but I do believe that our thinking about technology depends on it.

September 13, 2014


25. Do Artifacts Have Ethics?

Writing about “technology and the moral dimension,” tech writer and Gigaom founder Om Malik
made the following observation:

“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we
don’t understand the moral imperative of what we do. It is not that all players are bad; it is just
not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’
are.”

I’m not sure how many people in the tech industry would concur with Malik’s claim, but it is a re-
markably telling admission from at least one well-placed individual. Happily, Malik realized that “it is
time to add an emotional and moral dimension to products.” But what exactly does it mean to add an
emotional and moral dimension to products?

Malik’s own ensuing discussion is brief and deals chiefly with using data ethically and producing
clear, straightforward terms of service. This suggests that Malik is mostly encouraging tech companies
to treat their customers in an ethically responsible manner. If so, it’s rather disconcerting that Malik
takes this to be a discovery that he feels compelled to announce, prophetically, to his colleagues. Leav-
ing that unfortunate indictment of the tech community aside, I want to suggest that there is no need to
add a moral dimension to technology.

Years ago, Langdon Winner famously asked, “Do artifacts have politics?” In the article that bears
that title, Winner went on to argue that they most certainly do. We might also ask, “Do artifacts have
ethics?” I would argue that they do, indeed. The question is not whether technology has a moral dimen-
sion, the question is whether we recognize it or not. In fact, technology’s moral dimension is in-
escapable, layered, and multi-faceted.

When we do think about technology’s moral implications, we tend to think about what we do with a
given technology. We might call this the “guns don’t kill people, people kill people” approach to the
ethics of technology. What matters most about a technology on this view is the use to which it is put.
This is, of course, a valid consideration. A hammer may indeed be used to either build a house or bash
someone’s head in. On this view, technology is morally neutral and the only morally relevant question
is this: What will I do with this tool?

But is this really the only morally relevant question one could ask? For instance, pursuing the exam-
ple of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the
world around me? Or ask what feelings having a hammer in hand arouses?

Below are a few other questions that we might ask in order to get at the wide-ranging “moral dimen-
sion” of our technologies. There are, of course, many others that we could ask, but this is a start.

1. What sort of person will the use of this technology make of me?
2. What habits will the use of this technology instill?
3. How will the use of this technology affect my experience of time?
4. How will the use of this technology affect my experience of place?
5. How will the use of this technology affect how I relate to other people?
6. How will the use of this technology affect how I relate to the world around me?
7. What practices will the use of this technology cultivate?
8. What practices will the use of this technology displace?
9. What will the use of this technology encourage me to notice?
10. What will the use of this technology encourage me to ignore?
11. What was required of other human beings so that I might be able to use this technology?
12. What was required of other creatures so that I might be able to use this technology?
13. What was required of the earth so that I might be able to use this technology?
14. Does the use of this technology bring me joy?
15. Does the use of this technology arouse anxiety?
16. How does this technology empower me? At whose expense?
17. What feelings does the use of this technology generate in me toward others?
18. Can I imagine living without this technology? Why, or why not?
19. How does this technology encourage me to allocate my time?
20. Could the resources used to acquire and use this technology be better deployed?
21. Does this technology automate or outsource labor or responsibilities that are morally essential?
22. What desires does the use of this technology generate?
23. What desires does the use of this technology dissipate?
24. What possibilities for action does this technology present? Is it good that these actions are
now possible?
25. What possibilities for action does this technology foreclose? Is it good that these actions are no
longer possible?
26. How does the use of this technology shape my vision of a good life?
27. What limits does the use of this technology impose upon me?
28. What limits does my use of this technology impose upon others?
29. What does my use of this technology require of others who would (or must) interact with me?
30. What assumptions about the world does the use of this technology tacitly encourage?
31. What knowledge has the use of this technology disclosed to me about myself?
32. What knowledge has the use of this technology disclosed to me about others? Is it good to have
this knowledge?
33. What are the potential harms to myself, others, or the world that might result from my use of
this technology?
34. Upon what systems, technical or human, does my use of this technology depend? Are these sys-
tems just?
35. Does my use of this technology encourage me to view others as a means to an end?
36. Does using this technology require me to think more or less?
37. What would the world be like if everyone used this technology exactly as I use it?
38. What risks will my use of this technology entail for others? Have they consented?
39. Can the consequences of my use of this technology be undone? Can I live with those conse-
quences?
40. Does my use of this technology make it easier to live as if I had no responsibilities toward my
neighbor?
41. Can I be held responsible for the actions which this technology empowers? Would I feel better
if I couldn’t?

November 29, 2014


26. Lethal Autonomous Weapons and Thoughtlessness

In the mid-twentieth century, Hannah Arendt wrote extensively about the critical importance of
learning to think in the aftermath of a great rupture in our tradition of thought. She wrote of the desper-
ate situation when “it began to dawn upon modern man that he had come to live in a world in which his
mind and his tradition of thought were not even capable of asking adequate, meaningful questions, let
alone of giving answers to its own perplexities.”

Frequently, Arendt linked this rupture in the tradition, this loss of a framework that made our think-
ing meaningful, to the appearance of totalitarianism in the early twentieth century. But she also recog-
nized that the tradition had by then been unravelling for some time, and technology played a not in-
significant role in this unraveling and the final rupture. In “Tradition and the Modern Age,” for exam-
ple, she argues that the “priority of reason over doing, of the mind’s prescribing its rules to the actions
of men” had been lost as a consequence of “the transformation of the world by the Industrial Revolu-
tion–a transformation the success of which seemed to prove that man’s doings and fabrications pre-
scribe their rules to reason.”

Moreover, in the Prologue to The Human Condition, after reflecting on Sputnik, computer automa-
tion, and the pursuit of what we would today call bio-engineering, Arendt worried that our Think-
ing would prove inadequate to our technologically-enhanced Doing. “If it should turn out to be true,”
she added, “that knowledge (in the modern sense of know-how) and thought have parted company for
good, then we would indeed become the helpless slaves, not so much of our machines as of our know-
how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how
murderous it is.”

That seems as good an entry as any into a discussion of Lethal Autonomous Robots. A
short Wired piece on the subject has been making the rounds the past day or two with the rather
straightforward title, “We Can Now Build Autonomous Killing Machines. And That’s a Very, Very
Bad Idea.” The story takes as its point of departure the recent pledge on the part of a robotics company,
Clearpath Robotics, never to build “killer robots.”

Clearpath’s Chief Technology Officer, Ryan Gariepy, explained the decision: “The potential for
lethal autonomous weapons systems to be rolled off the assembly line is here right now, but the poten-
tial for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an
ethical way is not, and is nowhere near ready.”

Not everyone shares Gariepy’s trepidation. Writing for the blog of the National Defense Industrial
Association, Sarah Sicard discussed the matter with Ronald Arkin, a dean at Georgia Tech’s School of
Interactive Computing. “Unless regulated by international treaties,” Arkin believes, “lethal autonomy is
inevitable.”

It’s worth pausing for a moment to explore the nature of this claim. It’s a Borg Complex claim, of
course, although masked slightly by the conditional construction, but that doesn’t necessarily make it
wrong. Indeed, claims of inevitability are especially plausible in the context of military technology, and
it’s not hard to imagine why. Even if one nation entertained ethical reservations about a certain technol-
ogy, they could never assure themselves that other nations would share their qualms. Better then to set
their reservations aside than to be outpaced on the battlefield with disastrous consequences. The force
of the logic is compelling. In such a case, however, the inevitability, such as it is, does not reside in the
technology per se; it resides in human nature. But even to put it that way threatens to obscure the fact
that choices are being made and that they could be made otherwise. The example set by Clearpath Robotics, a conscious decision to forgo research and development on principle, only reinforces this conclusion.

But Arkin doesn’t just believe the advent of Lethal Autonomous Robots to be inevitable; he seems
to think that it will be a positive good. Arkin believes that human beings are the “weak link” in the
“kill chain.” The question for roboticists is this: “Can we find out ways that can make them outperform
human warfighters with respect to ethical performance?” Arkin appears to be fairly certain that the an-
swer will be a rather uncomplicated “yes.”

For a more complicated look at the issue, consider the report (PDF) on Lethal Autonomous
Weapons presented to the UN’s Human Rights Council by special rapporteur, Christof Heyns. The re-
port was published in 2013 and first brought to my attention by a post on Nick Carr’s blog. The report
explores a variety of arguments for and against the development and deployment of autonomous
weapons systems and concludes, “There is clearly a strong case for approaching the possible introduc-
tion of LARs with great caution.” It continues:

“If used, they could have far-reaching effects on societal values, including fundamentally on the
protection and the value of life and on international stability and security. While it is not clear at
present how LARs could be capable of satisfying IHL and IHRL requirements in many re-
spects, it is foreseeable that they could comply under certain circumstances, especially if used
alongside human soldiers. Even so, there is widespread concern that allowing LARs to kill peo-
ple may denigrate the value of life itself.”

Among the more salient observations made by the report is this note of concern about unin-
tended consequences:

“Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals,
the national public may over time become increasingly disengaged and leave the decision to use
force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of
armed conflict. LARs may thus lower the threshold for States for going to war or otherwise us-
ing lethal force, resulting in armed conflict no longer being a measure of last resort.”

As with the concern about the denigration of the value of life itself, this worry about the normaliza-
tion of armed conflict is difficult to empirically verify (although US drone operations in Afghanistan,
Pakistan, and the Arabian peninsula are certainly far from irrelevant to the discussion). Consequently,
such considerations tend to carry little weight when the terms of the debate are already compromised
by technocratic assumptions regarding what counts as compelling reasons, proofs, or evidence.

Such assumptions appear to be all that we have left to go on in light of the rupture in the tradition of thought that Arendt described. Or, to put that a bit more precisely, it may not be all that we have left, but it
is what we have gotten. We have precious little to fall back on when we begin to think about what we
are doing when what we are doing involves, for instance, the fabrication of Lethal Autonomous Ro-
bots. There are no customs of thought and action, no traditions of justice, no culturally embodied wis-
dom to guide us, at least not in any straightforward and directly applicable fashion. We are thinking
without a banister, as Arendt put it elsewhere, if we are thinking at all.

Perhaps it is because I have been reading a good bit of Arendt lately, but I’m increasingly struck by
situations we encounter, both ordinary and extraordinary, in which our default problem-solving,
cost/benefit analysis mode of thinking fails us. In such situations, we must finally decide what is unde-
cidable and take action, action for which we can be held responsible, action for which we can only
hope for forgiveness, action made meaningful by our thinking.

Arendt distinguished this mode of thinking, that which seeks meaning and is a ground for action,
from that which seeks to know with certainty what is true. This helps explain, I believe, what she meant
in the passage I cited above when she feared that we would become “thoughtless” and slaves to our
“know-how.” We are in such cases calculating and measuring, but not thinking, or willing, or judging.
Consequently, under such circumstances, we are also perpetually deferring responsibility.

Considered in this light, Lethal Autonomous Weapons threaten to become a symbol of our age; not
in their clinical lethality, but in their evacuation of human responsibility from one of the most profound
and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we
will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mis-
takes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held account-
able and they cannot be forgiven.

We cannot know, neither exhaustively nor with any degree of certainty, what the introduction of
Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-sci-
entific thinking. In the absence of such certainty, because we do not seem to know how to think or
judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal ne-
cessity.

February 8, 2015


27. Attention and the Moral Life

I’ve continued to think about a question raised by Frank Furedi in an otherwise lackluster essay
about distraction and digital devices. Furedi set out to debunk the claim that digital devices are under-
mining our attention and our memory. I don’t think he succeeded, but he left us with a question worth
considering: “The question that is rarely posed by advocates of the distraction thesis is: what are people
distracted from?”

This question can be usefully set alongside a mid-20th century observation by Hannah Arendt. Con-
sidering the advent of automation, Arendt feared “the prospect of a society of laborers without labor,
that is, without the only activity left to them.” “Surely, nothing could be worse,” she added.

The connection might not have been as clear as I imagined it, so let me explain. Arendt believed that
labor is the “only activity left” to the laborer because the glorification of labor in modern society had
eclipsed the older ends and goods to which labor had been subordinated and for the sake of which we
might have sought freedom from labor.

To put it as directly as I can, Arendt believed that if we indeed found ourselves liberated from the
need to labor, we would not know what to do with ourselves. We would not know what to do with our-
selves because, in the modern world, laboring had become the ordering principle of our lives.

Recalling Arendt’s fear, I wondered whether we were not in a similar situation with regards to atten-
tion. If we were able to successfully challenge the regime of digital distraction, to what would we give
the attention that we would have fought so hard to achieve? Would we be like the laborers in Arendt’s
analysis, finally free but without anything to do with our freedom? I wondered, as well, if it were not
harder to combat distraction, if we were inclined to do so, precisely because we had no telos for the
sake of which we might undertake the struggle.

Interestingly, then, while the link between Arendt’s comments about labor and the question about
the purpose of attention was initially only suggestive, I soon realized the two were more closely con-
nected. They were connected by the idea of leisure.

We tend to think of leisure merely as an occasional break from work. That is not, however,
how leisure was understood in either classical or medieval culture. Josef Pieper, a Catholic philosopher
and theologian, was thinking about the cultural ascendency of labor or work and the eclipse of leisure
around the same time that Arendt was articulating her fears of a society of laborers without labor. In
many respects, their analysis overlaps. (I should note, though, that Arendt distinguishes between labor
and work in a way that Pieper does not. Work for Pieper is roughly analogous to labor in Arendt’s taxon-
omy.)

For her part, Arendt believed nothing could be worse than liberating laborers from labor at this stage
in our cultural evolution, and this is why:

“The modern age has carried with it a theoretical glorification of labor and has resulted in a fac-
tual transformation of the whole of society into a laboring society. The fulfillment of the wish,
therefore, like the fulfillment of wishes in fairy tales, comes at a moment when it can only be
self-defeating. It is a society of laborers which is about to be liberated from the fetters of labor,
and this society does no longer know of those other higher and more meaningful activities for
the sake of which this freedom would deserve to be won. Within this society, which is egalitar-
ian because this is labor’s way of making men live together, there is no class left, no aristocracy
of either a political or spiritual nature from which a restoration of the other capacities of man
could start anew.”

To say that there is “no aristocracy of either a political or spiritual nature” is another way of saying
that there is no leisured class in the older sense of the word. This older ideal of leisure did not entail
freedom from labor for the sake of endless poolside lounging while sipping Coronas. It was freedom
from labor for the sake of intellectual, political, moral, or spiritual aims, the achievement of which may
very well require arduous discipline. We might say that it was freedom from the work of the body that
made it possible for someone to take up the work of the soul or the mind. Thus Pieper can claim that
leisure is “a condition of the soul.” But, we should also note, it was not necessarily a solitary endeavor,
or, better, it was not an endeavor that had only the good of the individual in mind. It often involved ser-
vice to the political or spiritual community.

Pieper further defines leisure as “a form of that stillness that is the necessary preparation for accept-
ing reality; only the person who is still can hear, and whoever is not still cannot hear.” He makes clear,
though, that the stillness he has in mind “is not mere soundlessness or a dead muteness; it means,
rather, that the soul’s power, as real, of responding to the real – a co-respondence, eternally established
in nature – has not yet descended into words.” Thus, leisure “is the disposition of receptive understand-
ing, of contemplative beholding, and immersion – in the real.”

Pieper also claims that leisure “is only possible on the assumption that man is not only in harmony
with himself, whereas idleness is rooted in the denial of this harmony, but also that he is in agreement
with the world and its meaning. Leisure lives on affirmation.” The passing comment on idleness is es-
pecially useful to us.

In our view, leisure and idleness are nearly indistinguishable. But on the older view, idleness is not
leisure; indeed, it is the enemy of leisure. Idleness, on the older view, may even take the shape of fren-
zied activity undertaken for the sake of, yes, distracting us from the absence of harmony or agreement
with ourselves and the world.

We are now inevitably within the orbit of Blaise Pascal’s analysis of the restlessness of the human
condition. Because we are not at peace with ourselves or our world, we crave distraction or what he
called diversions. “What people want,” Pascal insists, “is not the easy peaceful life that allows us to
think of our unhappy condition, nor the dangers of war, nor the burdens of office, but the agitation that
takes our mind off it and diverts us.” “Nothing could be more wretched,” Pascal added, “than to be in-
tolerably depressed as soon as one is reduced to introspection with no means of diversion.”

The novelist Walker Percy, a younger contemporary of both Arendt and Pieper, described what he called the “diverted self” as follows: “In a free and affluent society, the self is free to divert itself end-
lessly from itself. It works in order to enjoy the diversions that the fruit of one’s labor can purchase.”
For the diverted self, Percy concluded, “The pursuit of happiness becomes the pursuit of diversion.”

If leisure is a condition of the soul as Pieper would have it, then might we also say the same of dis-
traction? Discrete instances of being distracted, of failing to meaningfully direct our attention, would
then be symptoms of a deeper disorder. Our digital devices, in this framing of distraction, are both a
material cause and an effect. The absence of digital devices would not cure us of the underlying dis-
tractedness or aimlessness, but their presence preys upon, exacerbates, and amplifies this inner distract-
edness.

It is hard, at this point, for me not to feel that I have been speaking in another language or at least
another dialect, one whose cadences and lexical peculiarities are foreign to our own idiom and, conse-
quently, to our way of making sense of our experience. Leisure, idleness, contemplative beholding,
spiritual and political aristocracies–all of this recalls to mind Alasdair MacIntyre’s observation that we
use such words in much the same way that a post-apocalyptic society, picking up the scattered pieces
of the modern scientific enterprise, would use “neutrino,” “mass,” and “specific gravity”: not entirely
without meaning, perhaps, but certainly not as scientists. The language I’ve employed, likewise, is the
language of an older moral vision, a moral vision that we have lost.

I’m not suggesting that we ought to seek to recover the fullness of the language or the world that
gave it meaning. That would not be possible, of course. But what if we, nonetheless, desired to bring a
measure of order to the condition of distraction that we might experience as an affliction? What if we
sought some telos to direct and sustain our attention, to at least buffer us from the forces of distraction?

If such is the case, I commend to you Simone Weil’s reflections on attention and will. Believing that
the skill of paying attention cultivated in one domain was transferable to another, Weil went so far as to
claim that the cultivation of attention was the real goal of education: “Although people seem to be un-
aware of it today, the development of the faculty of attention forms the real object and almost the sole
interest of studies.”

It was Weil who wrote, “Attention is the rarest and purest form of generosity.” A beautiful senti-
ment grounded in a deeply moral understanding of attention. Attention, for Weil, was not merely an in-
tellectual asset, what we require for the sake of reading long, dense novels. Rather, for Weil, attention
appears to be something foundational to the moral life:

“There is something in our soul that loathes true attention much more violently than flesh
loathes fatigue. That something is much closer to evil than flesh is. That is why, every time we
truly give our attention, we destroy some evil in ourselves.”

Ultimately, Weil understood attention to be a critical component of the religious life as well. “Atten-
tion, taken to its highest degree,” Weil wrote, “is the same thing as prayer. It presupposes faith and
love.” “If we turn our mind toward the good,” she added, “it is impossible that little by little the whole
soul will not be attracted thereto in spite of itself.” And this is because, in her view, “We have to try to
cure our faults by attention and not by will.”

So here we have, if we wanted it, something to animate our desire to discipline the distracted self,
something at which to direct our attention. Weil’s counsel was echoed closer to our own time by David
Foster Wallace, who also located the goal of education in the cultivation of attention.

“Learning how to think really means learning how to exercise some control over how and what you
think,” Wallace explained in his now famous commencement address at Kenyon College. “It means be-
ing conscious and aware enough to choose what you pay attention to and to choose how you construct
meaning from experience.”

“The really important kind of freedom,” Wallace added, “involves attention and awareness and dis-
cipline, and being able truly to care about other people and to sacrifice for them over and over in myr-
iad petty, unsexy ways every day. That is real freedom. That is being educated, and understanding how
to think.” Each day the truth of this claim impresses itself more and more deeply upon my mind and
heart.

Finally, and briefly, we should be wary of imagining the work of cultivating attention as merely a
matter of learning how to consciously choose what we will attend to at any given moment. That is part
of it to be sure, but Weil and Pieper both knew that attention also involved an openness to what is, a ca-
pacity to experience the world as gift. Cultivating our attention in this sense is not a matter of focusing
upon an object of attention for our own reasons, however noble those may be. It is also a matter of set-
ting to one side our projects and aspirations that we might be surprised by what is there. “We do not
obtain the most precious gifts by going in search of them,” Weil wrote, “but by waiting for them.” In
this way, we prepare for “some dim dazzling trick of grace,” to borrow a felicitous phrase from Walker
Percy, that may illumine our minds and enliven our hearts.

It is these considerations, then, that I would offer in response to Furedi’s question, What are we dis-
tracted from?

September 13, 2016


28. To Act, or Not to Act On Social Media

In November 2016, The Atlantic posted a video showing an audience of two-hundred or so reacting
fervently, some with Nazi salutes, when Richard Spencer came on stage and proclaimed, “Hail Trump,
hail our people, hail victory!” Spencer is a leading figure in a movement with white-nationalist ele-
ments, which he successfully branded as the “alt-right.”

This is, of course, reprehensible; of that there can be no doubt. But what to make of it or, better yet,
what to do about it? How to respond? I do not ask that in the abstract, for in the abstract, there are nu-
merous possibilities. I ask it concretely of myself. Or you may ask it concretely of yourself. In my par-
ticular circumstances, or in yours, what is to be done in response?

In my particular situation, removed from the event in time and space, having no association with any
of the assembled participants, what am I to do?

That question suggests another pair of related questions: why should I do anything at all? Why
should I feel compelled to do something?

I ask these questions in a continuing effort to think through, as so many others are attempting to do,
the relationship between digital media and the public sphere, particularly in its ethical and political di-
mensions.

I stress the particularity of my situation and yours with Kierkegaard’s understanding of the Press
and the Public in mind. The Public is an effect of the Press. The Press, that is the media, by its “mas-
sive distribution of desituated information,” in Hubert Dreyfus’ words, constitutes an audience of “de-
tached spectators.” These detached spectators have no real or meaningful connection to the events they
read about.

The intriguing thing about Kierkegaard’s diagnosis is that he imagined that the detached spectator
would be bloodless and indolent, desiring nothing more than to be entertained by having something to
gossip about. There is a good deal of truth to this, no doubt. But what accounts for the impulse felt by
many, myself included, to do something when confronted with a piece of news?

Of course, the specific nature of Spencer’s comments, especially given our political moment, ex-
plains why this case may elicit strong feelings and the urge to respond. We’ve heard the old line about
how evil triumphs when good people do nothing, and here’s our chance to do something and prove
ourselves good people. In fact, if we query our own feelings a little more closely, we may even
find that we are deriving not a little pleasure from doing something to fight the fascists. What we get is
a little rush of emotion and a fleeting sense of moral superiority, a taste of virtuous, noble action in an otherwise mundane and uneventful experience characterized by a carefully maintained air of ironic detachment.

But what does action amount to when we nevertheless remain, as Kierkegaard suggested, desituated
and detached spectators of events that are not materially connected to us? To be clear, I am not suggest-
ing that there are no cases when we may be materially connected to instances of racism or an-
ti-Semitism. In such cases, our actions may take a variety of forms as dictated by the circumstances and
our moral fortitude. But that is not the case here, at least not for me or countless others watching this
video online. So what does action look like for us in relation to this one specific case?

More often than not the shape our action will take is a social media post condemning Mr. Spencer
and his words. I was tempted to do the same. But I hesitated. I hesitated because it seemed to me that I would be doing very little of consequence; that what I was, in fact, doing might be little more than signaling my virtue; and, most importantly, that what I was doing might very well be counterproductive.

It would be counterproductive because of the particular nature of our media environment or our at-
tention economy. I’ve long thought that our best response to certain provocations is to maintain radio
silence. If we are nodes in a network of communication and if a message is successful not to the degree
that it is either true or good but only to the degree that it continues to exist in the network, then the best
I can do is to kill the message in my little corner of the network by remaining silent. In other words, if
what people want is attention because somehow attention is their path to influence, if even my outrage
and moral indignation is fuel for their fire, then denying them that attention seems the most reasonable
and practical course of action.

Let me try to sum up where this meandering line of thought has brought me. In continuity with older
forms of mass media, social media constitutes a desituated audience of detached spectators. However,
unlike older forms of mass media, social media does not render us passive spectators–it invites action
on our part. But the action it invites is action that feeds and empowers the network. We feel as if we are
doing something, but the thing we’re actually accomplishing is rarely the virtuous thing we think we’re
accomplishing. What we’re undoubtedly doing is sustaining the network. And the network itself is of-
ten the source of the problems we think we’re combatting.

Returning to Kierkegaard, he observed the following:

The public has a dog for its amusement. That dog is the Media. If there is someone better than
the public, someone who distinguishes himself, the public sets the dog on him and all the
amusement begins. This biting dog tears up his coat-tails, and takes all sort of vulgar liberties
with his leg–until the public bores of it all and calls the dog off. That is how the public levels.

Our situation, in the age of social media, appears to be a bit different. The public has itself become
the dog and what has been leveled is what we used to quaintly call the truth.

All of the preceding I offer to you, as per usual, in the spirit of thinking out loud. Feel free to tell me
just how I may have gone wrong.

November 29, 2016


29. The Ethics of Information Literacy

On NPR’s Here and Now, Derek Thompson of The Atlantic discussed the problem of “fake
news.” It was all very sensible, of course. Thompson impressed upon the audience the importance of
media literacy. He urged listeners to examine the provenance of the information they encounter. He
also cited an article that appeared in US News & World Report about teaching high schoolers how to
critically evaluate online information. The article, drawing on the advice of teachers, presented three
keys: 1. Teach teens to question the source, 2. Help students identify credible sources, and 3. Give stu-
dents regular opportunities to practice vetting information.

This is all fine. I suspect the problem is not limited to teens—an older cohort appears just as suscep-
tible, if not more so, to “fake news”—but whatever the case, I spend a good deal of time in my classes
doing something like what Thompson recommended. In fact, on more than one occasion, I’ve
claimed that among the most important skills teachers can impart to students is the ability to discern the
credible from the incredible and the serious from the frivolous. (I suspect the latter distinction is the
more important and the more challenging to make.)

But we mustn’t fall into the trap of believing that this is simply a problem of the intellect to
be solved with a few pointers and a handful of strategies. There is an ethical dimension to the problem
as well because desire and virtue bear upon knowing and understanding. Thompson himself alludes to
this ethical dimension, but he speaks of it mostly in the language of cognitive psychology–it is the
problem of confirmation bias. This is a useful, but perhaps too narrow way of understanding the prob-
lem. However we frame it, though, the key is this: We must learn to question more than our sources; we
must also question ourselves.

I suggest a list of three questions for students, and by implication all of us, to consider. The first two
are of the standard sort: 1. Who wrote this? and 2. Why should I trust them?

It would be foolish, in my view, to pretend that any of us can be independent arbiters of the truthful-
ness of claims made in every discipline or field of knowledge. It is unreasonable to expect that we
would all become experts in every field about which we might be expected to have an informed opin-
ion. Consequently, it is better to frame critical examination of sources as a matter of trustworthiness.
Can I determine whether or not I have cause to trust the author or the organization that has produced
the information I am evaluating? Of course, trustworthiness does not entail truthfulness or accuracy.
When trustworthy sources conflict, for instance, we may need to make a judgment call or we might
find ourselves unable to arbitrate the competing claims. It inevitably gets complicated.

The third question, however, gets at the ethical dimension: 3. Do I want this to be true?

This question is intended as a diagnostic tool. The goal is to reveal, so far as we might become self-
aware about such things, our biases and sympathies. There are three possible answers: yes, no, and I
don’t care. In each case, a challenge to discernment is entailed. If I want something to be true, and there
may be various reasons for this, then I need to do my best to reposition myself as a skeptical critic. If I
do not want something to be true, then I need to do my best to reposition myself as a sympathetic advo-
cate. A measure of humility and courage are required in each case.

If I do not care, then there is another sort of problem to overcome. In this case, I may be led astray
by a lack of care. I may believe what I first encounter because I am not sufficiently motivated to press
further. Whereas it is something like passion or pride that we must guard against when we want to be-
lieve or disbelieve a claim, apathy is the problem here.

When I have taught classes on ethics, it has seemed to me that the critical question is not, as it is of-
ten assumed to be, “What is the right thing to do?” Rather, the critical question is this: “Why should
someone desire to learn what is right and then do it?”

Likewise with the problem of information literacy. It is one thing to be presented with a set of skills
and strategies to make us more discerning and critical. It is another, more important thing, to care about
the truth at all, to care more about the truth than about being right.

In short, the business of teaching media literacy or critical thinking skills amounts to a kind of moral
education. In a characteristically elaborate footnote in “Authority and American Usage,” David Foster
Wallace got at this point, although from the perspective of the writer. In the body of his essay, Wallace
writes of “the error that Freshman Composition classes spend all semester trying to keep kids from mak-
ing—the error of presuming the very audience-agreement that it is really their rhetorical job to earn.”
The footnote to this sentence adds the following, emphasis mine:

Helping them eliminate the error involves drumming into student writers two big injunctions:
(1) Do not presume that the reader can read your mind — anything you want the reader to visu-
alize or consider or conclude, you must provide; (2) Do not presume that the reader feels the
same way that you do about a given experience or issue — your argument cannot just assume as
true the very things you’re trying to argue for. Because (1) and (2) are so simple and obvious, it
may surprise you to know that they are actually incredibly hard to get students to understand in
such a way that the principles inform their writing. The reason for the difficulty is that, in the
abstract, (1) and (2) are intellectual, whereas in practice they are more things of the spirit.
The injunctions require of the student both the imagination to conceive of the reader as a sepa-
rate human being and the empathy to realize that this separate person has preferences and con-
fusions and beliefs of her own, p/c/b’s that are just as deserving of respectful consideration as
the writer’s. More, (1) and (2) require of students the humility to distinguish between a univer-
sal truth (‘This is the way things are, and only an idiot would disagree’) and something that the
writer merely opines (‘My reasons for recommending this are as follows:’) . . . . I therefore sub-
mit that the hoary cliché ‘Teaching the student to write is teaching the student to think’ sells the
enterprise way short. Thinking isn’t even half of it.

I take Wallace’s counsel here to be, more or less, the mirror image of the counsel I’m offering to us
as readers.

Finally, I should say that all of the preceding does not begin to touch on much of what we would
also need to consider when we’re thinking about media literacy. Most of the above deals with the mat-
ter of evaluating content, which is obviously not unimportant, and textual content at that. However, me-
dia literacy in the fullest sense would also entail an understanding of more subtle effects arising from
the nature of the various tools we use to communicate content, not to mention the economic and politi-
cal factors conditioning the production and dissemination of information.

December 6, 2016


30. What Do I See When I See My Child?

At first glance, this may seem like a question with an obvious and straightforward answer, but it is-
n’t. Vision plays a trick on us all. It offers its findings to us as a plain representation of “what is there.”
But things are not so simple. Most of us know this because at some point our eyes have deceived us.
The thing we thought we saw was not at all what was, in fact, there. Even this cliche about our eyes de-
ceiving us reveals something about the implicit trust we ordinarily place in what our eyes show to us.
When it turns out that our trust has been betrayed we do not simply say that we were mistaken–we
speak as if we have been wronged, as if our eyes have behaved immorally. We are not in the habit, I
don’t think, of claiming that our ears or our nose deceived us.

What we ordinarily fail to take into account is that seeing is an act of perception and perception is a
form of interpretation.

Seeing is selective. Upon glancing at a scene, I’m tempted to think that I’ve taken it all in. But, of
course, nothing could be further from the truth. If I were to look again and look for a very long time, I
would continue to see more and more details that I did not see at first, second, or third glance. What-
ever it was that I perceived when I first looked is not what I will necessarily see if I continue to look; at
the very least, it will not be all that I will see. So why did I see what I saw when first I looked?

Sometimes we see what we think we ought to see, what we expect to see. Sometimes we see what
we want to see or that for which we are looking. Seeing is thus an act of both remembering and desir-
ing. And this is not yet to say anything of the meaning of what we see, which is also intertwined with
perception.

It is also the case that perception is often subject to mediation and this mediation is ordinarily tech-
nological in nature. Indeed, one of the most important consequences of any given technology is, in my
view, how it shapes our perception of the world. But we are as tempted to assume that technology is
neutral in its mediations and representations as we are to believe that vision simply shows us “what is
there.” So when our vision is technologically mediated it is as if we were subject to a double spell.

The philosopher Peter-Paul Verbeek, building on the work of Don Ihde, has written at length about
what he has called the ethics of technological mediation. Technologies bring about “specific relations
between human beings and reality.” They do this by virtue of their role in mediating both our percep-
tion of the world and our action in the world.

According to Ihde, the mediating work of technology comes in the form of two relations of media-
tion: embodiment relations and hermeneutic relations. In the first, tools are incorporated by the user
and the world is experienced through the tool. Consider the blind man’s stick as an example of an embod-
iment relation; the stick is incorporated into the man’s body schema.

Verbeek explains hermeneutic relations in this way: “technologies provide access to reality not be-
cause they are ‘incorporated,’ but because they provide a representation of reality, which requires inter-
pretation.” Moreover, “technologies, when mediating our sensory relationship with reality, transform
what we perceive. According to Ihde, the transformation of perception always has the structure of am-
plification and reduction.”

We might also speak of how technological mediation focuses our perception. Perhaps this is implied
in Ihde’s two categories of amplification and reduction; perhaps the two together amount to a technology’s focusing effect. We might also speak of this focusing effect as a directing of our attention.

So, once again, what do I see when I see my child?

There are many technologies that mediate how I perceive my child. When my child is in another
room, I perceive her through a video monitor. When my child is ill, I perceive her through a digital
thermometer, some of which now continuously monitor body temperature and visualize the data on an
app. Before she was born, I perceived her through ultrasound technology. When I am away from home,
I perceive her through Facetime. More examples, I’m sure, may come readily to your mind. Each of
these merits some attention, but I set them aside to briefly consider what may be the most ubiquitous
form of technological mediation through which I perceive my child–the digital camera.

Interestingly, it strikes me that the digital camera, in particular the camera with which our phones
are equipped, effects both an embodiment relation and a hermeneutic relation. I fear that I may be
stretching the former category to make this claim, but I am thinking of the smartphone as a device
which, in many respects, functions as a prosthesis. I mean by this that it is ready-to-hand to such a de-
gree that it is experienced as an appendage of the body and that, even when it is not in hand, the ubiqui-
tous capacity to document has worked its way into our psyche as a frame of mind through which we
experience the world. It is not only the case that we see a child represented in a digital image, our ordi-
nary act of seeing itself becomes a seeing-in-search-of-an-image.

What does the mediation of the digital smartphone camera amplify? What does it reduce? How does
it bring my child into focus? What does it encourage me to notice and what does it encourage me to ig-
nore? What can it not account for?

What does it condition me to look for when I look at my child and, thus, how does it condition my
perception of my child?

Is it my child that I see or a moment to be documented? Am I perceiving my child in herself or am I perceiving my child as a component of an image, a piece of the visual furniture?

What becomes of the integrity of the moment when seeing is mediated through an always-present
digital camera?

How does the representation of my child in images that capture discrete moments impact my experi-
ence of time with my child? Do these images sustain or discourage the formation of a narrative within
which the meaning of my relationship with my child emerges?

It is worth noting, as well, that the smartphone camera ordinarily exists as one component within a
network of tools that includes the internet and social media tools. In other words, the image is not
merely a record of a moment or an externalized memory. It is also always potentially an act of commu-
nication. An audience–on Facebook, Twitter, Instagram, Youtube, Snapchat, etc.–is everywhere with
me as an ambient potentiality that conditions my perception of all that enters into my experience. Con-
sequently, I may perceive my child not only as a potential image but as a potential image for an audi-
ence.

What is the nature of this audience? What images do I believe they care to see? What images do I
want them to see? From where does my idea of the images they care to see arise? Do they arise from
the images I see displayed for me as part of another’s audience? Or from professional media or com-
mercial marketing campaigns? Are these the visual patterns I remember, half-consciously perhaps,
when my perceiving takes on the aspect of seeing-as-expectation? Do they form my percep-
tion-as-desire? For whom is my child under these circumstances?

I have raised many questions, which I have left unanswered. I leave these questions unanswered
chiefly because whatever my answers may be, they are not likely to be your answers. And the value of
these questions lies in the asking and not in the particular answers that I might give to them. Regardless
of the answers we give, the questions are worth asking for what they may reveal as we contemplate
them.

July 26, 2017


31. Growing Up With AI

In an excerpt from her forthcoming book, Who Can You Trust? How Technology Brought Us To-
gether and Why It Might Drive Us Apart, Rachel Botsman reflects on her three-year-old’s interactions
with Amazon’s AI assistant, Alexa.

Botsman found that her daughter took quite readily to Alexa and was soon asking her all manner of
questions and even asking Alexa to make choices for her, about what to wear, for instance, or what she
should do that day. “Grace’s easy embrace of Alexa was slightly amusing but also alarming,” Botsman
admits. “Today,” she adds, “we’re no longer trusting machines just to do something, but to decide what
to do and when to do it.” She then goes on to observe that the next generation will grow up surrounded
by AI agents, so that the question will not be “Should we trust robots?” but rather “Do we trust them
too much?”

Along with issues of privacy and data gathering, Botsman was especially concerned with the inter-
section of AI technology and commercial interests: “Alexa, after all, is not ‘Alexa.’ She’s a corporate
algorithm in a black box.”

To these concerns, philosopher Mark White, elaborating on Botsman’s reflections, adds the
following:

Again, this would not be as much of a problem if the choices we cede to algorithms only dealt
with songs and TV shows. But as Botsman’s story shows, the next generation may develop a
degree of faith in the “wisdom” of technology that leads them to give up even more autonomy
to machines, resulting in a decline in individual identity and authenticity as more and more de-
cisions are left to other parties to make in interests that are not the person’s own—but may be
very much in the interests of those programming and controlling the algorithms.

These concerns are worth taking into consideration. I’m ambivalent about framing a critique of tech-
nology in terms of authenticity, or even individual identity, but I’m not opposed to a conversation
along these lines. Such a conversation at least encourages us to think a bit more deeply about the role
of technology in shaping the sorts of people we are always in the process of becoming. This is, of
course, especially true of children.

Our identity, however, does not emerge in pristine isolation from other human beings or indepen-
dently from the fabric of our material culture, technologies included. That is not the ideal to which we
should aim. Technology will unavoidably be part of our children’s lives and ours. But which technolo-
gies? Under what circumstances? For what purposes? With what consequences? These are some of the
questions we should be asking.

Of an AI assistant that becomes part of a child’s taken-for-granted environment, other more specific
questions also come to mind.

What conversations or interactions will the AI assistant displace?

How will it affect the development of a child’s imagination?

How will it direct a child’s attention?

How will a child’s language acquisition be affected?

What expectations will it create regarding the solicitude the world will show them?

How will their curiosity be shaped by what the AI assistant can and cannot answer?

Will the AI assistants undermine the development of critical cognitive skills by their ability to im-
mediately respond to simple questions?

Will their communication and imaginative life shrink to the narrow parameters within which they
can interact with AI?

Will parents be tempted to offload their care and attentiveness to the AI assistant, and with what
consequences?

Of AI assistants generally, we might conclude that what they do well–answer simple direct ques-
tions, for example–may, in fact, prove harmful to a child’s development, and what they do poorly–pro-
vide for rich, complex engagement with the world–is what children need most.

We tend to bend ourselves to fit the shape of our tools. Even as tech-savvy adults we do this. It
seems just as likely that children will do likewise. For this reason, we do well to think long and hard
about the devices that we bring to bear upon their lives.

We make all sorts of judgments as a society about when it is appropriate for children to experience
certain realities, and this care for children is one of the marks of a healthy society. We do this through
laws, policy, and cultural norms. With regards to the norms that govern the technology that we intro-
duce into our children’s lifeworld, we would do well, it seems to me, to adopt a more cautionary
stance. Sometimes this means shielding children from certain technologies if it is not altogether obvi-
ous that their impact will be helpful and beneficial. We should, in other words, shift the burden of proof
so that a technology must earn its place in our children’s lives.

Botsman finally concluded that her child was not ready for Alexa to be a part of her life and that it
was possibly usurping her own role as parent:

Our kids are going to need to know where and when it is appropriate to put their trust in com-
puter code alone. I watched Grace hand over her trust to Alexa quickly. There are few checks
and balances to deter children from doing just that, not to mention very few tools to help them
make informed decisions about A.I. advice. And isn’t helping Gracie learn how to make deci-
sions about what to wear — and many more even more important things in life — my job? I decided
to retire Alexa to the closet.

It is even better when companies recognize some of these problems and decide (from mixed motives,
I’m sure) to pull a device whose place in a child’s life is at best ambiguous.

October 12, 2017


32. Digital Devices and Learning to Grow Up

Last week the NY Times ran the sort of op-ed on digital culture that the cultured despisers love to
ridicule. In it, Jane Brody made a host of claims about the detrimental consequences of digital media
consumption on children, especially the very young. She had the temerity, for example, to call texting
the “next national epidemic.” Consider as well the following paragraphs:

“Two of my grandsons, ages 10 and 13, seem destined to suffer some of the negative effects of
video-game overuse. The 10-year-old gets up half an hour earlier on school days to play com-
puter games, and he and his brother stay plugged into their hand-held devices on the ride to and
from school. ‘There’s no conversation anymore,’ said their grandfather, who often picks them
up. When the family dines out, the boys use their devices before the meal arrives and as soon as
they finish eating.

‘If kids are allowed to play ‘Candy Crush’ on the way to school, the car ride will be quiet, but
that’s not what kids need,’ Dr. Steiner-Adair said in an interview. ‘They need time to daydream,
deal with anxieties, process their thoughts and share them with parents, who can provide reas-
surance.’

Technology is a poor substitute for personal interaction.”

Poor lady, I thought, and a grandmother no less. She was in for the kind of thrashing from the digital
sophisticates that is usually reserved for Sherry Turkle.

In truth, I didn’t catch too many reactions to the piece, but one did stand out. At The Awl, John
Hermann summed up the critical responses with admirable brevity:

“But the argument presented in the first installment is also proudly unsophisticated, and doesn’t
attempt to preempt obvious criticism. Lines like ‘technology is a poor substitute for personal in-
teraction,’ and non-sequitur quotes from a grab-bag of experts, tee up the most common and ef-
fective response to fears of Screen Addiction: that what’s happening on all these screens is not,
as the writer suggests, an endless braindead Candy Crush session, but a rich social experience
of its own. That screen is full of friends, and its distraction is no less valuable or valid than the
distraction of a room full of buddies or a playground full of fellow students. Screen Addiction
is, in this view, nonsensical: you can no more be addicted to a screen than to windows, sounds,
or the written word.”

But Hermann does not quite leave it at that: “This is an argument worth making, probably. But tell it
to an anxious parent or an alienated grandparent and you will sense that it is inadequate.” The argument
may be correct, but, Hermann explains, “Screen Addiction is a generational complaint, and genera-
tional complaints, taken individually, are rarely what they claim to be. They are fresh expressions of
horrible and timeless anxieties.”

Hermann goes on to make the following poignant observations:

“The grandparent who is persuaded that screens are not destroying human interaction, but are
instead new tools for enabling fresh and flawed modes of human interaction, is left facing a
grimmer reality. Your grandchildren don’t look up from their phones because the experiences
and friendships they enjoy there seem more interesting than what’s in front of them (you).
Those experiences, from the outside, seem insultingly lame: text notifications, Emoji, selfies
of other bratty little kids you’ve never met. But they’re urgent and real. What’s different is that
they’re also right here, always, even when you thought you had an attentional claim. The mo-
ments of social captivity that gave parents power, or that gave grandparents precious access, are
now compromised. The TV doesn’t turn off. The friends never go home. The grandkids can do
the things they really want to be doing whenever they want, even while they’re sitting five feet
away from grandma, alone, in a moving soundproof pod.”

Hermann is less sanguine:

“Screen Addiction is a new way for kids to be blithe and oblivious; in this sense, it is empower-
ing to the children, who have been terrible all along. The new grandparent’s dilemma, then, is
both real and horribly modern. How, without coming out and saying it, do you tell that kid that
you have things you want to say to them, or to give them, and that you’re going to die someday,
and that they’re going to wish they’d gotten to know you better? Is there some kind of curiosity
gap trick for adults who have become suddenly conscious of their mortality?”

“A new technology can be enriching and exciting for one group of people and create alienation for
another;” Hermann concludes, “you don’t have to think the world is doomed to recognize that the
present can be a little cruel.”

Well put.

I’m tempted to leave it at that, but I’m left wondering about the whole “generational complaint”
business.

To say that something is a generational complaint suggests that we are dealing with old men yelling,
“Get off my lawn!” It conjures up the image of hapless adults hopelessly out of sync with the brilliant
exuberance of the young. It is, in other words, to dismiss whatever claim is being made. Granted, Her-
mann has given us a more sensitive and nuanced discussion of the matter, but even in his account too
much ground is ceded to this kind of framing.

If we are dealing with a generational complaint, what exactly do we mean by that? Ostensibly that
the old are lodging a predictable kind of complaint against the young, a complaint that amounts to little
more than an unwillingness to comprehend the new or a desperate clinging to the familiar. Looked at
this way, the framing implies that the old, by virtue of their age, are the ones out of step with reality.

But what if the generational complaint is framed rather as a function of coming into responsible
adulthood? Hermann approaches this perspective when he writes, “Screen Addiction is a new way for
kids to be blithe and oblivious; in this sense, it is empowering to the children, who have been terrible
all along.” So when a person complains that they are being ignored by someone enthralled by their de-
vice, are they showing their age or merely demanding a basic degree of decency?

Yes, children are wont to be blithe and oblivious, often cruelly indifferent to the needs of others.
Traditionally, we have sought to remedy that obliviousness and self-centeredness. Indeed, coming into
adulthood more or less entails gaining some measure of control over our naturally self-centered im-
pulses for our own good and for the sake of others. In this light, asking a child–whether age seven or
thirty-seven–to lay their device aside long enough to acknowledge the presence of another human be-
ing is simply to ask them to grow up.

Others have taken a different tack in response to Brody and Hermann. Jason Kottke arrives at this
conclusion:

“People on smartphones are not anti-social. They’re super-social. Phones allow people to be
with the people they love the most all the time, which is the way humans probably used to be,
until technology allowed for greater freedom of movement around the globe. People spending
time on their phones in the presence of others aren’t necessarily rude because rudeness is a so-
cial contract about appropriate behavior and, as Hermann points out, social norms can vary
widely between age groups. Playing Minecraft all day isn’t necessarily a waste of time. The real
world and the virtual world each have their own strengths and weaknesses, so it’s wise to spend
time in both.”

Of course. But how do we allocate the time we spend in each–that’s the question. Also, I’m not
quite sure what to make of his claim about rudeness and the social contract except that it seems to sug-
gest that it’s not rudeness if you decide you don’t like the terms of the social contract that renders it so.
Sorry Grandma, I don’t recognize the social contract by which I’m supposed to acknowledge your
presence and render to you a modicum of my attention and affection.

Yes, digital devices have given us the power to decide who is worthy of our attention minute by
minute. Advocates of this constant connectivity—many of them, like Facebook, acting out of obvious
self-interest—want us to believe this is an unmitigated good and that we should exercise this power
with impunity. But, how to say this without sounding alarmist, encouraging people to habitually render
other human beings unworthy of their attention seems like a poor way to build a just and equitable so-
ciety.

June 12, 2015


33. The Ethics of Technological Mediation

Where do we look when we’re looking for the ethical implications of technology? A few would say
that we look at the technological artifact itself. Many more would counter that the only place to look
for matters of ethical concern is to the human subject. The philosopher of technology Peter-Paul Verbeek
argues that there is another, perhaps more important place for us to look: the point of mediation, the
point where the artifact and human subjectivity come together to create effects that cannot be located in
either the artifact or the subject taken alone.

Early on in Moralizing Technology: Understanding and Designing the Morality of Things (2011),
Verbeek briefly outlines the emergence of the field known as “ethics of technology.” “In its early
days,” Verbeek notes, “ethical approaches to technology took the form of critique. Rather than address-
ing specific ethical problems related to actual technological developments, ethical reflection on tech-
nology focused on criticizing the phenomenon of ‘Technology’ itself.” Here we might think of Heideg-
ger, critical theory, or Jacques Ellul. In time, “ethics of technology” emerged “seeking increased under-
standing of and contact with actual technological practices and developments,” and soon a host of sub-
fields appeared: biomedical ethics, ethics of information technology, ethics of nanotechnology, engi-
neering ethics, ethics of design, etc.

This approach remains, according to Verbeek, “merely instrumentalist.” “The central focus of ethics,”
on this view, “is to make sure that technology does not have detrimental effects in the human realm and
that human beings control the technological realm in morally justifiable ways.” It’s not that these con-
siderations are unimportant, quite the contrary, but Verbeek believes that this approach “does not yet
go far enough.”

Verbeek explains the problem:

“What remains out of sight in this externalist approach is the fundamental intertwining of these
two domains [the human and the technological]. The two simply cannot be separated. Humans
are technological beings, just as technologies are social entities. Technologies, after all, play a
constitutive role in our daily lives. They help to shape our actions and experiences, they inform
our moral decisions, and they affect the quality of our lives. When technologies are used, they
inevitably help to shape the context in which they function. They help specific relations be-
tween human beings and reality to come about and coshape new practices and ways of living.”

Observing that technologies mediate perception, how we register the world, and action, how we act
into the world, Verbeek elaborates a theory of technological mediation, built upon a postphenomeno-
logical approach to technology pioneered by Don Ihde. Rather than focus exclusively on either the arti-
fact “out there,” the technological object, or the will “in here,” the human subject, Verbeek invites us to
focus ethical attention on the constitution of both the perceived object and the subject’s intention in the
act of technological mediation. In other words, how technology shapes perception and action is also of
ethical consequence.

As Verbeek rightly insists, “Artifacts are morally charged; they mediate moral decisions, shape
moral subjects, and play an important role in moral agency.”

Verbeek turns to the work of Ihde for some analytic tools and categories. Among the many ways hu-
mans might relate to technology, Ihde notes two relations of “mediation.” The first of these he calls
“embodiment relations” in which the tools are incorporated by the user and the world is experienced
through the tool (think of the blind man’s stick). The second he calls a “hermeneutic relation.” Verbeek
explains:

“In this relation, technologies provide access to reality not because they are ‘incorporated,’ but
because they provide a representation of reality, which requires interpretation […]. Ihde shows
that technologies, when mediating our sensory relationship with reality, transform what we per-
ceive. According to Ihde, the transformation of perception always has the structure of amplifi-
cation and reduction.”

Verbeek gives us the example of looking at a tree through an infrared camera: most of what we see
when we look at a tree unaided is “reduced,” but the heat signature of the tree is “amplified” and the
tree’s health may be better assessed. Ihde calls this capacity of a tool to transform our perception “tech-
nological intentionality.” In other words, the technology directs and guides our perception and our at-
tention. It says to us, “Look at this here not that over there” or “Look at this thing in this way.” This
function is not morally irrelevant, especially when you consider that this effect is not contained within
the digital platform but spills out into our experience of the world.

Verbeek also believes that our reflection on the moral consequences of technology would do well to
take virtue ethics seriously. With regards to the ethics of technology, we typically ask, “What should I
or should I not do with this technology?” and thus focus our attention on our actions. In this, we follow
the lead of the two dominant modern ethical traditions: the deontological tradition stemming from Im-
manuel Kant, on the one hand, and the consequentialist tradition, closely associated with Bentham and
Mill, on the other. In the case of both traditions, a particular sort of moral subject or person is in view
—an autonomous and rational individual who acts freely and in accord with the dictates of reason.

In the Kantian tradition, the individual, having decided upon the right course of action through the
right use of their reason, is duty bound to act thusly, regardless of consequences. In the consequentialist
tradition, the individual rationally calculates which action will yield the greatest degree of happiness,
variously understood, and acts accordingly.

If technology comes into play in such reasoning by such a person, it is strictly as an instrument of
the individual will. The question, again, is simply, “What should I do or not do with it?” We ascertain
the answer by either determining the dictates of subjective reasoning or calculating the objective conse-
quences of an action; the latter approach is perhaps more appealing for its resonance with the ethos of
technique.

We might conclude, then, that the popular instrumentalist view of technology—a view which takes
technology to be a mere tool, a morally neutral instrument of a sovereign will—is the natural posture
of the sort of individual or moral subject that modernity yields. It is unlikely to occur to such an indi-
vidual that technology is not only a tool with which moral and immoral actions are performed but also
an instrument of moral formation, informing and shaping the moral subject.

It is not that the instrumentalist posture is of no value, of course. On the contrary, it raises important
questions that ought to be considered and investigated. The problem is that this approach is incomplete
and too easily co-opted by the very realities that it seeks to judge. It is, on its own, ultimately inade-
quate to the task because it takes as its starting point an inadequate and incomplete understanding of the
human person.

There is, however, another older approach to ethics that may help us fill out the picture and take into
account other important aspects of our relation to technology: the tradition of virtue ethics in both its
classical and medieval manifestations.

Verbeek comments on some of the advantages of virtue ethics. To begin with, virtue ethics does not
ask, “What am I to do?” Rather, it asks, in Verbeek’s formulation, “What is the good life?” We might
also add a related question that virtue ethics raises: “What sort of person do I want to be?” This is a
question that Verbeek also considers, taking his cues from the later work of Michel Foucault.

The question of the good life, Verbeek adds,

“does not depart from a separation of subject and object but from the interwoven character of
both. A good life, after all, is shaped not only on the basis of human decisions but also on the
basis of the world in which it plays itself out (de Vries 1999). The way we live is determined
not only by moral decision making but also by manifold practices that connect us to the mate-
rial world in which we live. This makes ethics not a matter of isolated subjects but, rather, of
connections between humans and the world in which they live.”

Virtue ethics, with its concern for habits, practices, and communities of moral formation, illuminates
the various ways technologies impinge upon our moral lives. For example, a technologically mediated
action that, taken on its own and in isolation, may be judged morally right or indifferent may appear in
a different light when considered as one instance of a habit-forming practice that shapes our disposition
and character.

Moreover, virtue ethics, which predates the advent of modernity, does not necessarily assume the
sovereign individual as its point of departure. For this reason, it is more amenable to the ethics of tech-
nological mediation elaborated by Verbeek. Verbeek argues for “the distributed character of moral
agency,” distributed, that is, among the subject and the various technological artifacts that mediate the sub-
ject’s perception of and action in the world.

At the very least, asking the sorts of questions raised within a virtue ethics framework fills out our
picture of technology’s ethical consequences.

In Susanna Clarke’s delightful novel, Jonathan Strange & Mr. Norrell, a fantastical story cast in re-
alist guise about two magicians recovering the lost tradition of English magic in the context of the
Napoleonic Wars, one of the main characters, Strange, has the following exchange with the Duke of
Wellington:

“Can a magician kill a man by magic?” Lord Wellington asked Strange. Strange frowned. He
seemed to dislike the question. “I suppose a magician might,” he admitted, “but a gentleman
never would.”

Strange’s response is instructive and the context of magic more apropos than might be apparent.
Technology, like magic, empowers the will, and it raises the sort of question that Wellington asks: can
such and such be done?

Not only does Strange’s response make the ethical dimension paramount, he approaches the ethical
question as a virtue ethicist. He does not run consequentialist calculations nor does he query the delib-
erations of a supposedly universal reason. Rather, he frames the empowerment availed to him by magic
with a consideration of the kind of person he aspires to be, and he subjects his will to this larger project
of moral formation. In so doing, he gives us a good model for how we might think about the empower-
ments availed to us by technology.

As Verbeek, reflecting on the aptness of the word subject, puts it, “The moral subject is not an au-
tonomous subject; rather, it is the outcome of active subjection.” It is, paradoxically, this kind of sub-
jection that can ground the relative freedom with which we might relate to technology.

November 8, 2017
34. One Does Not Simply Add Ethics To Technology

In a Twitter thread that was retweeted thousands of times, the actor Kumail Nanjiani took the tech in-
dustry to task for its apparent indifference to the ethical consequences of their work.

Nanjiani stars in the HBO series Silicon Valley and, as part of his research for the role, he spends a
good deal of time at tech conferences and visiting tech companies. When he brings up possible ethical
concerns, he realizes “that ZERO consideration seems to be given to the ethical implications of tech.”
“They don’t even have a pat rehearsed answer,” Nanjiani adds. “They are shocked at being asked.
Which means nobody is asking those questions.” Read the whole thread. It ends on this cheery note:
“You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians.
It’s terrifying. The end.”

Nanjiani’s thread appears to have struck a nerve. It was praised by many of the folks I follow on
Twitter, and rightly so. Yes, he’s an actor, not a philosopher, historian, or sociologist, etc., but there’s
much to commend in his observations and warnings.

But here’s what Nanjiani may not know: we had, in fact, been warned. Nanjiani believes that “no-
body is asking those questions,” questions about technology’s ethical consequence, but this is far from
the truth. Technology critics have been warning us for a very long time about the disorders and chal-
lenges, ethical and otherwise, that attend contemporary technology. In 1977, for example, Langdon
Winner wrote the following:

Different ideas of social and political life entail different technologies for their realization. One
can create systems of production, energy, transportation, information handling, and so forth that
are compatible with the growth of autonomous, self-determining individuals in a democratic
polity. Or one can build, perhaps unwittingly, technical forms that are incompatible with this
end and then wonder how things went strangely wrong. The possibilities for matching political
ideas with technological configurations appropriate to them are, it would seem, almost endless.
If, for example, some perverse spirit set out deliberately to design a collection of systems to in-
crease the general feeling of powerlessness, enhance the prospects for the dominance of techni-
cal elites, create the belief that politics is nothing more than a remote spectacle to be experi-
enced vicariously, and thereby diminish the chance that anyone would take democratic citizen-
ship seriously, what better plan to suggest than that we simply keep the systems we already
have?

It would not take very much time or effort to find similar expressions of critical concern about tech-
nology’s social and moral consequences from a wide array of writers, critics, historians, philosophers,
sociologists, political theorists, etc. dating back at least a century.

My first response to Nanjiani’s thread is thus mild irritation, bemusement really, about how novel
and daring his comments appear when, in fact, so many have for so long been saying just as much and
more trenchantly and at great length.

Beyond this, however, there are a few other points worth noting.

First, we are, as a society, deeply invested in the belief that technology is ethically neutral if not, in
fact, an unalloyed good. There are complex and longstanding reasons for this, which, in my view, in-
volve both the history of politics and of religion in western society over the last few centuries. Crudely
put, we have invested an immense measure of hope in technology and in order for these hopes to be re-
alized it must be assumed that technology is ethically neutral or unfailingly beneficent. For example, if
technology, in the form of Big Data driven algorithmic processes, is to function as arbiter of truth, it
can do so only to the degree that we perceive these processes to be neutral and above the biases and
frailties that plague human reasoning.

Second, the tech industry is deeply invested in the belief that technology is ethically neutral. If tech-
nology is ethically neutral, then those who design, market, and manufacture technology cannot be held
responsible for the consequences of their work. Moreover, we are, as consumers, more likely to adopt
new technologies if we are wholly untroubled by ethical considerations. If it occurred to us that every
device we buy was a morally fraught artifact, we might be more circumspect about what we purchase
and adopt.

Third, it’s not as easy as saying we should throw some ethics at our technology. One should imme-
diately wonder whose ethics are in view. We should not forget that ours is an ethically diverse society
and simply noting that technology is ethically fraught does not immediately resolve the question of
whose ethical vision should guide the design, development, and deployment of new technology. In-
deed, this is one of the reasons we are invested in the myth of technology’s neutrality in the first place:
it promises an escape from the messiness of living with competing ethical frameworks and accounts of
human flourishing.

Fourth, in seeking to apply ethics to technology we would not be entering into a void. In Autono-
mous Technology, Langdon Winner observed that “while positive, utopian principles and proposals can
be advanced, the real field is already taken. There are, one must admit, technologies already in exis-
tence—apparatus occupying space, techniques shaping human consciousness and behavior, organiza-
tions giving pattern to the activities of the whole society.”

Likewise, when we seek to apply ethics to technology, we must recognize that the field is already
taken. Not only are particular artifacts and devices not ethically neutral, they also partake of a pattern
that informs the broader technological project. Technology is not neutral and, in its contemporary man-
ifestations, it embodies a positive ethic. It is unfashionable to say as much, but it seems no less true to
me. I am here thinking of something like what Jacques Ellul called la technique or what Albert
Borgmann called the device paradigm. The principles of this overarching but implicit ethic embodied
by contemporary technology include axioms such as “faster is always better,” “efficiency is always
good,” “reducing complexity is always desirable,” “means are always indifferent and interchangeable.”

Fifth, the very idea of a free-floating, abstract system of ethics that can simply be applied to technol-
ogy is itself misleading and a symptom of the problem. Ethics are sustained within communities whose
moral visions are shaped by narratives and practices. As Langdon Winner has argued, drawing on the
work of Alasdair MacIntyre, “debates about technology policy confirm MacIntyre’s argument that
modern societies lack the kinds of coherent social practice that might provide firm foundations for
moral judgments and public policies.” “[T]he trouble,” Winner adds, “is not that we lack good argu-
ments and theories, but rather that modern politics simply does not provide appropriate roles and insti-
tutions in which the goal of defining the common good in technology policy is a legitimate project.”
Contemporary technology undermines the communal and political structures that might sustain an
ethical vision capable of directing and channeling the development of technology (creative destruction
and what not). And, consequently, it thrives all the more because these structures are weakened. In-
deed, alongside Ellul’s la technique and Borgmann’s device paradigm, we might add another pattern
that characterizes contemporary technology: the design of contemporary technology is characterized by
a tendency to veil or obscure its ethical ramifications. We can call it, with a nod to Borgmann, the ethi-
cal neutrality paradigm: contemporary technologies are becoming more ethically consequential while
their design all the more successfully obscures their ethical import.

I do not mean to suggest that it is futile to think ethically about technology. That’s been more or less
what I’ve been trying to do for the past seven years. But under these circumstances, what can be done?
I have no obvious solutions. It would be helpful, though, if designers worked to foreground rather than
veil the ethical consequences of their tools. That may be, in fact, the best we can hope for at present:
technology that resists the ethical neutrality paradigm, yielding moral agency back to the user or, at
least, bringing the moral valence of its use, distributed and mediated as it may be, more clearly into
view.

November 6, 2017
35. There Is No "We"

“Questioning AI ethics does not make you a gloomy Luddite,” or so the title of an article in a Lon-
don business newspaper assures us. The most important thing to be learned here is that someone feels
this needs to be said. Beyond that, there is also something instructive about the concluding paragraphs.
If we read them against the grain, these paragraphs teach us something about how difficult it is to bring
ethics to bear on technology.

“I’m simply calling for us to use all the tools at our disposal to build a better digital future,” the au-
thor tells us.

In practice, this means never forgetting what makes us human. It means raising awareness and
entering into dialogue about the issue of ethics in AI. It means using our imaginations to articu-
late visions for a future that’s appealing to us.

If we can decide on the type of society we’d like to create, and the type of existence we’d like
to have, we can begin to forge a path there.

All in all, it’s essential that we become knowledgeable, active, and influential on AI in every
small way we can. This starts with getting to grips with the subject matter and past extreme and
sensationalised points of view. The decisions we collectively make today will influence many
generations to come.”

Here are the challenges as I see them:

We have no idea what makes us human. You may, but we don’t.

We have nowhere to conduct meaningful dialogue; we don’t even know how to have meaningful di-
alogue.

Our imaginations were long ago surrendered to technique.

We can’t decide on the type of society we’d like to create or the type of existence we’d like to have,
chiefly because this “we” is rhetorical. It is abstract and amorphous.

There is no meaningful antecedent to the pronouns we and us used throughout these closing para-
graphs. Ethics is a communal business. This is no less true with regards to technology; perhaps it is all
the more true. There is, however, no we there.

As individuals, we are often powerless against larger forces dictating how we are to relate to tech-
nology. The state is in many respects beholden to the technological–ideologically, politically, economi-
cally. Regrettably, we have very few communities located between the individual and the state consti-
tuting a we that can meaningfully deliberate and effectively direct the use of technology.

Technologies like AI emerge and evolve in social spaces that are resistant to substantial ethical cri-
tique. They also operate at a scale that undermines the possibility of ethical judgment and responsibil-
ity. Moreover, our society is ordered in such a way that there is very little to be done about it, chiefly
because of the absence of structures that would sustain and empower ethical reflection and practice, the
absence, in other words, of a we that is not merely rhetorical.

December 3, 2017
36. Does Technology Evolve More Quickly Than Ethi-
cal and Legal Norms?
It is frequently observed that developments in technology run ahead of law and ethics, which never
quite catch up. This may be true, but not in the way it is usually imagined. What follows is a series of
loosely related considerations that might help us see the matter more clearly.

When people claim that technology outstrips law and ethics, they are usually thinking more about
the rapid advance of technology than they are about the structures of law and ethics. If we were to un-
pack the claim, it would run something like this: new technologies which empower us in novel ways
and introduce unprecedented capacities and risks emerge so quickly that existing laws and ethical prin-
ciples, both of which are relatively static, cannot adapt fast enough to keep up.

Thought of in this way, the real pressure point is missed. It is not merely the case that new technolo-
gies emerge for which we have no existing moral principles or laws to guide and constrain their use;
this is only part of the picture. Rather, it is also the case that modern technologies, arising in tandem
with modern political and economic structures, have undermined the plausibility of ethical claims and
legal constraints, weakened the communities that sustained and implemented such claims and con-
straints, and challenged the understanding of human nature upon which they depended.

To put the matter somewhat more succinctly, contemporary technologies emerge in a social context
that is ideal for their unchecked and unconstrained development and deployment. In other words, tech-
nology appears to outstrip ethics and law only because of a prior hollowing out of our relevant moral
infrastructure.

Social and technological forces have untethered and deracinated the human person, construing her
primarily and perhaps even exclusively as an individual. However valuable this construal may be, it
leaves us ill-equipped to cope with technologies that necessarily involve us in social realities.

From the ethics side of the ledger, it is also the case that modern ethics (think Kant, for example)
construed ethics chiefly as a matter of the individual will: a project undertaken by autonomous
and rational actors without regard for moral and political communities. Political philosophy (Locke, et
al.) and economic theory (Smith, etc.) follow similar trajectories.

So, in theory (political, philosophical, and economic) the individual emerges as the basic unit of
thought and action. At the center of this modern theoretical picture is a novel view of freedom as indi-
vidual autonomy. The individual no longer bends their will to the shape of a moral and communal or-
der; they now bend the world to the shape of their will.

In practice, material conditions, including new technologies, sustain and reinforce this theoretical
picture. Indeed, the material/technological conditions likely preceded the theory. Moreover, technology
evolves as a tool of empowerment that makes the new understanding of freedom plausible and seem-
ingly attainable. Technology is thus not apprehended as an object of moral critique; it is perceived, in
fact, as the very thing that will make possible the realization of the new vision of the good life, one in
which the world is the field of our own self-realization.

While certain social and material realities were isolating and untethering the individual, by the mid-
19th century technologies arose that were, paradoxically, embedding her in ever more complex techni-
cal systems and social configurations.

Paradoxically, then, the more we took for granted our own agency and assumed that technology was
a neutral tool of the individual autonomous will, the more our will and agency were being compromised
and distributed by new technologies.
Shortest version of the preceding: Material conditions untether the individual. Modern theoreti-
cal accounts frame this as a benign and desirable development. Under these circumstances,
technology is unbridled and evolves to a scale that renders individual ethical action relatively
inconsequential.

Moreover, the scale of these new technologies eclipsed the scale of local communities and tradi-
tional institutions. The new institutions that arose to deal with the new scale of operation were bureau-
cracies, that is to say that they themselves embodied the principles and values implicit in the emerging
technological milieu.

It may be better, then, to say that it is the scale of new technologies that transcends the institutions
and communities which are the proper sites for ethical reflection about technology. The governing in-
stinct is to scale up our institutions and communities to meet the challenge, but this inevitably involves
a reliance on the same technologies that generate the problems. It never occurs to us that the answer
may lie in a refusal to operate at a scale that is inhospitable to the human person.

Something other than individual choices and laws is necessary. Something more akin to a renewal
of cultural givens about what it means to be a human being and how the human relates to the non-hu-
man, givens which inform ethical choices and laws but cannot be reduced to either, and the emergence
of institutions that embody and sustain individual lives ordered by these givens. It is hard, however, to
see how these emerge under present circumstances.

January 26, 2018


37. Why We Can’t Have Humane Technology

I wrote the title post for this collection of essays you are now reading in 2014. In it, I asked whether
artifacts have ethics. Yes, of course, was the answer, and I offered forty-one questions formulated to help
us think about an artifact’s complex ethical dimensions. A few months later, in 2015, I wrote about
what I called Humanist Tech Criticism.

You would think, then, that I’d be pleasantly surprised by the recent eruption of interest in both
ethics of technology and humane technology. I am, however, less than enthusiastic but also trying not
to be altogether cynical.

A quick search will easily pull up dozens of stories that focus on a set of interrelated topics: former
Silicon Valley executives and designers lamenting their earlier work, current investors demanding
more ethically responsible devices (especially where children are concerned), general fretting over the
consequences of social media (especially Facebook) on our political culture, and reporting on the for-
mation of the Center for Humane Technology.

As you might imagine, there have been critics of this recent ethical awakening. Critics tend to focus
on what can be interpreted as a rather opportunistic change of tune at little to no personal cost for most
of these Silicon Valley executives and designers. This cynicism is not altogether unwarranted. I’m
tempted by this same cynicism but it also appears to me that it cannot be justly applied indiscriminately
to all of those individuals working for a more ethical and humane tech industry.

My concerns lie elsewhere. What they amount to, mostly, is this: all efforts to apply ethics to tech-
nology or to envision a more humane technology will flounder because there is no robust public con-
sensus about either human flourishing or ethical norms.

Moreover, technology is both cause and symptom of this state of affairs: it advances, more or less
unchecked, precisely because of this absence while it progressively undermines the plausibility of such
a consensus. Thus, we are stuck in a vicious cycle generated by a reinforcing confluence of political,
cultural, economic, and technological forces.

Most of the efforts mentioned above appear to me, then, to address what amounts to the tip of
the tip of the iceberg. That said, I want to avoid a critical hipsterism here—”I’m aware of problems so
deep you don’t even realize they exist.” And I also do not want to suggest that any attempt at reform is
useless unless it addresses the problem in its totality. But it may also be the case that such efforts, aris-
ing from and never escaping the more general technological malaise, only serve to reinforce and extend
the existing situation. Tinkering with the apparatus to make it more humane does not go far enough if
the apparatus itself is intrinsically inhumane.

Meaningful talk about ethics and human flourishing in connection with technology, to say nothing
of meaningful action, might only be possible within communities that can sustain both a shared vision
of the good life and the practices that embody such a vision. The problem, of course, is that our technolo-
gies operate at a scale that eclipses the scope of such communities.

In “Friday’s Child,” Auden makes the parenthetical observation, “When kings were local, people
knelt.” Likewise, we might say that when technology was local, people ruled. Something changed once
technology ceased to be local, that is to say once it evolved into complex systems that overlapped com-
munities, states, countries, and cultures. Traditional institutions and cultural norms were no longer ade-
quate. They could not scale up to keep pace with technology because their natural habitat was the local
community.

A final set of observations: Modern technology, in the broadest sense we might imagine the phe-
nomenon, closer to what Ellul means by technique, is formative. It tacitly conveys an anthropology, an
understanding of what it means to be a human being. It does so in the most powerful way possible:
inarticulately, as something more basic than a worldview or an ideology. It operates on our bodies, our
perception, our habits; it shapes our imagination, our relationships, our desires.

The modern liberal order abets technology’s formative power to the degree that it disavows any
strong claims about ethics and human flourishing. It is in the space of that disavowal that technology as
an implicit anthropology and an implicit politics takes root and expands, framing and conditioning any
subsequent efforts to subject it to ethical critique. Our understanding of the human is already condi-
tioned by our technological milieu. Fundamental to this tacit anthropology, or account of the human, is
the infinite malleability of human nature. Malleable humanity is a precondition to the unfettered expan-
sion of technology. (This is why transhumanism is the proper eschatology of our technological order.
Ultimately, humanity must adapt and conform, even if it means the loss of humanity as we have known
it. As explicit ideology, this may still seem like a fringe position; as implicit practice, however, it is
widely adopted.)

All of this accounts for why previous calls for more humane technology have not amounted to
much. And this would be one other quibble I have with the work of the Center for Humane Technology
and others calling for humanistic technology: thus far there seems to be little awareness of or interest in
a longstanding history of tech criticism that should inform their efforts. Again, this is not about critical
hipsterism, it is about drawing on a diverse intellectual tradition that contains indispensable wisdom for
anyone working toward more ethical and humane technology. Maybe that work is still to come. I hope
that it is.

March 11, 2018


38. Beyond the Trolley Car: The Moral Pedagogy of
Ethical Tools
It is almost impossible to read about the ethics of autonomous vehicles without encountering some
version of the trolley car problem. You’re familiar with the general outline of the problem, I’m sure.
An out-of-control trolley car is barreling toward five unsuspecting people on a track. You are able to
pull a lever and redirect the trolley toward another track, but there is one person on this track who will
be hit by the trolley as a result. What do you do? Nothing and let five people die, or pull the lever and
save five at the expense of one person’s life?

The thought experiment has its origins in a paper by the philosopher Philippa Foot on abortion and
the concept of double effect. I’m not sure when it was first invoked in the context of autonomous vehi-
cles, but I first came across trolley car-style hypothesizing about the ethics of self-driving cars in a 2012
essay by Gary Marcus, which I learned about in a post on Nick Carr’s blog.

Following the death of a pedestrian who was struck by one of Uber’s self-driving vehicles in Ari-
zona, Evan Selinger and Brett Frischmann co-authored a piece at Motherboard using the trolley car
problem as a way of thinking about the moral and legal issues at stake. As Selinger and Frischmann
point out, the trolley car problem tends to highlight drastic and deadly outcomes, but there are a host of
non-lethal actions of moral consequence that an autonomous vehicle may be programmed to take. It’s
important that serious thought be given to such matters now before technological momentum sets in.

“So, don’t be fooled when engineers hide behind technical efficiency and proclaim to be free from
moral decisions,” the authors conclude. “’I’m just an engineer’ isn’t an acceptable response to ethical
questions. When engineered systems allocate life, death and everything in between, the stakes are in-
evitably moral.”

In a piece at The Atlantic, however, Ian Bogost recommends that we ditch the trolley car problem as
a way of thinking about the ethics of autonomous vehicles. It is, in his view, too blunt an instrument for
serious thinking about the ethical ramifications of autonomous vehicles. Bogost believes “that much
greater moral sophistication is required to address and respond to autonomous vehicles.” The trolley
car problem blinds us to the contextual complexity of morally consequential incidents that will in-
evitably arise as more and more autonomous vehicles populate our roads.

I wouldn’t go so far as to say that trolley car-style thought experiments are useless, but, with Bogost,
I am inclined to believe that they threaten to eclipse the full range of possible ethical and moral consid-
erations in play when we talk about autonomous vehicles.

For starters, the trolley car problem, as Bogost suggests, loads the deck in favor of a utilitarian mode
of ethical reflection. I’d go further and say that it stacks the deck in favor of action-oriented approaches
to moral reflection, whether rule-based or consequentialist. Of course, it is not altogether surprising that
when thinking about moral decision making that must be programmed or engineered, one is tempted by
ethical systems that may appear to reduce ethics to a set of rules to be followed or calculations to be ex-
ecuted.

In trolley car scenarios involving autonomous vehicles, it seems to me that two things are true: a
choice must be made and there is no right choice.

There is no right answer to the trolley car problem. It is a tragedy either way. The trolley car prob-
lem is best thought of as a question to think with, not a question to answer definitively. The point is not
to find the one morally correct way to act but to come to feel the burden of moral responsibility.

Moreover, when faced with trolley car-like situations in real life, rare as they may be, human beings
do not ordinarily have the luxury to reason their way to a morally acceptable answer. They react. It
may be impossible to conclusively articulate the sources of that reaction. If there is an ethical theory
that can account for it, it would be virtue ethics, not varieties of deontology or consequentialism.

If there is no right answer, then, what are we left with?

Responsibility. Living with the consequences of our actions. Justice. The burdens of guilt. Forgive-
ness. Redemption.

Such things are obviously beyond the pale of programmable ethics. The machine, with which our
moral lives are entwined, is oblivious to such subjective states. It cannot be meaningfully held to ac-
count. But this is precisely the point. The really important consideration is not what the machine will
do, but what the human being will or will not experience and what human capacities will be sustained
or eroded.

In short, the trolley car problem leads us astray in at least two related ways. First, it blinds us to the
true nature of the equivalent human situation: we react, we do not reason. Second, building on this ini-
tial misconstrual, we then fail to see that what we are really outsourcing to the autonomous vehicle is
not moral reasoning but moral responsibility.

Katherine Hayles has noted that distributed cognition (distributed, that is, among human and non-
humans) implies distributed agency. I would add that distributed agency implies distributed moral re-
sponsibility. But it seems to me that moral responsibility is the sort of thing that does not survive such
distribution. (At the very least, it requires new categories of moral, legal, and political thought.) And
this, as I see it, is the real moral significance of autonomous vehicles: they are but one instance of a
larger trend toward a material infrastructure that undermines the plausibility of moral responsibility.

Distributed moral responsibility is just another way of saying deferred or evaded moral responsibil-
ity.

Let’s consider this from another angle. The trolley car problem focuses our ethical reflection on the
accident. What if we were to ask not “What could go wrong?” but “What if it all goes right?” My point
in inverting this query is to remind us that technologies that function exactly as they should and fade
seamlessly into the background of our lived experience are at least as morally consequential as those
that cause dramatic accidents.

Well-functioning technologies we come to trust become part of the material infrastructure of our ex-
perience, which plays an important role in our moral formation. This material infrastructure, the stuff
of life with which we as embodied creatures constantly interact, both consciously and unconsciously, is
partially determinative of our habitus, the set of habits, inclinations, judgments, and dispositions we
bring to bear on the world. This includes, for example, our capacity to perceive the moral valence of
our experiences or our capacity to subjectively experience the burden of moral responsibility. In other
words, it is not so much a matter of specific decisions, although these are important, but of underlying
capacities, orientations, and dispositions.

I suppose the question I’m driving at is this: What is the implicit moral pedagogy of tools to which
we outsource acts of moral judgment?

While it might be useful to consider the trolley car, it’s important as well that we leave it behind for
the sake of exploring the fullest possible range of challenges posed by emerging technologies with
which our moral lives are increasingly entangled.

April 3, 2018
39. In Defense of Technology Ethics, Properly Under-
stood
One of the more puzzling aspects of technology discourse that I encounter with some frequency is
the tendency to oppose legal, political, and economic analysis of technology to ethical analysis. Maybe
I’m the weird outlier on this matter, but I just don’t see how it is helpful or even ultimately tenable to
oppose these two areas of analysis to one another. (Granted, I have my own reservations about “tech
ethics” discourse, but they’re of a different sort.)

I think I understand where the impulse to oppose law, politics, and economics to ethics comes from.
In its simplest form the idea is that ethics without legal, political, or economic action is, at best, basi-
cally a toothless parlor game. More likely, it’s a clever public relations scheme to put up a presentable
front that shields a company’s depredations from public view.

This is fine, I get that. But the answer to this very real possibility is not to deprecate ethical reflec-
tion regarding technology as such. The answer, it seems to me, is to move on all fronts.

What exactly do we think we’re aiming to accomplish with law, regulation, and economic policy
anyway if not acting to address some decidedly ethical concern regarding technology? Law, regulation,
and policy are grounded in some understanding of what is right, what is ethical, what is humane—in
short, what ought to be. It’s not clear to me how such determinations would not be improved by serious
and sustained reflection on the ethical consequences of technological change. If you set aside serious
reflection on the ethics of technology, it’s not as if you get to now properly focus on the more serious
work of law and policy apart from ethics; you simply get to work on law and policy from a more
naive, uninformed, and unacknowledged ethical foundation.

(As an example of the sort of work that does this well, consider Evan Selinger and Brett Frischman-
n’s Reengineering Humanity. Their book blends ethical/philosophical and legal/political analysis in a
way that recognizes the irreducible interrelations among these fields of human experience.)
Moreover, most of us would agree that law and public policy do not and decidedly should not en-
tirely encompass the whole scope of a society’s moral and ethical concerns. There is an indispensable
role to be played by norms and institutions that exist outside the scope of law and government. It is un-
clear to me how exactly these norms and institutions are to evolve so as to more effectively promote
the health of civil society and private life if they are not informed by the work of scholars, journalists,
educators, and writers who deliberately pursue a better understanding of technology’s ethical and moral
consequences. Technology’s moral and ethical consequences far exceed the scope of law and public
policy; are we to limit our thinking about these matters to what can be addressed by law and policy?
Might it not be possible that it is precisely these morally formative aspects of contemporary tech-
nology that have already compromised our society’s capacity to enact legal and political remedies?

Perhaps the very best thing we can do now is to focus on the hard, deliberate work of educating and
building with a view not to our own immediate future but to the forthcoming generations. As Dietrich
Bonhoeffer put it in another age, “The ultimate question for a responsible man to ask is not how he is to
extricate himself heroically from the affair, but how the coming generation is to live. It is only from
this question, with its responsibility towards history, that fruitful solutions can come, even if for the
time being they are very humiliating.”

(For a good discussion of Bonhoeffer’s view explicitly applied to our technological predicament, see
Alan Jacobs’s reflections here.)

Lastly, while we wait for better policies, better regulations, better laws to address the worst excesses
of the technology sector, what exactly are individuals supposed to do about what they may experience
as the disordering and disorienting impact of technology upon their daily lives? Wait patiently and do
nothing? Remain uninformed and passive? Better, I say, to empower individuals with legal protections
and knowledge. Encourage action and reflection: better yet, action grounded in reflection.

Yes, by all means let us also resist the cheap instrumentalization of ethics and the capture of ethical
reflection by the very interests that ought to be the objects of ethical critique and, where appropriate, le-
gal action. But please, let’s dispense with the rhetorical trope that opposes ethical reflection as such to
the putatively serious work of law and policy.

October 24, 2018


PART III

Media and the Self


40. The (Un)Naturalness of Privacy

Andrew Keen is not an Internet enthusiast, at least not since the emergence of Web 2.0. That much
has been clear since his 2007 book The Cult of the Amateur: How Today’s Internet is Killing Our
Culture and his 2006 essay equating Web 2.0 with Marxism in The Weekly Standard, a publication in
which such an equation is less than flattering. More recently, Keen has taken on all things social in
a Wired UK article, “Your Life Torn Open: Sharing is a Trap.”

Now, I’m all for a good critique and lampoon of the excesses of social media, really, but Keen may
have allowed his disdain to get the better of him and veered into unwarranted, misanthropic excess.
From the closing paragraph:

Today’s digital social network is a trap. Today’s cult of the social, peddled by an unholy al-
liance of Silicon Valley entrepreneurs and communitarian idealists, is rooted in a misunder-
standing of the human condition. The truth is that we aren’t naturally social beings. Instead, as
Vermeer reminds us in The Woman in Blue, human happiness is really about being left
alone. …. What if the digital revolution, because of its disregard for the right of individual pri-
vacy, becomes a new dark ages? And what if all that is left of individual privacy by the end of
the 21st century exists in museums alongside Vermeer’s Woman in Blue? Then what?

“Human happiness is really about being left alone” and “The truth is that we aren’t naturally social
beings”? Striking statements, and almost certainly wrong. It seems rather that the vast majority of hu-
man beings long for, and sometimes despair of never finding, meaningful and enduring relationships
with other human beings. That human flourishing is conditioned on the right balance of the private and
the social, individuality and relationship, seems closer to the mark. And while I suppose one could be
raised by wolves in legends and stories, I’d like to know how infants would survive biologically in iso-
lation from other human beings. On this count, better stick with Aristotle. The family, the clan, the
tribe, the city — these are closer to the ‘natural’ units of human existence.
The most ironic aspect of these claims is Keen’s use of Vermeer’s “Woman in Blue” or, more pre-
cisely, “Woman in Blue Reading a Letter,” to illustrate them. That she is reading a letter is germane to
the point at issue here which is the naturalness of privacy. Contrary to Keen’s assertion of the natural
primacy of privacy, it is closer to the truth to correlate privacy with literacy, particularly silent reading
(which has not always been the norm), and the advent of printing. Changing socio-economic conditions also factor into the rise of modern notions of privacy and the individual, notions later formalized by Locke and Hobbes, who enshrined the atomized individual as the foundation of society, notably, with founding myths that are entirely ahistorical.

In The Vineyard of the Text, Ivan Illich, citing George Steiner, suggests this mutual complicity of
reading and privacy:

According to Steiner, to belong to ‘the age of the book’ meant to own the means of reading.
The book was a domestic object; it was accessible at will for re-reading. The age presupposed
private space and the recognition of the right to periods of silence, as well as the existence of
echo-chambers such as journals, academies, or coffee circles.

Likewise, Walter Ong, drawing on Eric Havelock, explains that

By separating the knower from the known, writing makes possible increasingly articulate intro-
spectivity, opening the psyche as never before not only to the external objective world quite dis-
tinct from itself but also to the interior self against whom the objective world is set.

Privacy emerges from the dynamics of literacy. The more widespread literacy becomes, as for ex-
ample with the printing press, the more widespread and normalized the modern sense of privacy be-
comes. What Keen is bemoaning is the collapse of the experience of privacy wrought by print culture. I
do think there is something to mourn there, but to speak of its “naturalness” misconstrues the situation
and seems to beget a rather sociopathic view of human nature.

Finally, it is also telling that Vermeer’s woman is reading a letter. Letters, after all, are a social
genre; letter writing is a form of social life. To be sure, it is a very different form of social life than
what the social web offers, but it is social. And were we not social beings we would not, as Auden puts
it, “count some days and long for certain letters.” “The Woman in Blue Reading a Letter” reminds us
that privacy is bound to literate culture and human beings are bound to one another.

March 7, 2011
41. Living for the Moment in the Age of the Image

We live for the moment because the moment is what an image captures.

It’s not uncommon, I presume, to snap a picture again and again in the often vain attempt to get it
just so. Getting it just so in such cases entails matching the image captured by the photograph to the
image in our mind of what that moment should look like (and feel like).

Two questions follow.

First, where did that image in our mind come from? Likely from countless similar images we’ve
seen on Facebook or Pinterest or Flickr or television or Norman Rockwell or whatever.

The other night, I stood in a near empty section of a big box store waiting, surrounded by aisles of
Christmas decorations, enveloped in the projected sounds of Christmas music, and I thought to myself,
if this were a movie, this is the scene in which the director would zoom further and further out, show-
ing me standing there alone with would-be purchases in hand, and it would scream that tired-late-capi-
talist-suburban-ennui cliché.

And even if I had felt as much, not simply thought that this was what the image suggested, but actu-
ally felt that ennui, would it have been because I was, in fact, an instance of the case, or would it have
been because I had that pre-interpreted image in my head?

We’re Platonists, but our Ideas are not eternal, timeless Forms remembered from glimpses we
caught of them in some preexistent state of our souls. Our Ideas, against which we seek to test the
truthfulness and reality of our experience, are the Images that have become iconic commonplaces gen-
erated in the age of photography, film, and Madison Avenue, and now by social media on which we all
play Don Draper to our own curated brand identity.

The second question, then, is this: Why are we so intent on getting that image just so?
Because it is what our dominant forms of remembering will receive. To be remembered is to appear,
to be taken notice of, to be; so we desire deeply to remember and be remembered. So much so that we
will transform the logic by which we make sense of our lives so that our lives may be subject to means
of remembering.

In the age of stories, be they stories told by the rhapsode, the bard, or the novelist, what mattered
was the whole, not the part. Individual scenes were subordinate to the logic of the whole plot. They
gave one the sense that there was a beginning, middle, and end; and it was not until the end that the
whole significance of the beginning and the middle could be perceived, much less understood.

In the age of images this is reversed. In the image, the whole is instantaneously present. We crave
that moment and the image that captures it, and so we pose and point and click and frame and click and
click again and pose again, but naturally, and click.

Remember in Saving Private Ryan, how in the closing scene Ryan, played by Matt Damon, now far
advanced in years, breaks down before the grave of Capt. Miller, Tom Hanks’s character, and pleads
with his uncomprehending wife to tell him that he has lived a good life — a life that, in the end, made
sense of the sacrifice of those who died for him? We couldn’t care less about the good life taken whole and judged from the end, but, ah, we’d love to play a scene like that. It would work so well on YouTube, and it would feel just right, just then.

We are connoisseurs of the moments and scenes that the camera can frame, but we have little pa-
tience or taste for the satisfactions that arise not in the moment, but in deferred time, when, long after
the moment has passed, it may finally be understood in light of some larger canvas. How, then, could
we be expected to take notice of and live in light of some as yet future whole? There is no memory to
sustain such a project any longer. But there is memory enough and more to sustain the capture, storage,
and retrieval of the moment.

I’ve suggested that Facebook might undermine the quest for the narrative unity of a life. This was
naïve. It is not that Facebook undermines the quest for narrative unity, it is that Facebook makes such a
quest implausible to begin with. Facebook — as a means of remembering, as our treasury of memory
— receives the image, not the story. No one will write our story, and even if someone would, who
would have the patience to listen to or read it?
If we will remember and be remembered, it will be by the image — and so we will live for the im-
age, for the moment.

December 3, 2012
42. Et in Facebook ego

In Nicolas Poussin’s mid-seventeenth century painting, Et in Arcadia ego, shepherds have stumbled
upon an ancient tomb on which the titular words are inscribed. Understood to be the voice of death, the
Latin phrase may be roughly translated, “Even in Arcadia there am I.” Because Arcadia symbolized a
mythic pastoral paradise, the painting suggested the ubiquity of death. To the shepherds, the tomb was
a memento mori: a reminder of death’s inevitability.

Poussin was not alone among artists of the period in addressing the certainty of death. During the
seventeenth and eighteenth centuries, vanitas art flourished. The designation stems from the Latin phrase vanitas vanitatum omnia vanitas, a recurring refrain throughout the biblical book of Ecclesiastes: “van-
ity of vanities, all is vanity,” in the King James translation. Paintings in the genre were still lifes depict-
ing an assortment of objects which represented all that we might pursue in this life: love, power, fame,
fortune, happiness. In their midst, however, one might also find a skull or an hour glass. These were
symbols of death and the brevity of life. The idea, of course, was to encourage people to make the most
of their living years.

For the most part, we don’t go in for this sort of thing anymore. Few people, if any, operate under
the delusion that we might escape death (excepting, perhaps, the Singularity crowd), but we do a pretty
good job of forgetting what we know about death. We keep death out of sight and, hence, out of mind.
We’re certainly not going out of our way to remind ourselves of death’s inevitability. And, who knows,
maybe that’s for the better. Maybe all of those skulls and hourglasses were morbidly unhealthy.

But while vanitas art has gone out of fashion, a new class of memento mori has emerged: the social
media profile.

I’m one of those on again, off again Facebook users. Lately, I’ve been on again, and recently I no-
ticed one of those birthday reminders Facebook places in the column where it puts all of the things
Facebook would like you to click on. It was for a high school friend who I had not spoken to in over
eight years. It was in that respect a very typical Facebook friendship: the sort that probably wouldn’t
exist at all were it not for Facebook. And that’s not necessarily a knock on the platform. For the most
part, I appreciate being able to maintain at least minimal ties to old friends. In this case, though, it
demonstrated just how weak those ties can be.

Upon clicking over to their profile, I read a few odd notes, and very quickly it became disconcert-
ingly clear that my friend had died over a year ago. Naturally, I was taken aback and saddened. He
died while I was off Facebook, and news had not reached me by any other channel. But there it was.
Out of nowhere and without warning my browser was haunted by the very real presence of death. Me-
mento mori.

Just a few days prior I logged on to Facebook and was greeted by the tragic news of a former stu-
dent’s sudden passing. Because we had several mutual connections, photographs of the young man
found their way into my news feed for several days. It was odd and disconcerting and terribly sad all at
once. I don’t know what I think of social media mourning. It makes me uneasy, but I won’t criticize
what might bring others solace. In any case, it is, like death itself, an unavoidable reality of our social
media experience.

Facebook sometimes feels like a modern-day Arcadia. It is a carefully cultivated space in which life
appears Edenic. The pictures are beautiful, the events exciting, the faces always smiling, the children
always amusing, the couples always adoring. Some studies even suggest that comparing our own expe-
rience to these immaculately curated slices of life leads to envy, discontent, and unhappiness. Under-
standably so … if we assume that these slices of life are comprehensive representations of the lives
people actually lead. Of course, they are not.

Lest we be fooled, however, there, alongside the pets and witty status updates and wedding pictures
and birth announcements, we will increasingly find our virtual Arcadias haunted by the digital, disem-
bodied presence of the dead. Our digital memento mori.

Et in Facebook ego.

June 23, 2013


43. Dead and Going to Die

It’s not uncommon to hear someone say that they were haunted by an image, often an old photo-
graph. It is a figurative and evocative expression. To say that an image is haunting is to say that the im-
age has lodged itself in the mind like a ghost might stubbornly take up residence in a house, or that it
has somehow gotten a hold of the imagination and in the imagination lives on as a spectral after-image.
When we speak of images of the deceased, of course, the language of haunting approaches its literal
meaning. In these photographs, the dead enjoy an afterlife in the imagination.

I’ve lately been haunted myself by one such photograph. It is a well-known image of Lewis Powell,
the man hanged for his failed attempt to assassinate Secretary of State William Seward. On the same
night that John Wilkes Booth murdered the president, Powell was to kill the secretary of state and their
co-conspirator, George Atzerodt, was to kill Vice-President Andrew Johnson. Atzerodt failed to at-
tempt the assassination altogether. Powell followed through, and, although Seward survived, he in-
flicted tremendous suffering on the Seward household.

I came upon the haunting image of Powell in a series of recently colorized Civil War photographs,
and I was immediately captivated by the apparent modernity of the image. Nineteenth century photo-
graphs tend to have a distinct feel, one that clearly announces the distant “pastness” of what they have
captured. That they are ordinarily black-and-white only partially explains this effect. More signifi-
cantly, the effect is communicated by the look of the people in the photographs. It’s not the look of
their physical appearance, though; rather, it’s the “look” of their personality.

There is a distinct subjectivity—or, perhaps, a lack thereof—that emerges from these old photographs.
There is something in the eyes that suggests a way of being in the world that is foreign and impenetra-
ble. The camera is itself a double cause of this dissonance. First, the subjects seem unsure of how to
position themselves before the camera; they are still unsettled, it seems, by the photographic technique.
They seem to be wrestling with the camera’s gaze. They are too aware of it. It has rendered them ob-
jects, and they’ve not yet managed to negotiate the terms under which they may recover their status as
subjects in their own right. In short, they had not yet grown comfortable playing themselves before the
camera, with the self-alienated stance that such performance entails.

But then there is this image of Powell, which looks as if it could have been taken yesterday and
posted on Instagram. The gap in consciousness seems entirely closed. The “pastness” is eclipsed. Was
this merely a result of his clean-shaven, youthful air? Was it the temporal ambiguity of his clothing or
of the way he wore his hair? Or was Powell on to something that his contemporaries had not yet
grasped? Did he hold some clue about the evolution of modern consciousness? I went in search of an
answer, and I found that the first person I turned to had been there already.

Death on Film

Roland Barthes’ discussion of death and photography in Camera Lucida: Reflections on Photography has achieved canonical status, and I turned to his analysis in order to shed light on my ex-
perience of this particular image that was so weighted with death. I soon discovered that an image of
Powell appears in Camera Lucida. It is not the same image that grabbed my attention, but a similar
photograph taken at the same time. In this photograph, Powell is looking at the camera, the manacles
that bind his hands are visible, but still the modernity of expression persists.

Barthes was taken by the way that a photograph suggests both the “that-has-been” and the “this-
will-die” aspects of a photographic subject. His most famous discussion of this dual gesture involved a
photograph of his mother, which does not appear in the book. But a shot of Powell is used to illustrate a
very similar point. It is captioned, “He is dead, and he is going to die …” The photograph simultane-
ously witnesses to three related realities. Powell was; he is no more; and, in the moment captured by
this photograph, he is on his way to death.

Barthes also borrowed two Latin words for his analysis: studium and punctum. The studium of a
photograph is its ostensible subject matter and what we might imagine the photographer seeks to con-
vey through the photograph. The punctum by contrast is the aspect that “pricks” or “wounds” the
viewer. The experience of the punctum is wholly subjective. It is the aspect that disturbs
the studium and jars the viewer. Regarding the Powell photograph, Barthes writes,
“The photograph is handsome, as is the boy: that is the studium. But the punctum is: he is going
to die. I read at the same time: this will be and this has been; I observe with horror an anterior
future of which death is the stake. By giving me the absolute past of the pose, the photograph
tells me death in the future. What pricks me is the discovery of this equivalence.”

In my own experience, the studium was already the awareness of Powell’s impending death.
The punctum was the modernity of Powell’s subjectivity. Still eager to account for the photograph’s ef-
fect, I turned from Barthes to historical sources that might shed light on the photographs.

The Gardner Photographs

The night of the assassination attempt, Powell entered the Seward residence claiming that he was
asked to deliver medicine for Seward. When Seward’s son, Frederick, told Powell that he would take
the medicine to his father, Powell handed it over, started to walk away, but then wheeled on Frederick
and put a gun to his head. The gun misfired and Powell proceeded to beat Frederick over the head with
it. He did so with sufficient force to crack Frederick’s skull and jam the gun.

Powell then pushed Seward’s daughter out of the way as he burst into the secretary of state’s room.
He leapt onto Seward’s bed and repeatedly slashed at Seward with a knife. Seward was likely saved by
an apparatus he was wearing to correct an injury to his jaw sustained days earlier. The apparatus de-
flected Powell’s blows from Seward’s jugular. Powell then wounded two other men, including another
of Seward’s sons, as they attempted to pull him off of Seward. As he fled down the stairs, Powell also
stabbed a messenger who had just arrived. Like everyone else who was wounded that evening, the mes-
senger survived, but he was paralyzed for life.

Powell then rushed outside to discover that a panicky co-conspirator who was to help him make his
getaway had abandoned him. Over the course of three days, Powell then made his way to a boarding-
house owned by Mary Surratt where Booth and his circle had plotted the assassinations. He arrived,
however, just as Surratt was being questioned, and, not providing a very convincing account of himself,
he was taken into custody. Shortly thereafter, Powell was picked out of a lineup by one of Seward’s
servants and taken aboard the ironclad USS Saugus to await his trial.
It was aboard the Saugus that Powell was photographed by Alexander Gardner, a Scot who had
made his way to America to work with Mathew Brady. According to Powell’s biographer, Betty
Ownsbey, Powell resisted having his picture taken by vigorously shaking his head when Gardner pre-
pared to take a photograph. Given the exposure time, this would have blurred his face beyond recogni-
tion. Annoyed by Powell’s antics, H. H. Wells, the officer in charge of the photo shoot, struck Powell’s
arm with the side of his sword. At this, Major Eckert, an assistant to the secretary of war who was there
to interrogate Powell, interposed and reprimanded Wells.

Powell then seems to have resigned himself to being photographed, and Gardner proceeded to take
several shots of Powell. Gardner must have realized that he had something unique in these exposures
because he went on to copyright six images of Powell. He didn’t bother to do so with any of the other
pictures he took of the conspirators. Historian James Swanson explains:

“[Gardner’s] images of the other conspirators are routine portraits bound by the conventions of
nineteenth century photography. In his images of Powell, however, Gardner achieved some-
thing more. In one startling and powerful view, Powell leans back against a gun turret, relaxes
his body, and gazes languidly at the viewer. There is a directness and modernity in Gardner’s
Powell suite unseen in the other photographs.”

My intuition was re-affirmed, but the question remained: What accounted for the modernity of these
photographs?

Resisting the Camera’s Gaze

Ownsbey’s account of the photo shoot contained an important clue: Powell’s subversive tactics.
Powell clearly intuited something about his position before the camera that he didn’t like. He attempted
one form of overt resistance, but appears to have decided that this choice was untenable. He then seems
to acquiesce. But what if he wasn’t acquiescing? What if the modernity that radiates from these pic-
tures arises out of Powell’s continued resistance by other means?

Powell could not avoid the gaze of the camera, but he could practice a studied indifference to it. In
order to resist the gaze, he would carry on as if there were no gaze. To ward off the objectifying power
of the camera, he had to play himself before the camera. Simply being himself was out of the question;
the observer effect created by the camera’s presence so heightened one’s self-consciousness that it was
no longer possible to simply be. Simply being assumed self-forgetfulness. The camera does not allow
us to forget ourselves. In fact, as with all technologies of self-documentation, it heightens self-con-
sciousness. In order to appear indifferent to the camera, Powell had to perform the part of Lewis Powell
as Lewis Powell would appear were there no camera present.

In doing so, Powell stumbled upon the negotiated settlement with the gaze of the camera that eluded
his contemporaries. He was a pioneer of subjectivity. Before the camera, many of his contemporaries
either stared blankly, giving the impression of total vacuity, or else they played a role–the role of the
brave soldier, or the statesman, or the lover, etc. Powell found another way. He played himself. There
was nothing new about playing a role, of course. But playing yourself, that seems a watershed of con-
sciousness. Playing a role entails a deliberate putting on of certain affectations; playing yourself sug-
gests that there is nothing to the self but affectations. The anchor of identity in self-forgetfulness is
lifted and the self is set adrift. Perhaps the violence that Powell had witnessed and perpetrated prepared
him for this work against his psyche.

If indeed this was Powell’s mode of resistance, it was Pyrrhic: ultimately it entailed an even more
profound surrender of subjectivity. It internalized the objectification of the self which the external presence of the camera elicited. This is what gave Powell’s photographs their eerie modernity.
They were haunted by the future, not the past. It wasn’t Powell’s imminent death that made them un-
canny; it was the glimpse of our own fractured subjectivity. Powell’s struggle before the camera, then,
becomes a parable of human subjectivity in the age of pervasive documentation. We have learned to
play ourselves with ease, and not only before the camera. The camera is now irrelevant.

In the short time that was left to him after the Gardner photographs were taken, Powell went on to
become a minor celebrity. He was, according to Swanson, the star attraction at the trial of Booth’s co-
conspirators. Powell “fascinated the press, the public, and his own guards.” He was, in the words of a
contemporary account, “the observed of all observers, as he sat motionless and imperturbed, defiantly
returning each gaze at his face and person.” But the performance had its limits. Although Ownsbey has
raised reasonable doubts about the claim, it was widely reported that Powell had attempted suicide by
repeatedly pounding his head against a wall.
On July 7, 1865, a little over two months after the Gardner photographs were taken, Powell was hanged with
three of his co-conspirators. It doesn’t require Barthes’ critical powers to realize that death saturates the
Powell photographs, but death figured only incidentally in the reading I’ve offered here. It is not, how-
ever, irrelevant that this foray into modern consciousness was undertaken under the shadow of death. It
is death, perhaps, that gave Powell’s performance its urgency. And perhaps it is now death that serves
as the last lone anchor of the self.

October 12, 2013


44. Eight Theses Regarding Social Media

1. Social media are the fidget spinners of the soul.

2. Each social media platform is a drug we self-prescribe and consume in order to regulate our emo-
tional life, and we are constantly experimenting with the cocktail.

3. Law of Digital Relativity: Perception of space and time is relative to the digital density of the ob-
server’s experience.

4. Affect overload is a more serious problem than information overload. The product of both is
moral apathy and mental exhaustion.

5. While text and image flourish online, the psycho-dynamics of digital culture are most akin to
those of oral cultures (per Walter Ong).

6. Just as the consumer economy was boundless in its power to commodify, so the attention econ-
omy is boundless in its power to render reality standing reserve for the project of identity
construction/performance. The two processes, of course, are not unrelated.

7. In the attention economy, strategic silence is power. But, because of the above, it is also a deeply
demanding practice of self-denial.

8. Virtue is self-forgetting. The structures of social media make it impossible to forget yourself.

May 23, 2017


45. The Interrupted Self

In Letters From Lake Como: Explorations in Technology and the Human Race, written in the
1920s, Romano Guardini related the following experience: “I recall going down a staircase, and sud-
denly, when my foot was leaving one step and preparing to set itself down on another, I became aware
of what I was doing. I then noted what self-evident certainty is displayed in the play of muscles. I felt
that a question was thus raised concerning motion.”

“This was a triviality,” Guardini acknowledges, “and yet it tells us what the issue is here.” He goes
on to explain the “issue” as follows:

Life needs the protection of nonawareness. We are told this already by the universal psycholog-
ical law that we cannot perform an intellectual act and at the same time be aware of it. We can
only look back on it when it is completed. If we try to achieve awareness of it when we are do-
ing it, we can do so only by always interrupting it and thus hovering between the action and
knowledge of it. Obviously the action will suffer greatly as a result. It seems to me that this typ-
ifies the life of the mind and spirit as a whole. Our action is constantly interrupted by reflection
on it. Thus all our life bears the distinctive character of what is interrupted, broken. It does not
have the great line that is sure of itself, the confident movement deriving from the self.

It seems to me that the tendency Guardini identifies here has only intensified during the nearly 100
years since he wrote down his observations.

As an aside, I find works like Guardini’s useful for at least two reasons. The first, perhaps more ob-
vious, reason is that they offer genuine insights that remain applicable in a more or less straightforward
way. The second, perhaps less obvious, reason is that they offer a small window into the personal and
cultural experience of technological change. When we think about the difference technologies make in
our life and for society more broadly, we often have only our experience by which to judge. But, of
course, we don’t know what we don’t know, or we can’t remember what we have never known. And
this is especially the case when we consider what we might call the existential or even affective aspects
of technological change.

Returning to Guardini, as he notes in the letter on “Consciousness” from which that paragraph was
taken, literature was only one sphere of culture where this heightened consciousness was making itself
evident.

I can’t know what literary works Guardini had in mind, but there is one scene in Tolstoy’s short
novel, The Death of Ivan Ilyich (1886), that immediately sprang to mind. Early on in the story, which
begins with Ilyich’s death, a co-worker, Peter Ivanovich, has come to Ilyich’s home to pay his respects.
Upon entering the room where Ilyich’s body lay, Peter Ivanovich is uncertain as to how to proceed:

Peter Ivanovich, like everyone else on such occasions, entered feeling uncertain what he would
have to do. All he knew was that at such times it is always safe to cross oneself. But he was not
quite sure whether one should make obeisances while doing so. He therefore adopted a middle
course. On entering the room he began crossing himself and made a slight movement resem-
bling a bow.

I’ve come to read this scene as a microcosm of an extended, possibly recurring, cultural moment in
the history of modernity, one that illustrates the emergence of self-consciousness.

Here is Peter Ivanovich, entering into a socially and psychologically fraught encounter with the
presence of death. It is the sort of moment for which a robust cultural tradition might prepare us by
supplying scripts that would relieve us of the burden of knowing just what to do while also conveying
to us a meaning that renders the event intelligible. But Peter Ivanovich faces this encounter at a mo-
ment when the old traditions are only half-recalled and no new forms have arisen to take their place.
He lives, that is, in a moment when, as Gramsci evocatively put it, the old is dying and the new cannot
be born. In such a moment, he is thrown back upon himself: he must make choices, he must improvise,
he must become aware of himself as one who must do such things.

His action, as Guardini puts it, “bears the distinctive character of what is interrupted.”

“Peter Ivanovich,” we go on to read, “continued to make the sign of the cross slightly inclining his
head in an intermediate direction between the coffin, the Reader, and the icons on the table in a corner
of the room. Afterwards, when it seemed to him that this movement of his arm in crossing himself had
gone on too long, he stopped and began to look at the corpse.”

He is not inhabiting a ritual act; he is performing it, and badly, as all such performances must be. “He
felt a certain discomfort,” the narrator tells us, “and so he hurriedly crossed himself once more and
turned and went out of the door — too hurriedly and too regardless of propriety, as he himself was
aware.”

I’m not suggesting that Tolstoy intended this scene as a commentary on the heightened conscious-
ness generated by liquid modernity, only that I have found in Peter Ivanovich’s awkwardness a memo-
rable dramatic illustration of such.

Technology had a role to play in the generation of this state of affairs, particularly technologies of
self-expression or technologies that represent the self to itself. It was one of Walter Ong’s key con-
tentions, for example, that “writing heightened consciousness.” This was, in his view, a generally good
thing. Of course, writing had been around long before Tolstoy was active in the late 19th century. He
lived during an age when new technologies worked more indirectly to heighten self-consciousness by
eroding the social structures that anchored the experience of the self.

In the early 20th century, Guardini pointed to, among other things, the rise of statistics and the bu-
reaucracies that they empowered and to newspapers as the sources of a hypertrophied consciousness.
We might substitute so-called Big Data and social media for statistics and newspapers. Or rather, with regard to consciousness, we should understand the interlocking regimes of the quantified self and so-
cial media as just a further development along the same trajectory. Fitbits and Facebook amplify our
consciousness by what they claim to measure and by how they position the self vis-a-vis the self.

It seems to me that this heightened sense of self-consciousness is both a blessing and a curse and
that it is the condition out of which much of our digital culture emerges. For those who experience it as
a curse it can be, for example, a paralyzing and disintegrating reality. It may, under such circumstances,
further yield resentment, bitterness, and self-loathing (consider Raskolnikov or the Underground Man).
Those who are thus afflicted may seek for renewed integrity through dramatic and/or violent acts, acts
that they believe will galvanize their identity. Others may cope by adopting the role of happy nihilist or
liberal ironist. Still others may double-down and launch out on the self-defeating quest for authenticity.
“Plants can grow only when their roots are in the dark,” Guardini wrote as he closed his letter on
consciousness. “They emerge from the dark into the light. That is the direction of life. The plant and its
direction die when the root is exposed. All life must be grounded in what is not conscious and from that
root emerge into the brightness of consciousness. Yet I see consciousness becoming more and more
deeply the root of our life.”

All of this leads him to ask in conclusion, “Can life sustain this? Can it become consciousness and at
the same time remain alive?”

April 21, 2018


46. Social Media and Loneliness

In September of last year, psychologist Jean Twenge published a widely-discussed essay in The At-
lantic titled “Have Smartphones Destroyed a Generation?” In it she wrote, “Social-networking sites
like Facebook promise to connect us to friends. But the portrait of iGen teens emerging from the data is
one of a lonely, dislocated generation.”

In 2012, The Atlantic also ran Stephen Marche’s essay “Is Facebook Making Us Lonely?” (I should
note in passing that The Atlantic, in my view, has a penchant for misleading titles.) It, too, was widely
discussed, including by me here. Marche noted that “Facebook arrived in the middle of a dramatic in-
crease in the quantity and intensity of human loneliness, a rise that initially made the site’s promise of
greater connection seem deeply attractive.”

In 2000, sociologist Robert Putnam published Bowling Alone: The Collapse and Revival of
American Community, which has now become something of a classic. At least we may say that the title
has entered the popular lexicon. In the opening chapter, Putnam writes, “For the first two-thirds of the
twentieth century a powerful tide bore Americans into ever deeper engagement in the life of their com-
munities, but a few decades ago — silently, without warning — that tide reversed and we were over-
taken by a treacherous rip current. Without at first noticing, we have been pulled apart from one an-
other and from our communities over the last third of the century.”

In 1958, Hannah Arendt published The Human Condition in which she discussed the rise of the “so-
cial,” a realm she distinguished from the private and the public spheres. It was marked by anonymity.
In The Origins of Totalitarianism, published in 1951, she argued that loneliness and isolation were the
seedbeds of totalitarianism.

In 1950, David Riesman, Nathan Glazer, and Reuel Denney published their landmark study of the
emerging character type of the American middle class, The Lonely Crowd. It is worth noting that Ries-
man did not just describe the inner-directed person and the other-directed person. Both were preceded, in his account, by the tradition-directed person.
In 1913, Willa Cather had one of her characters in O Pioneers! describe his life in the cities this
way:

Freedom so often means that one isn’t needed anywhere. Here you are an individual, you have a
background of your own, you would be missed. But off there in the cities there are thousands of
rolling stones like me. We are all alike; we have no ties, we know nobody, we own nothing.
When one of us dies, they scarcely know where to bury him. Our landlady and the delicatessen
man are our mourners, and we leave nothing behind us but a frock-coat and a fiddle, or an easel,
or a typewriter, or whatever tool we got our living by. All we have ever managed to do is to pay
our rent, the exorbitant rent that one has to pay for a few square feet of space near the heart of
things. We have no house, no place, no people of our own. We live in the streets, in the parks,
in the theaters. We sit in restaurants and concert halls and look about at the hundreds of our own
kind and shudder.

I could go on, but you get the point.

One response to all of this might be to say that we have always been fretting about loneliness, so
there must actually be nothing to it. It is merely another instance of the discourse of moral panic.

Putnam himself cites Barry Wellman to this effect in the opening chapter of Bowling Alone:

It is likely that pundits have worried about the impact of social change on communities ever
since human beings ventured beyond their caves…. In the [past] two centuries many leading so-
cial commentators have been gainfully employed suggesting various ways in which large-scale
social changes associated with the Industrial Revolution may have affected the structure and op-
eration of communities…. This ambivalence about the consequences of large-scale changes
continued well into the twentieth century. Analysts have kept asking if things have, in fact,
fallen apart.

I have a hard time with the implicitly dismissive tone of this kind of commentary. Yes, for the past
two centuries social commentators have been reflecting on the effects of large-scale social change on
communities; those two centuries witnessed the often rapid dissolution of the communities and social
structures that provided the background against which an integrated experience of self and place
emerged. It seems perfectly normal for people to be critically reflective and ambivalent when they
sense that the only world they’ve known is coming apart.

A better approach might be to grant that there is something about the structure of modern society
that engenders isolation and loneliness. The concern recurs because it is never really resolved. Each
generation merely confronts a new iteration of the same underlying dynamic.

Most recently, a survey conducted by Cigna has once more drawn attention to the seemingly peren-
nial problem of loneliness. NPR reported on the findings:

Using one of the best-known tools for measuring loneliness — the UCLA Loneliness Scale —
Cigna surveyed 20,000 adults online across the country. The University of California, Los An-
geles tool uses a series of statements and a formula to calculate a loneliness score based on re-
sponses. Scores on the UCLA scale range from 20 to 80. People scoring 43 and above were
considered lonely in the Cigna survey, with a higher score suggesting a greater level of loneli-
ness and social isolation.

More than half of survey respondents — 54 percent — said they always or sometimes feel that
no one knows them well. Fifty-six percent reported they sometimes or always felt like the peo-
ple around them “are not necessarily with them.” And 2 in 5 felt like “they lack
companionship,” that their “relationships aren’t meaningful” and that they “are isolated from
others.”

Readers of this blog may be especially interested in the relationship of technology to the condition
of chronic loneliness. In the first essay linked above, Jean Twenge argued that there is, in fact, a con-
nection among smart phones, social media, and heightened levels of anxiety, depression, and loneli-
ness. The Cigna survey, on the other hand, did not find a significant correlation between social media
use and loneliness although it did confirm a rise in loneliness among the youngest cohort.

As in most cases of this sort, it is impossible to set down, as a rule, how a given technology or set of
technologies will impact any given person. There are simply too many variables. The best we can do is
identify general trends and tendencies. And even that may not be very useful when it comes down to it.
I’ve never understood why we should be relieved when we read about a study which concludes that a
majority of people do not experience some negative consequence of technology. What about the often
sizable minority that does? Do they not matter?

It seems clear enough that the problem of isolation and loneliness predates the advent of the Inter-
net, social media, and smart phones. It seems clear as well, contrary to the claims of the prophets and
ideologues of connection, that they mediate relationships in a manner that does not assuage the root
causes of isolation and loneliness. What’s more, it may often be the case that they aggravate the condi-
tion.

Consider for a moment the much and rightly maligned business model at work on social media plat-
forms like Facebook. It requires engagement, it needs you to keep coming back and spending time on
the platform. In order to do this, the platforms have been designed for addiction (compulsive interac-
tion if you’d rather not employ the language of addiction). Deep and abiding emotional satisfaction is
not conducive to the patterns of engagement social media companies want to see from me and you.
Loneliness and anxiety work in their favor. They exploit the human desire for companionship and be-
longing.

I’m going to close by venturing a rather fraught and, as they say, problematic analogy. In relation-
ship to the problem of loneliness, maybe we should think of social media as a painkiller. It’s fairly ef-
fective, at least initially, at treating the symptoms. But the underlying condition is never touched. It
persists. You find over time that you’re getting diminishing returns, so you turn to it more frequently.
Eventually, you might even find that you’re taking the painkiller compulsively and, on the whole, it’s
left you feeling worse. Not only has the root condition remained, now you have two problems rather
than one.

Technological fixes rarely alleviate and often exacerbate social disorders, especially those that in-
volve the most deeply engrained desires of the human heart. We’re better off refusing to treat social
media as a remedy for loneliness or even as a form of community. Chastened expectations may be the
best way to use a tool without being used by the tool in turn.

May 3, 2018
47. Early Modern and Digital Reading Practices

When I wrote the About page for this blog I cited an article by Alan Jacobs from several years ago
in which he likened blogs to commonplace books. Commonplace books, especially popular during the
sixteenth century when printing first began to yield an avalanche of relatively affordable books, served
as a means of ordering and making sense out of the massive amounts of information confronting early
modern readers. As is frequently noted, the dismay and disorientation they experienced are not alto-
gether unlike the angst that sometimes accompanies our recent and ongoing digital explosion of avail-
able information. And so, taking a cue from Jacobs, I intended for this blog to be something akin to a
commonplace book.

As it turned out, the analogy was mostly suggestive. Much that I write here does not quite fit the
commonplace genre. Nonetheless, something of the spirit, if not the law, persists. The commonplace
genre would find a nearer kin in Tumblr than in traditional blogs.

In a 2000 essay reprinted in The Case for Books (2009), historian of the book Robert Darnton also
reflects on commonplace books and the scholarly attention they attracted. The attention was not mis-
placed. Commonplace books offered a window into the reading practices and mental landscape of their
users; and for an era in which they were widely kept, they could offer a glimpse at the mental land-
scape of whole segments of society as well. In the spirit of the commonplace book, here are some ex-
cerpts from Darnton’s essay with a few reflections.

Describing the practice of commonplacing:

“It involved a special way of taking in the printed word. Unlike modern readers, who follow the
flow of a narrative from beginning to end (unless they are digital natives and click through texts
on machines), early modern Englishmen read in fits and starts and jumped from book to book.
They broke texts into fragments and assembled them into new patterns by transcribing them in
different sections of their notebooks.
Then they reread the copies and rearranged the patterns while adding more excerpts. Reading
and writing were therefore inseparable activities. They belonged to a continuous effort to make
sense of things, for the world was full of signs: you could read your way through it; and by
keeping an account of your readings, you made a book of your own, one stamped with your
own personality.”

What is only a parenthetical aside in Darnton’s opening paragraphs was for me a key insight. Darn-
ton’s description of commonplacing could easily be applied to the forms of reading practiced with digi-
tal texts, all the way down to the personalization. What is missing, of course, and this is no small thing,
is the public or social dimension.

On what commonplace books reveal:

“By selecting and arranging snippets from a limitless stock of literature, early modern English-
men gave free play to a semi-conscious process of ordering experience. The elective affinities
that bound their selections into patterns reveal an epistemology at work below the surface.”

That last sentence could easily function as a research paradigm for analysis of social media. Map the
“elective affinities” of what Facebook or Twitter or Google+ users link and post and the emergent pat-
terns will be suggestive of underlying epistemologies. Although here again the social dimension com-
plicates the matter considerably. The “elective affinities” on display in social networking sites are per-
formative in a way that private commonplacing was not, thus injecting a layer of distorting self-reflex-
ivity. But, then, that performative dimension is interesting on its own terms.

Commonplacing as reading for action:

“But they read in the same way — segmentally, by concentrating on small chunks of text and
jumping from book to book, rather than sequentially, as readers did a century later, when the
rise of the novel encouraged the habit of perusing books from cover to cover. Segmental read-
ing compelled its practitioners to read actively, to exercise critical judgment, and to impose
their own pattern on their reading matter. It was also adapted to ‘reading for action,’ an appro-
priate mode for men like Drake, Harvey, [etc.] and other contemporaries, who consulted books
in order to get their bearings in perilous times, not to pursue knowledge for its own sake or to
amuse themselves.”

Again the resemblance between early modern reading practices as described by Darnton and digital
reading practices is uncanny. The rise of sustained, linear reading is often attributed to the appearance
of printing. Darnton, however, would have us connect sustained, cover-to-cover reading with the later
rise of the novel. In this case, the age of the novel stands as an interlude between early modern and dig-
ital forms of reading which are more similar to one another than either is to reading as practiced in the
age of the novel.

The idea of “reading for action” is also compelling as it suggests the agonistic character of both
early modern English politics and early 21st century American politics. I suspect that a good deal of
online reading today is done in the spirit of loading a gun. At least this is often the ethos of the political
blogosphere.

Nonetheless, Darnton would have us see that this form of reading, at least in its early modern mani-
festation, had its merits in what it required from the reader as an active agent.

Finally, on reading and the attempt to make sense of out of experience:

“… we may pay closer attention to reading as an element in what used to be called the history
of mentalities — that is, world views and ways of thinking. All the keepers of commonplace
books, from Drake to Madan, read their way through life, picking up fragments of experience
and fitting them into patterns. The underlying affinities that held those patterns together repre-
sented an attempt to get a grip on life, to make sense of it, not by elaborating theories but by im-
posing form on matter.”

Early modern Britons and those of us who are living through the digital revolution (an admittedly
overplayed phrase) share a certain harried and anxious disposition. It was, after all, the early modern
poet John Donne, who wrote of his age, “Tis all in pieces, all coherence gone.” Early moderns de-
ployed the commonplace book as a means of collecting some of the pieces and putting them together
once more. If we follow the analogy, and this is always a precarious move, it would suggest that the
impulses at work in contemporary digital commonplacing practices — which have not only written in-
formation, but lived experience as the field from which fragments are culled — are deeply conserva-
tive. They would amount to an effort to impose order on the chaotic flux of life.

August 23, 2011


48. Spectrum of Attention

In March of 2015, I was invited to attend a seminar at the Institute for Advanced Studies in Culture
led by Alan Jacobs, which focused on his 79 Theses on Technology. The theses dwelt chiefly on the
theme of attention. My response, and that of others in attendance, was published on one of the Insti-
tute’s blogs, which has recently been mothballed. Consequently, I’m publishing the essay here, lightly
edited. I still rather like it. The link to Jacobs’s theses is to an expanded version published by The New
Atlantis. I encourage you to read them, if you missed them a couple of years ago. My response can be
read on its own terms, however.

Attention has become a problem, but, because of this, we have an opportunity of sorts. We may now
think about attention as if for the first time; we may see it in a new, more revealing light. And this is
exactly what Alan Jacobs’ 79 Theses on Technology helps us to do. While ranging over impressively
varied terrain, Jacobs’ propositions provoke us to consider again the question of attention in digital cul-
ture. I offer the following as a response that seeks not so much to dispute as to pursue certain paths sug-
gested by Jacobs’ theses, hopefully to fruitful ends.

“We should evaluate our investments of attention,” Jacobs urges, “at least as carefully and critically
as our investments of money.” This is undoubtedly true, and we will be in a better position to undertake
such an evaluation when we understand what exactly we are talking about when we talk about atten-
tion. Attention, after all, is one of those words we never bother to define because we assume everyone
knows exactly what is meant by it. We assume this, in part, because we conceive of attention as a natu-
ral faculty, which is experienced in the same way by everyone. But as Matthew Crawford’s recent
work has shown, attention has a history. It has been imagined, and thus experienced, differently over
time. My sense, too, is that attention names various states or activities that we might do well to distin-
guish.

More often than not, we speak of attention as the work of intently focusing on one object or task.
Reading a long, demanding text is a frequent example of attention in this sense. This is the kind of at-
tention that is sometimes described metaphorically as a searchlight. Out of the whole environment, it
singles out one narrow aspect for the mind to consider; everything else darkens around it. It is with this
sort of attention that Nicholas Carr opened his well-known Atlantic article “Is Google Making Us
Stupid?”: “Immersing myself in a book or a lengthy article used to be easy,” Carr noted, but now “my
concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking
for something else to do. I feel as if I’m always dragging my wayward brain back to the text.”

I suspect that most of us can sympathize with Carr’s plight. If so, then what we are experiencing
feels like an inability to direct our attention at will. Echoing the apostle, we might lament, “What I
want to pay attention to, I cannot. What I do not want to pay attention to, to that I do.” This failure to
direct our attention presents itself as a failure of the will, and it assumes at some level that I am, as an
autonomous subject, responsible for this failure.

But sometimes we talk about attention in a slightly different way; we speak of it as openness to the
world, without any particular focal point. Sometimes the language of presence is used to articulate this
kind of attention: are we living in the moment? It is also the sort of attention that is advocated by pro-
ponents of “mindfulness,” to which Jacobs devoted two theses:

11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity be-
cause it confines its recommendation to the cultivation of a mental stance without objects.

13. The only mindfulness worth cultivating will be teleological through and through.

On the surface, we appear to have a contradiction between these two ways of talking about attention.
Directed attention is inconceivable without an object (mental or material) to sustain it, but no object
would appear apart from an already existing form of attention. Much depends on what exactly is meant
by “mindfulness,” but I think we might be able to preserve a valuable distinction while still heeding Ja-
cobs’ critique.

It is helpful to understand “mindfulness” as a clearing of mental space that makes possible the de-
ployment of the directed, teleological form of attention. It is attention as openness to the world, atten-
tion that is ready to be elicited by the world. The telos of mindfulness in this sense, then, is the empow-
ering of directed or teleological attention. Consider again the distinction between the diversions/distrac-
tions that we seek and those that seek us. In our present digital environment, we must parry the diver-
sions that seek us out in order to deploy sustained, directed attention. It is at this point, of course, that
we may inject, as Jacobs does, a discussion of an attentional commons.

We may think of attention, then, as a dance whereby we both lead and are led. To speak of it as a
dance usefully suggests that receptivity and directedness may work in harmony. The proficient dancer
knows when to lead and when to be led, and she also knows that such knowledge emerges out of the
dance itself. This analogy reminds us, as well, that attention is the work of a unity of body and mind
making its way in a world that is solicitous of its attention. It raises what I take to be a critical question:
What difference does it make when we talk about attention to remember that we are embodied crea-
tures? We may go a little further down this path if we take a few cues from Maurice Merleau-Ponty.

In Phenomenology of Perception, Merleau-Ponty discussed the shortcomings of both empiricist and intellectualist (rationalist) approaches to attention. In the course of that discussion, he made the follow-
ing observation: “Empiricism does not see that we need to know what we are looking for, otherwise we
would not go looking for it; intellectualism does not see that we need to be ignorant of what we are
looking for, or again we would not go looking for it.”

This simultaneous knowing and not-knowing seems to me another way of talking about attention as
both openness to the world and as a directed work of the mind. It is a work of both receptivity, of per-
ceiving the world as a gift, and care, of willfully and lovingly attending to particular aspects of the
world. And, as Merleau-Ponty goes on to argue, attention is also a form of embodied perception, per-
ception that construes the world as much as it registers it. In this sense, our attention is never merely a
searchlight spotting items out in the world as they are; rather, attention is always interpreting the world
in keeping with the desires and demands of the moment.

To a hiker on a long walk, for example, a stone is a thing to step on and is registered as such without
conscious mental effort. It is attended to by the body in motion more than by the cogitating mind. To a
geologist on a walk, on the other hand, a stone may suddenly become an object of conscious intellec-
tual inquiry. Reflecting a little further on this example, we can note that both of these instances of per-
ceiving-as are the product of prior experience. The expert hiker moves along at a steady pace making
countless adjustments and course corrections as a matter of bodily habit. The geologist, likewise, has
trained his perception through hours of intellectual labor. Both the hiker and the geologist have, by
their prior experience, built up a repertoire of possible perceptions, of multiple ways of seeing-as. Mer-
leau-Ponty called this repertoire of possible perceptions the “intentional arc,” which subtends “the life
of consciousness – cognitive life, the life of desire or perceptual life.” In either situation, a novice
would not be able to hike as adroitly or perceive the geologically interesting stone; their intentional arc
sustains a less robust perceptual repertoire.

This example suggests two poles of attention—bodily and mental. It is important that we don’t con-
ceive of these as mutually exclusive binaries. Rather, they constitute a spectrum of possibilities. To-
ward one end, conscious mental activity dominates; toward the other end, non-conscious bodily activity
is paramount. Consider these ideal types as examples of each case. The person lost deep in thought, or
lost in a daydream, is deeply attentive but not to their surroundings. It is as if they have ceased to per-
ceive the world about them through their sensory apparatus. They are lost in their own world, but it is
not the world perceived by their bodies. They have to be called back to an awareness of their body and
their surroundings.

By contrast, we may imagine the athlete, musician, or dancer who is, to borrow Mihály Csíkszent-
mihályi’s formulation, “in the flow.” They, too, like the thinker or daydreamer, are in a state of deep at-
tention, but in a different mode. Conscious thought would, in fact, disrupt their state of attention. We
may complicate this picture even further by noting how the hiker in the flow, precisely because their
conscious, mental attention is disengaged, may soon find themselves lost in thought, even as they ex-
pertly navigate the terrain.

But where do our technologies come into the picture we are painting? That is, after all, where we be-
gan and where Jacobs directs our attention. I would suggest that we think of another spectrum intersect-
ing the one running from the bodily to the mental: one that runs from mediated to unmediated forms of
attention. Consider our hiker one more time. Imagine that they are now equipped with a walking stick.
Aspects of their attending to the world about which they make their way are now mediated through the
walking stick, which enters into the circuit of mind, body, and world. Of course, the walking stick is an
adept tool for this particular context. It enters rather seamlessly into the world and extends the hiker’s
perceptions in useful ways. It would be very different, for instance, if the hiker were walking about
with a hose. The walking stick extends the perceptive reach of the hiker rather than attenuating or inter-
rupting it. It provides the hiker with a firmer grip on the world given the task they are undertaking. It
strengthens and extends the intentional arc.
Imagine, however, giving the hiker a different tool in hand—a smartphone. The smartphone medi-
ates perception as well. In the act of taking a picture, for example, we have an obvious act of media-
tion: the landscape is seen through the lens. But a subtler act of mediation is at work as well, even
when the smartphone’s camera is not in use. The ability to take photographs expands (and limits) the
hiker’s perceptive repertoire—it creates new possibilities; the landscape now appears differently to our hiker. Smartphone in hand, the hiker might now perceive the world as a field of possible images. This may, for example, direct attention up from the path toward the horizon, causing even our experienced hiker to stumble. We may be tempted to say, consequently, that the hiker is no longer paying attention, that the device has distracted them. But this is, at best, only partly true. The hiker is still paying attention. In fact, they may be paying very close, directed attention, hunting for that perfect image. But their attention is of a very different sort than the “in the flow” attention of a hiker on the move. They are now
missing some aspects of the surroundings, but picking up on others. Without the smartphone in hand,
the hiker may not stumble, but they may not notice a particularly striking vista either.

So where does all of this leave us? I’d suggest that it might be helpful to think of attention in light of
the range of possibilities created by our two spectrums. Along one axis, we range from bodily to mental
forms of attention. Along the other, we range from mediated to unmediated forms of attention.
(Granted that our attention is never, strictly speaking, absolutely unmediated.) This yields a range of
possibilities among the following categories: bodily, mediated attention; bodily, unmediated attention;
mental, mediated attention; and mental, unmediated attention. Consider the following as ideal types in
each case: the musician, the dancer, the scientist, and the philosopher. This schema yields a series of
questions we may ask as we seek to evaluate our investments of attention. What kind of attention is re-
quired in this context? To what aspects of the world does a device invite me to pay attention? Does a
device or tool encourage mental forms of attention when the context is better suited to bodily forms of
attention? Is a device or tool encouraging me to direct my attention, when attentive openness would be
more useful? What device or tool would best help me deploy the kind of attention required by the task
before me?

The result of this exploration has been to break up the opposition of device to attention, an opposition that, I should say, I don’t think Jacobs himself advocates. Instead, my hope is to expand our conceptual
tool kit so that we might make better judgments regarding our devices and our attention to the world.

August 3, 2018
49. Landlines, Cell Phones, and Their Social Consequences
If you’re of a certain age, you’ll remember the pre-cellular days of household phones. One line for
everyone, and only one person on the phone at a time. Under the best of circumstances this situation
would often lead to more than a few inconveniences. In less than ideal cases, inconvenience could yield
to much worse. I’m not entirely sure what got me thinking about the place of the phone in my high
school years, but, once I started collecting memories, I began to realize that a number of experiences
and situations that were then common have disappeared following the emergence of cell phones. And,
it seems to me, not all of these transformations are altogether trivial.

For the record, my high school years were in the 1990s; cell phones were not quite rare and they had
already evolved well past the “brick” era. Yet, they were not exactly common either, and they certainly
had not displaced the landline. Beepers were then the trendy communication accessory of choice.

As I thought back to the pre-cellular era, it was the rather public nature of the landline conversation
that most caught my attention. The household phone was not a subtle creature. Placing a call to a friend
meant, in a sense, placing a call to their whole family. The ring of a phone was indiscriminate, so your call was a matter of public record. If your friend picked up, they might later be asked who it
was that called because everyone knew someone had called. If they did not pick up, then you might end
up talking to a family member, hopefully one who was kind and polite. So not, for example, a bratty
sibling or a cranky parent. Or both, since there was always the possibility that more than one person
would pick up and the awkward process of determining who the call was for and getting them and them
alone on the line would ensue.

That possibility alone, of perforce having to interact with someone other than the person you in-
tended to speak to, functioned as a form of socialization. It meant that you got to know your friend’s
family, including adults, whether you wanted to or not. Consider that it is not altogether unusual for us
now to resort to texting so as not to talk even to the person with whom we intend to communicate. Back
then, we not only aimed to talk to someone, but we ran the risk of talking to other people as well. This
strikes me as somewhat consequential.

Then, of course, there were all of those not quite licit conversations and the devious ingenuity they
occasioned. For example, aiming to talk past a curfew or after other members of the family had gone to
bed, one would arrange a set time for the call and then sit waiting with hand on phone, maybe even fin-
ger on hook, in order to pick up the call at the very first vibration of sound. Or the more serious variety,
which often involved the maintenance of unacknowledged and disapproved relationships. Again, if you
are of a certain age, I suspect you will be able to supply a number of anecdotes on that score.

This dynamic was recently dramatized in the series Mad Men, set in the early 1960s: both Don and Betty Draper maintain illicit relationships, and their phone calls, placed and received, constantly threaten to unravel their secrets.

Also, the landline was public not only in that it made phone calls a matter of public notice but also in that it was a shared resource. If you were on the phone, someone else could not be; some equitable system of sharing this resource, which was at times in heavy demand, would need to be devised. The diffi-
culty of arriving at such an equitable distribution was, naturally, directly proportional to the number of
teenagers in the house.

All of this together led me to recall the distinctions Hannah Arendt drew in her hefty book, The
Human Condition, among the private, public, and social realms. I want to borrow these distinctions to
think about the differences between landlines and cell phones, but I won’t be using the terms in quite
the same way that she does. On one point, though, I do want to track more closely to her usage, and
that is her conception of what constitutes the public realm: disclosure. The public realm was one in
which individuals acted in such a manner that they disclosed themselves to others and were, in turn, ac-
knowledged by others. The public realm was a function of scale. Its scale was such that the individual
acted among many, but not so many that identity was lost and action rendered unintelligible.

The social realm featured a multiplicity of individuals as well — it was not private — but it took
place on a mass scale and even though (or, because) it included multitudes, it was, in fact, a realm of
anonymity — its image was the faceless crowd. This differentiation between the public and the social
is especially useful now that the digital social realm has emerged over the last decade. Even though we
can’t simply elide what we call social media with Arendt’s social realm, the awareness of a distinction
among ways of not being by oneself is all the more important.

In Arendt’s analysis, what counted as the private realm shifted its terms according to whether it was
paired with the public or social. In relation to the public realm, the private was the relative seclusion of
the household, a publicly respected zone. But as the household itself became a province of the social, pri-
vacy was reconfigured as anonymity.

Consider the landline an instance of the public dynamic and the cell phone a manifestation of the so-
cial dynamic, loosely following Arendt’s model. For all the reasons listed above, the landline brought
the user into public view. It entailed a necessary appearing in the midst of others, the taking of a certain
responsibility for one’s actions, the negotiation of rights to a shared resource, and it yielded a privacy
that had to be granted by others rather than seized by seclusion.

On that last point, consider that while one could lock oneself in one’s room to have some privacy, the holy grail of teenage life back then, this privacy could rather easily be violated through numerous forms of eavesdropping. To be actualized, this privacy had to be conceived of as a transaction of public trust.

By contrast, the cell phone allows for a form of privacy that is closer to mere anonymity than to a publicly acknowledged and respected right. The cell phone also encourages concealment, rather
than disclosure. If my phone is silenced, there is hardly any necessary reason why anyone would know
that I have received a call, and if I require privacy I simply take myself and my phone where no one
can hear me. I absent myself, I make myself disappear and consequently make no claims upon the ci-
vility or trust of others in order to have my privacy. What’s more, the cell phone is typically not shared
materially, even though something abstract, like minutes, may be shared in a family plan. No limits are
therefore placed on use of the resource, at least for those who can afford high-end plans.

If we take the habits of phone use to be a practice that reinforces certain ways of being, then the dif-
ferences between the landline and the cell phone are not insignificant. Landlines yielded a public self,
constituted privacy as a right premised upon public virtues, and instilled a sense of limits that come
from the use of a shared and bounded resource. Cell phones, by contrast, yield an anonymous self, con-
stitute privacy as a function of anonymity and dis-appearing, and instill habits of unbounded and un-
limited consumption.

September 20, 2011


50. A Chance to Find Yourself

At The American Scholar you can read William Deresiewicz’s lecture to the plebe class of 2009 at
West Point. The lecture is titled “Solitude and Leadership” and it makes an eloquent case for the neces-
sity of solitude, and solitary reading in particular, to the would-be leader.

Throughout the lecture, Deresiewicz draws on Joseph Conrad’s Heart of Darkness, and near the
end of his talk he cites the following passage. Speaking of an assistant to the manager of the Central
Station, Marlow observes:

“I let him run on, this papier-mâché Mephistopheles and it seemed to me that if I tried I could poke
my forefinger through him, and would find nothing inside but a little loose dirt. . . .

It was a great comfort to turn from that chap to . . . the battered, twisted, ruined, tin-pot
steamboat. . . . I had expended enough hard work on her to make me love her. No influential friend
would have served me better. She had given me a chance to come out a bit—to find out what I could
do. No, I don’t like work. I had rather laze about and think of all the fine things that can be done. I
don’t like work—no man does—but I like what is in the work,—the chance to find yourself. Your own
reality—for yourself, not for others—what no other man can ever know.”

Much to think about in those few short lines. “Papier-mâché Mephistopheles” — what a remarkably
apt image for what Arendt would later call the banality of evil. It is also worth reflecting on Conrad’s
estimation of work in this passage. He evocatively captures part of what I’ve tried to describe in my
posts on the discontents of the frictionless life and disposable reality.

It was, however, the line “for yourself, not for others” that struck me with peculiar force. I’ve writ-
ten here before about the problems with solipsistic or misanthropic individualism. And it should go
without saying that, in some important sense, we certainly ought to think and act for others. But I don’t
think this is the sort of thing that Conrad had in mind. Perhaps he was driving at some proto-existentialist pour soi. In any case, what came to my mind was the manner in which a life mediated by social media and smartphones is lived “for others”.
Let me try to clarify. The mediated variety of being “for others” is a form of performance and pre-
sentation. What we are doing is constructing and offering an image of ourselves for others to consume.
The pictures we post, the items we Like, the tweets we retweet, the status updates, the locations we an-
nounce on Foursquare, the music we stream, and dare I say it, the blog posts we write — all of these
are “for others” and, at least potentially, “for others” without real regard for them. Others, in the worst
forms of this dynamic, are merely an audience that can reflect back to us and reinforce our performance
of ourselves. In being “for others” in this sense, we risk being “for ourselves” in the worst way.

There is another, less problematic way of being “for others”. At the risk of oversimplifying, let’s
call this an unmediated way of being “for others”. This mode of being for others is not self-consciously
focused on performance and presentation. This way of being for others does not reduce others to the
status of mirrors reflecting our own image back to us. Others are in this case an end, not a means. We
lose ourselves in being for others in this way. We do not offer ourselves for consumption, but we are
consumed in the work of being for others. The paradox here is that those who are able to lose them-
selves in this way tend to have a substantial and steady sense of self. Perhaps because they have been
“for themselves” in Conrad’s sense, they have nurtured their convictions and character in solitude so
that they can be for others in themselves, that is, “for others” for the sake of others.

Those who are for others only by way of being for themselves finally end up resembling Conrad’s
papier-mâché Mephistopheles, we could poke our fingers through them and find nothing but a little
dirt. All is surface.

Altogether, we might conclude that there is an important difference between being for others for the
sake of being for yourself and being for yourself for the sake of being for others.

The truth, of course, is that these modes of being “for others” are not new and the former certainly
does not owe its existence uniquely to social media. The performed self has roots in the emergence of
modernity, and this mode of being for others has a family resemblance to flattery, which has an even
older pedigree. But ubiquitous connectivity and social media do the work of amplifying and generaliz-
ing the condition. When their use becomes habitual, when we begin to see the world as potential mate-
rial for social media, then the space to be for ourselves/by ourselves collapses and we find that we are
always being for others for our own sake, preoccupied with the presentation of surfaces.
The consequences of this mode of being are good neither for us, nor for others.

January 5, 2012
51. What Do I Like When I "Like" On Facebook

By one of those odd twists of associative memory, John Caputo’s little book, On Religion, came to
mind today. Specifically, I recalled a particular question that he posed in the opening pages.

“So the question we need to ask ourselves is the one Augustine puts to himself in the Confessions,
“what do I love when I love God?,” or “what do I love when I love You, my God?,” as he also puts it,
or running these two Augustinian formulations together, “what do I love when I love my God?”.

I appreciate this formulation because it forces a certain self-critical introspection. It refuses us the
comforts of thoughtlessness.

A little further on, Caputo takes the liberty of putting his words to the spirit of Augustine’s quest:

“… I am after something, driven to and fro by my restless search for something, by a deep de-
sire, indeed by a desire beyond desire, beyond particular desires for particular things, by a de-
sire for I-know-not-what, for something impossible. Still, even if we are lifted on the wings of
such a love, the question remains, what do I love, what am I seeking?”

Then Caputo makes an important observation:

“When Augustine talks like this, we ought not to think of him as stricken by a great hole or lack
or emptiness which he is seeking to fill up, but as someone overflowing with love who is seek-
ing to know where to direct his love. He is not out to see what he can get, but out to see what he
can give.”

Not too long ago I posted some thoughts on what I took to be the Augustinian notes sounded in Matt
Honan’s account of his time at the Consumer Electronics Show in Las Vegas. In that post, “Hole In
Our Hearts,” I employed the language Caputo cautioned against, but I’m now inclined to think that Ca-
puto is on to something. His distinction is not merely academic.
Plummeting, perhaps, from the sublime to the … what to call it, let us just say the ordinary, this for-
mulation somehow triggered the question, “what do I like when I “Like” on Facebook?” Putting it this
way suggests that what I like may not, in fact, be what I “like”. The question pushes us to examine why
it is that we do what we do in social media contexts (Facebook being here a synecdoche).

Very often what we do on social media platforms is analyzed as a performance or construction of the self. On this view, what we are doing is giving shape to our identity. What we like, if you will, is
the projected identity, or better yet, the perception and affirmation of that identity by others. This, of
course, does not exhaust what is done with social media, but it is a significant part of it.

There are, remembering Caputo’s distinction, two ways we might understand this. Caputo distin-
guished between love or desire understood as a lack seeking to be filled and love or desire understood
as a surplus seeking to be expended. This distinction can be usefully mapped onto the motivations driv-
ing our social media activity.

When we think about social media as a field for the construction and enactment of identities, we
tend to think of it as the projection, authentic or inauthentic, of a fixed reality. Perhaps we would do
well to consider the possibility that identity on social networks is not so much being performed as it is
being sought, that behind the identity-work on social media platforms there is an inchoate and fluid re-
ality seeking to take shape by expending itself.

The entanglement of our loves (or, likes) and our identity on social media has, it turns out, an an-
tecedent in the Augustinian articulation of the human condition. Caputo went on to note that the ques-
tion of what we love is also bound up with another Augustinian query:

“Augustine’s question — “what do I love when I love my God?” — persists as a life-long and
irreducible question, a first, last, and constant question, which permanently pursues us down the
corridors of our days and nights, giving salt and fire to our lives. That is because that question is
entangled with the other persistent Augustinian question, “who am I?” …

What we love and desire and who we are — these two are bound up irrevocably with one another.

“I have been made a question to myself,” Augustine famously declared. And so it is with all of us.
The problem with our talk about the performance of identity is that it tends to tacitly assume a fixed
and knowing identity engaging in the performance. The reality, as Augustine understood, is more com-
plex and whatever it is we are doing online is tied up with that complexity.

February 18, 2012


52. Facebook and Loneliness

In 2008, Nick Carr’s article in The Atlantic, “Is Google Making Us Stupid?”, touched off a lively
and still ongoing debate about the relative merits of the Internet. Of course, the title was a provocation
and perhaps played a role in generating initial interest in the piece. I’ve often wondered whether that
was Carr’s own choice for a title or if an editor with the magazine slapped it on as link bait. In any
case, I tend to think it does the essay as a whole a disservice. It suggested a straw man to readers before
they read the first word of the article. Having used the piece in a variety of classes that I’ve taught, I’m
struck by how often readers respond to the title rather than Carr’s argument in the body of the essay.

In this month’s issue, The Atlantic has once again published a cover story bearing a strikingly simi-
lar title — “Is Facebook Making Us Lonely?” by novelist Stephen Marche. I suppose it was too tempt-
ing to pass up.

This time around, however, the title is at best generically provocative and, more likely, predictably
lame. And, as with Carr’s piece, it threatens to obscure the argument.

Take, for example, the quite interesting response to Marche by sociologist Zeynep Tufekci. In a blog
post, she takes on the article’s title more so than the contents of the article. Or so it seems to me.
Tufekci emphasizes the need to rely on empirical research and she cites a number of studies that fail to
find a causal connection between social media and loneliness. In fact, studies suggest that on the whole
social media users report lower rates of loneliness than non-users.

But as I read (and reread) Marche’s article, I failed to find Marche himself advocating such a causal
connection. In fact, at several points Marche is quite clear in denying that social media (since Face-
book, like Google in Carr’s article, stands in for a larger reality) causes loneliness. At the outset of the
last main section of the article, Marche writes: “Loneliness is certainly not something that Facebook or
Twitter or any of the lesser forms of social media is doing to us. We are doing it to ourselves.”

That seems pretty straightforward to me.


In fact, Marche and Tufekci seem to be in broad agreement. Both agree that individuals have be-
come more isolated over the course of the last few decades. Tufekci cites three studies to that effect:

“We are, on average, more isolated, at least in terms of strong ties. Three separate studies say
so–and as we say in social sciences, once is a question, twice is a coincidence, thrice is a find-
ing. (That is the General Social Survey with follow-up here, Pew Internet studies written up by
Keith Hampton (with others) and a recent study by Matt Brashears).”

Marche makes the same point; in fact, I would suggest that Marche’s essay is really about this broad
trend toward loneliness and isolation that predates the rise of social media. It is true that Marche clearly
thinks Facebook is less than an ideal antidote to this loneliness and that it engenders certain problem-
atic forms of socialization, but he does not claim that social media is making us lonely. It is the unfor-
tunate title that suggests that.

The more interesting part of Tufekci’s response lies in her notion of cyberasociality, which she de-
fines as “the inability or unwillingness of some people to relate to others via social media as they do
when physically-present.” Happily, Tufekci links to an unpublished paper in which she lays out her
case for the existence of cyberasociality. She draws on an analogy to dyslexia to argue that some peo-
ple may have an inherent inability to socialize via text-based media. As she acknowledges, this is some-
thing she is still “working through empirically and conceptually,” but it is certainly an intriguing possi-
bility.

Interestingly, at one point in Marche’s essay he himself appears to acknowledge as much. While dis-
cussing the work of Moira Burke—which (again) he himself notes “does not support the assertion that
Facebook creates loneliness”—Marche ventures the following introspective confession: “Perhaps it
says something about me that I think Facebook is primarily a platform for lonely skulking.” Perhaps. If
so, Tufekci may already be working on the theory that explains why.

The real issue, it seems to me, is not whether Facebook makes us lonely, but whether Facebook is
reconfiguring our notions of loneliness, sociability, and relationships. These are not, after all, static
concepts. Here is where I think Marche raises some substantial concerns that are unfortunately lost
when the debate goes down the path of determining causality.
What Facebook offers is the dream of managing the social and curating the self, and we seem to ob-
sessively take to the task. The asynchronicity of Facebook is rather safe, after all, when compared to
the messy and risky dynamics of face-to-face interactions, and we naturally gravitate toward this sort of
safety. I suspect this is in part also why we would sometimes rather text than call and, if we do call,
why we hope to get sent to voicemail. It seems reasonable to ask whether we will be tempted to take
the efficiency and smoothness of our social media interactions as the norm for all forms of social inter-
action.

Finally, it seems to me that we should draw a distinction among desires that are bundled together
under the notion of loneliness. There is, for example, a difference between the desire for companion-
ship (and distinctions among varieties of companionship) and the desire simply to be noticed or ac-
knowledged. C. S. Lewis, eloquent as per usual, writes:

“We should hardly dare to ask that any notice be taken of ourselves. But we pine. The sense
that in this universe we are treated as strangers, the longing to be acknowledged, to meet with
some response, to bridge some chasm that yawns between us and reality, is part of our incon-
solable secret.”

Among Facebook’s more problematic aspects, in my estimation, is the manner in which the platform
exploits this desire with rather calculated ferocity. That little red notifications icon is our own version
of Gatsby’s green light.

April 17, 2012


53. Bodies in Conversation

When she published her NY Times op-ed, “The Flight From Conversation,” Sherry Turkle was
quickly dismissed and lampooned by critics. Clearly Turkle had struck a nerve. Critics noted that
Turkle presented a false dichotomy. Conversations can still happen even in a world that includes social
media and text messaging. This is true in principle, of course. And, in principle, I suspect Turkle would
agree. But I’m not sure this is really the best way of approaching these sorts of concerns.

Perhaps it would be better to reframe the issue in terms of presence. Granting that, in the abstract,
the use of electronic forms of communication does not necessarily preclude the possibility of conversa-
tion, and granting, of course, that not every conversation is nor ought to be of the deep and absorbing
variety, it seems worthwhile to explore how actual instances of face-to-face conversation might be af-
fected by the kinds of technology Turkle has in view.

And to narrow things even further, I’ll focus on the cellular phone. It is, after all, the cellular
phone that materializes electronic communication across the whole field of our experience and it is the
materiality of the cellular phone that presents itself in the context of face-to-face conversation.

It seemed to me that Turkle’s concerns were strongest when they dealt with the manner in which
technology impinges on face-to-face communication. And on this point many of her critics agreed with
her concerns even while they disagreed with the manner in which they were packaged. This is also the
aspect of Turkle’s work that I think contributes to its obvious resonance. After all, much to her critics’
bemusement, the threaded comments seemed mostly to validate Turkle’s point-of-view.

It is easy to see why. Most of us have been annoyed by someone who was unable to give another
human being their undivided attention for more than seconds at a time. And perhaps more significantly,
most of us have felt the pull to do the same. We have struggled to keep our attention focused on the person
talking to us as we know we ought to because some shred of our humanity remains intact and we know
very well that the person in front of us is more significant than the text that just made our phone vibrate
in our pocket. We have been on both ends of the kind of distractedness that the mere presence of a
smartphone can occasion and we are alive enough to be troubled by it. We begin to feel the force of Si-
mone Weil’s judgment: “Attention is the rarest and purest form of generosity.”

And so Turkle’s piece and others like it resonate despite the theoretical shortcomings that make cer-
tain scholars cringe. What difference does it make that some study showed that a statistically signifi-
cant portion of the population reports feeling less lonely when using social media if I can’t get the per-
son standing two feet away from me to treat me with the barest level of decency?

The question remains, however, “Are smartphones at fault?” This is always the question. Is Google
making us stupid? Is Facebook making us lonely? Are smartphones ruining face-to-face conversation?
Put that way, I might say, “No, not exactly.” That’s usually not the best way of stating the question.
Rather than begin with a loaded question, perhaps it’s better simply to seek clarity and understanding.
What is happening when cellular phones become part of an environment that also consists of two peo-
ple engaged in conversation?

Out of the many possible approaches to this question, it is the path offered by Merleau-Ponty’s no-
tion of the “intentional arc” that I want to take. Merleau-Ponty writes:

“The life of consciousness – cognitive life, the life of desire or perceptual life – is subtended by an
‘intentional arc’ which projects round about us our past, our future, [and] our human setting ….”

Hubert Dreyfus, a philosopher whose work has built on Merleau-Ponty’s, adds this explanatory
note:

“It is crucial that the agent does not merely receive input passively and then process it. Rather, the
agent is already set to respond to the solicitations of things. The agent sees things from some perspec-
tive and sees them as affording certain actions. What the affordances are depends on past experience
with that sort of thing in that sort of situation.”

Here’s what all of this amounts to. The “intentional arc” describes the manner in which our experi-
ence and perception are shaped by what we intend. Intending here means something more than what we
mean when we say “I intended to get up early” or “I intend to go to the store later.” Intention in this
sense refers in large measure to a mostly non-conscious work of perceiving the world that is shaped by
what we are doing or aim to do. Our perception in other words is always already interpreting reality
rather than simply registering it as a pure fact or objective reality.

This work of perception-as-interpretation builds up over time as an assortment of “I cans” carried or remembered by our bodies. In this way the assortment becomes part of the background, or pre-under-
standing, that we bring to bear on new situations. And this is how our intentional arc “projects round
about us our past, our future.”

What is particularly interesting for our purposes is how the insertion of a tool into our experience re-
configures the “intentional arc” supporting our experience. The phenomenon is neatly captured by the
expression, “To a man with a hammer everything looks like a nail.” This line suggests that how we per-
ceive our environment is shaped by the mere presence of a tool in hand. (Notice, by the way, how this
“effect” is registered even before the tool is used.)

Merleau-Ponty might analyze the situation as follows: The feel of a hammer in hand, especially
given prior use of a hammer, transforms how the environment presents itself to us. Aspects of the envi-
ronment that would not have presented themselves as things-to-be-struck now do. Our interpretive per-
ception interprets differently. Our seeing-as is altered. New possibilities suggest themselves. The affor-
dances presented to us by our environment are re-ordered.

Try this at home: go pick up a hammer, or for that matter any object you can hold in hand that is weighted on one end. See what you feel. Hold it and look around you and pay really close attention to the way you perceive these objects. Actually, on second thought, don’t try this at home.

Another example, perhaps more readily apprehended (and less fraught with potential danger) is of-
fered by the camera. With camera in hand our environment presents itself differently to us. I would go
so far as to suggest that we see differently when we see with camera in hand. The concrete objectivity
of the world has not changed, but the manner in which our perception interprets the world has; and this
change was effected by the presence of a tool in hand (even prior to its use).

In this sense, the tool does have a certain causal force: it causes the environment to present itself dif-
ferently to the user. It may not cause action, but it invites it. It causes the environment to hail the user
in a new way.
Returning to the situation with which we began, we can ask again how the presence of a smartphone
reconfigures face-to-face conversation. How does it alter the intentional arc that subtends the act of
conversation? I first began thinking through this question by focusing on the phone itself, but this ap-
proach foreclosed itself; it wasn’t proving to be very helpful to me. But then I thought about the act of
conversation itself and the question of presence. What would it mean to be fully present to one another
and what difference would this make for the act of conversing?

I realized then that the really interesting dynamic involved what two people offered to one another
in the act of conversing face-to-face. Presence was not a uni-directional phenomenon involving the in-
tentionality of each partner individually. Presence was not something one person achieved. Rather, pres-
ence emerged from the manner in which the act of conversation coupled the intentionality of each indi-
vidual. To borrow Merleau-Ponty’s lingo (and give it my own somewhat sappy twist), two intentional
arcs come together to form a circle of presence.

Merleau-Ponty spoke of our body’s natural tendency to seek an “optimal grip” on our environment.
In face-to-face conversation, our bodies seek an optimal grip as well. While our conscious attention is
focused on words and their meaning, our fuller perceptive capabilities are engaged in reading the whole
environment. In conversation, then, each person becomes a part of a field of communication that in-
cludes, but is not limited to, verbal expression. To put it another way, our intentional arc includes acts
of interpretative perception of the other’s body as well as words.

When we perceive eyes and hands, facial gestures and posture, we perceive these not merely as eyes,
hands, etc., but as eyes that signify, hands that mean, etc. We are attuned to much more than the words
a person offers to us. Conversation involves the whole body in an act of holistic communication. And
much of that communication is perceived by us at a non-conscious level; perceiving these dynamics be-
comes a part of our pre-understanding applied to the act of conversation.

But this dynamic that enriches and shapes face-to-face communication depends on each person of-
fering themselves up to be read in certain ways. Our attention intends the other’s body as a nexus of com-
munication, but when the other’s body is not engaged in the act of conversation, dissonance results and
presence is broken.
Back to the smartphone. When the smartphone enters into the dynamic it disrupts the body’s com-
municative patterns. Gestures, eye contact, posture, facial expression — all of it is altered. It no longer
means in the way our body is used to perceiving meaning. Perception finds it impossible to achieve
an optimal grip on the embodied interaction. And because our bodies give and receive this sort of com-
munication tacitly and often in remarkably subtle ways, we may not be conscious of this dissonance in
the act of conversation. We may only register a certain feeling of being out of sync, a certain feeling
that something is off. Presence fails to emerge and conversation, of the sort that Turkle champions, in-
deed, of the sort we all acknowledge as one of the great consolations offered to us in this world — that
kind of conversation becomes more difficult to achieve. Given the bodily dimensions of face-to-face
conversation, I’m not sure it could be otherwise.

It is not “social media” in some abstract, generic form or the practice of texting in general that
threatens conversation. It is the concrete materiality of the device entering into the intentional arcs of
our perceiving and meaningful bodies engaged in face-to-face communication that is problematic.

May 1, 2012
54. The Allegory of the Cave for the Digital Age

You remember Plato’s famous allegory of the cave. Plato invites us to imagine a cave in which pris-
oners have been shackled to a wall unable to move or turn their heads. On the wall before them are
shadows that are being cast as the light of a fire shines on people walking along a path above and be-
hind the prisoners. Plato asks us to consider whether these prisoners would not take the shadows for re-
ality and then whether our own situation is not quite similar to that of these prisoners.

So far as Plato is concerned, sense experience cannot reveal to us what is, in fact, real. What is real,
what is true, what is good exists outside the material realm and is accessible chiefly by the operations
of the mind acting independently of sense experience.

In the allegory, Plato imagines that one of the prisoners has managed to free himself. He turns
around and sees the fire that has been casting the light and the people whose shadows he had mistaken
for reality. It is a painful experience, of course, chiefly because the fire dazzles the eyes. Plato also tells
us that this same prisoner manages with much difficulty to climb out of the cave in order to see the
light of the sun and behold the sun itself. That experience is analogous to the experience of one, who
through the exercise of reason and contemplation, has attained to behold the highest order of being, the
form of the good.

The allegory of the cave, odd as it might strike us, memorably exemplifies one of Plato’s enduring
contributions to what we might think of as the Western imagination. It posits the existence of two
worlds, as it were: one material and one immaterial, the former accessible to the senses, the latter not.
In Plato’s account, it is the philosopher who takes it upon himself, like the man who has escaped the
cave, to discover the truth of things.

I thought of the allegory of the cave as I read about an online service I wrote about a few days ago,
Predictim, which promises to vet potential babysitters by analyzing their social media feeds. The ser-
vice is representative of a class of tools and services that claim to leverage the combined power of data,
AI, and/or algorithms in order to arrive at otherwise inaccessible knowledge, insight, or certainty. They
claim, in other words, to lead us out of the cave to see the truth of things, to grant us access to what is
real beyond appearances.

Only in the new allegory, it is not the philosopher who is able to ascend out of the cave, it is the data
analyst or, more precisely, the tools of data gathering and analysis that the data analyst himself may not
fully understand. Indeed, the knowledge gained is very often knowledge without understanding. This
is, of course, an element of the residual but transmuted Platonism: the knowledge is inaccessible to the
senses not because it lies in an immaterial realm, unless one conceives of information or abstracted data
as basically immaterial, but because it requires the encompassing of data so expansive no one mind can
comprehend it.

Arendt noted that the two-tiered Platonic structure was subject to a series of inversions in the 19th
century, most notably at the hands of Kierkegaard, Marx, and Nietzsche. But, as she points out, they
cannot altogether escape Plato because they are working within the tensions he generated. It might be
better to imagine, then, that the two-tiered world was simply immanentized: the dual structure was
merely brought into this world. Rather than conceiving of an immaterial realm that transcends the ma-
terial realm, we can conceive of the material realm itself divided into two spheres. The first is generally
accessible to our senses, but the second is not.

As Arendt herself observes, modernity was characterized in part by this shattering of confidence in
our perceptive capacities. The world was not the center of the universe, for example, as our senses sug-
gested, and it turned out there were whole worlds, telescopic and microscopic, which were inaccessible
to our unaided perceptive apparatus. Indeed, Arendt and others have suggested that in the modern
world we claim to know only what we can make.

We might say that the border of the two-tiered world now runs through the self. Data and some magic algo-
rithmic sauce is now the key that unlocks the truth about the self. It’s knowledge that others seek about
us and it is knowledge we also seek about ourselves. My main interest here has not been to question the
validity of such knowledge, although that’s a necessary exercise, but to note the assumptions that make
the promises of data analytics plausible. One of those assumptions seems to be the belief that the truth,
about the self in this case, lies in a realm that is inaccessible to our ordinary means of perception but
accessible by other esoteric means.
Regarding the validity of the knowledge of the self to be gained, I would suggest the only self we’ll
discover is the self we’ve become as we have sought the self in our data. In this respect then, the Pla-
tonic structure is modernized: it is not ultimately about beholding and knowing what is there but about
knowing what we are fabricating. The instruments constitute the reality they purport to reveal, both by
my awareness of them and by their delimiting the legible aspects of the self. The only self I can
know on these terms is the self that I am in the process of creating through my tools. In some non-triv-
ial way it would seem that the work of climbing out of the cave is simultaneously the work of creating
a new cave in which we will dwell.

December 1, 2018
55. The Exhausting Work of Unremitting Self Presentation
In The Presentation of Self in Everyday Life, Erving Goffman suggested that we understand our social
interaction by analogy to the theater. When we interact directly with others in their presence, it is as if
we are actors on stage. On the stage, we are engaged in the work of impression management—trying to
manage how we are perceived by controlling the impressions we give—the particular shape of which
depends on the audience. But, in keeping with the analogy, we also have a back stage. This is where we
are no longer immediately before a public audience. In our back stage area others may be present, but,
if they are, they constitute a more intimate, familiar audience before which we are more at ease, some
might say more ourselves. In our back stage area, we are able to let down our guard to some significant
degree.

Goffman’s examples are all rather concrete and grounded in face-to-face experience. For example,
for restaurant workers the kitchen is the back stage to the dining area’s front stage. Part of what I argue
in the Real Life piece is that we can usefully extend Goffman’s analysis to the experience of the self on
social media, especially when sustained by ubiquitous mobile devices. The idea is that we are now al-
ways potentially on the front stage, relentlessly managing impressions. When the stage is virtual, in
other words, it is potentially everywhere. There is no backstage, or, to put it more moderately, the front
stage begins to colonize what used to be backstage time and space. What I might’ve done a better job
of explaining in the essay is that front stage work amounts to a practice of the self, a practice that be-
comes habitual and formative. It’s not so much that we internalize any one performance but that we in-
ternalize the performative mode.

But it wasn’t Goffman’s analogy to the theater that I spoke of intuiting at the outset of this post,
rather it was the analogy between Goffman’s dramaturgy and medieval carnival. Briefly stated, certain
medieval festivals and carnivals had the function of relieving, if only temporarily, the burden and pres-
sure of living a holy life. During these festivals or carnivals traditional roles were reversed, conven-
tional pieties were overturned, even the sacred was profaned. All of it, mind you, ultimately in the ser-
vice of the established order, more or less.
Charles Taylor, who discusses medieval carnivals at some length in his history of Western secularism,
cites a medieval French cleric who explains the inversions and apparent profanations of carnival this
way:

“We do these things in jest and not in earnest, as the ancient custom is, so that once a year the
foolishness innate in us can come out and evaporate. Don’t wine skins and barrels burst open
very often if the air-hole is not opened from time to time? We too are old barrels ….”

As Taylor notes, the French cleric did not think in terms of blowing off steam, a metaphor more at
home in the industrial age, but that’s essentially his point as we might put it today.

In his discussion, Taylor draws on Victor Turner’s treatment of carnival in his work on the ritual process. In Turner’s view, medieval carnival is just one manifestation of a widespread phenomenon:
the relationship between structure and anti-structure.

Taylor summarizes what Turner means by structure this way: “the code of behavior of a society, in
which are defined the different roles and statuses, and their rights, duties, powers, vulnerabilities.”
Consequently, Taylor writes, “Turner’s point is that in many societies where this code is taken per-
fectly seriously, and enforced, even harshly most of the time, there are nevertheless moments or situa-
tions in which it is suspended, neutralized, or even transgressed.” But why?

Taylor notes again the “blowing off steam” hypothesis. If you don’t find a way to relieve the pres-
sure within the relative safety of semi-sanctioned ritual, then you will get more serious, uncontrolled,
and violent eruptions. But Taylor also notes an alternative or possibly complementary hypothesis
present in Turner’s work: “that the code relentlessly applied would drain us of all energy; that the code
needs to recapture some of the untamed force of the contrary principle.”

Coming back, then, to my intuited analogy, it goes something like this: carnival is to the ordinary
demands of piety in medieval society as, in contemporary society, the back stage is to the front stage
relative to identity work.

It’s not a perfect analogy. Indeed, I confess that I may be stretching a bit to make it work. It really
only focuses on one aspect of the backstage experience as Goffman theorized it: the backstage as a
space to let one’s guard down, to relieve the pressures of a constantly calibrated performance before an
ill-defined virtual audience, to blow off some steam.

Nonetheless, I think there’s something useful in the approach. The main idea that emerged for me
was this: in our contemporary, digitally augmented society the mounting pressure we experience is not
the pressure of conforming to the rigid demands of piety and moral probity, rather it is the pressure of
unremitting impression management, identity work, and self-consciousness. Moreover, there is no car-
nival. Or, better, what presents itself as a carnival experience is, in reality, just another version of the
disciplinary experience.

Consider the following.

First, the early internet, Web 1.0, was a rather different place. In fact, a case could be made for the
early internet being itself the carnivalesque experience, the backstage where, under the cloak of
anonymity, you got to play a variety of roles, try on different identities, and otherwise step out of the
front stage persona (“on the internet nobody knows you are a dog,” Sherry Turkle’s Life on the Screen:
Identity in the Age of the Internet, etc.). As our internet experience, especially post-Facebook, became
more explicitly tied to our “IRL” identity, then the dynamic flipped. Now we could no longer experi-
ence “life on screen” as anti-structure, as backstage, as a place of release. Online identity and offline
identity became too hopelessly entangled. Confusion about this entanglement during the period of tran-
sition accounts for all manner of embarrassing and damaging gaffes and missteps. The end result is that
the mainstream experience of the internet became an expansive, always on front stage. A corollary of
this development is the impulse to carve out some new online backstage experience, as with fake Insta-
gram accounts or through the use of ephemeral-by-design communication of the sort that Snapchat pio-
neered.

Indeed, this may be a way of framing the history of the internet: as a progression, or regression,
from the promise of a liberating experience of anti-structure to the imposition of an unprecedentedly ex-
pansive and invasive instrument of structure. Many of our debates about the internet seem to be use-
fully illuminated by the resulting tension. Perhaps we might put it this way: the internet becomes an in-
strument of structure on a massive scale precisely by operating in the guise of an anti-structure. We are
lured, as it were, by the promise of liberation and empowerment only to discover that we have been en-
snared in a programmable web of discipline and control.
Second, maybe the analogy that occurred to me is more straightforward than I first imagined. My
initial focus, given the essay I was working on, involved the experience of hypertrophied self-con-
sciousness. So the analogy in this light operated at a sort of meta level. No real moral code was in-
volved, only the psychic burden of constant identity management. But maybe there is a moral code in-
volved. Of course, there’s a moral code involved! Our experience of social media can be infamously surveilled and policed. Undoubtedly, there is pressure to conform to ever-evolving stan-
dards regulating speech and expression, for example. This pressure manifests itself through blunt in-
struments of enforcement (blocking, harassment, doxxing, etc.) or more tacit mechanisms of reward.
Either way, it is not a stretch to say that we negotiate the demands of an emerging, perhaps ever-emerg-
ing moral code whenever we use online platforms. We might even say that the disciplinary character of
the social media activity takes on an oddly ritualistic quality, as if it were the manifestation of some an-
cient and deep-seated drive to cleanse the social body of all forms of threatening impurities.

But it’s one thing to conform to a standard to which you more or less assent and which arises from a
community you inhabit. It’s quite another to conform to a standard you don’t even buy into or maybe
even resent. This is basically the case on many of our most popular digital forums and platforms. They
gather together individuals with disparate, diverging, and conflicting moral, political, religious stances,
and they thrust these individuals into meta-communities of performative and competitive display. Not
surprisingly, interested parties will take recourse to whatever tools of control and discipline are avail-
able to them within the structures of the platforms and forums that sustain the meta-community. The re-
sult, again, is a disciplinary experience in a space that was assumed to be liberating and empowering.

Taylor is helpful on this score as well. He tells a long and complex story, so I won’t do justice to it
here, but one key concept he deploys is what he calls the nova effect. The nova effect, in Taylor’s anal-
ysis, is the explosion of possible and plausible options regarding the good life that emerge in the mod-
ern world. The result is, of course, experienced as freedom and liberation, but also as fragmentation
and fragilization of the self. Social media, it seems to me, dramatically intensifies the nova effect. It
brings us into a space where we become aware of and interact with an exponentially greater variety of
perspectives, stances, and forms of life within structures that foreground the performative experience of
the self, which only accents its sense of fragility.

Think, then, of the dark perfection of a structure that has convinced us it is really an anti-structure, a
front stage that invites us to think of it as a back stage. We end up unwittingly turning to the very
source of our exhaustion, anxiety, burnout, and listlessness for release and relief from the same. The re-
sult is recreation without rest, familiarity without intimacy, play without joy, laughter without mirth,
carnival without release—in short, the feeling that society is on the brink of exploding and the self is on
the brink of imploding.

March 14, 2019


PART IV

Technology and Society


56. Resisting Disposable Reality

Technology and consumerism coalesce to create disposable reality. Let’s try that idea on for a mo-
ment by drawing together observations made about each by Albert Borgmann and William Cavanaugh
respectively.

Writing about technological culture, Borgmann distinguished between devices, characterized by a "commodious," accessible surface and a hidden, opaque machinery below the surface, on the one hand, and what he calls focal things on the other. Devices are in turn coupled with consumption and focal things are paired with focal practices. Focal things and practices, according to Borgmann, "gather our world and radiate significance in ways that contrast with the diversion and distraction afforded by commodities." In short, we merely use devices while we engage with focal things.

With those distinctions in mind, Borgmann continues, “Generally, a focal thing is concrete and of
commanding presence.” A commanding presence or reality is later opposed to “a pliable or disposable
reality.” Further on still, Borgmann writes, “Material culture in the advanced industrial democracies
spans a spectrum from commanding to disposable reality. The former reality calls forth a life of en-
gagement that is oriented within the physical and social world. The latter induces a life of distraction
that is isolated from the environment and from other people.” On that last point, bear in mind that
Borgmann is writing in the early 2000s, before the onset of social media. (Though it is debatable whether his point still stands.)

Borgmann then addresses his analysis to human desire by noting that:

To the dissolution of commanding reality corresponds on the human side a peculiar restless-
ness. Since every item of cyberpresence can be x-rayed, zoomed into, overlayed, and aban-
doned for another more promising site, human desire is at every point at once satiated, disap-
pointed, and aroused to be once more gorged, left hungry, and spurred on.

Writing about contemporary consumerism, William T. Cavanaugh observes, "What really character-
izes consumer culture is not attachment to things but detachment. People do not hoard money; they
spend it. People do not cling to things; they discard them and buy other things.” Furthermore, Ca-
vanaugh adds, “Consumerism is not so much about having more as it is about having something else;
that’s why it is not simply buying but shopping that is the heart of consumerism. Buying brings a tem-
porary halt to the restlessness that typifies consumerism.”

Both Borgmann and Cavanaugh have identified an analogous pattern at the heart of both contempo-
rary technology and the consumerist spirit: both render reality essentially disposable. Both also note
how this disposable quality yields a restlessness or unsettledness that permeates our experience. This
experience of reality as essentially disposable and its attendant restlessness are characteristic of what sociologist Zygmunt Bauman has termed "liquid modernity."

Interestingly, one of the focal things identified by Borgmann is the book with its corresponding fo-
cal practice, reading. While Cavanaugh did not make this observation, it seems to me that the book as
object is one of the few commodities that resists his analysis of contemporary consumerism. That is to
say that books tend to be purchased and kept. There are exceptions, of course. Many books turn out
not to be worth keeping. We trade some books at used book stores for others. We also now sometimes sell certain books through services provided by Amazon.com and the like. Nonetheless, I would venture to say that those who purchase books often do so with an eye to keeping them. Where we would typically encounter detachment, with the book we find a measure of attachment. In a sea of technological, consumerist flux, the book is a fixed point. It is an object that is engaged and not merely used; it is possessed rather than readily disposed of; and perhaps, in modest measure, it tacitly alleviates our restlessness.

Perhaps this then provides one angle of approach to the analysis of electronic books and e-readers.
Consider Matt Henderson’s recent observations regarding his children’s experience of “reading” Al
Gore’s Our Choice, “Push Pop Press’s highly-anticipated first interactive book.” Henderson intro-
duced Our Choice to his two children, whom he describes as technologically savvy readers.

I showed them Our Choice, and just observed. They quickly figured out the navigation, and dis-
covered all the interactive features. But… they didn’t read the content. Fascinated, they skipped
through the book, hunting for the next interactive element, to see how it works. They didn’t
completely watch a single video.

When they finished, I asked them to tell me about the book. They described how they could
blow on the screen and see the windmill turn, how they could run their fingers across the inter-
active map and see colors changing. How they could pinch to open and close images. But they
couldn’t recall much of what the book was about. They couldn’t recall the message intended to
be communicated in any of the info-graphics (though they could recall, in detail, how they
worked.)

Run through Borgmann's grid, this seems to be an instance of contrast between a focal thing with its attendant practice and a device with its attendant consumption. The Kindle comes off better in Henderson's analysis, and in his children's experience, and this makes sense since the Kindle's interface lends itself more readily to focused engagement. And yet, the Kindle fails to provide the physical presence of the books we keep, which seems not insignificant as we search for anchors in an environment of manufactured restlessness and disposable realities. To borrow a line from T. S. Eliot, nostalgia for the book in this case is just our pursuit of a "still point of the turning world."

May 17, 2011


57. Kevin Kelly, God, and Technology

As I have read and thought about technology and its cultural consequences, I have especially appre-
ciated the works of Marshall McLuhan, Walter Ong, Jacques Ellul, Ivan Illich, and Albert
Borgmann. My appreciation stems not only from the quality and originality of their work, but also from
a curiosity about the manner in which their religion informed their thinking; all were deeply committed
to some expression of the Christian faith. We would do well to add the name of Kevin Kelly to the list
of theorists and students of technology who bring a theological perspective to their work.

Of course, the Christian tradition is an ocean with many currents, and so it is not surprising that, despite their common core commitments, the work of each of these scholars takes on a distinct hue. Of them, Kelly is in my estimation the most optimistic about the future of technology
and that comes across quite clearly in his recent interview with Christianity Today.

There Kelly connects technology with God’s own creative capacity and the freedom with which He
endows humanity:

“We are here to surprise God. God could make everything, but instead he says, ‘I bestow upon
you the gift of free will so that you can participate in making this world. I could make every-
thing, but I am going to give you some spark of my genius. Surprise me with something truly
good and beautiful.’”

He also provides the following explanation of the term technium, which he coined:

“I use technium to emphasize that human creation is more than the sum of all its parts. An
ecosystem behaves differently from its individual plant and animal components. We have
thoughts in our minds that are more than the sum of all neuron activity. Society itself has cer-
tain properties that are more than the sum of the individuals; there is an agency that’s bigger
than us. In the same way, the technium will have a behavior that you’re not going to find in
your iPhone or your light bulb alone. The technium has far more agency than is suggested by
the word culture.”

I find this emergent model to be an interesting way to get at the influence of technology. I try to
navigate a path between approaches to technology that take the tools to be determinative of human ac-
tion on the one hand, and others which take the tools to be merely neutral objects subject to human ac-
tion on the other. I’m not sure if I’m prepared to unreservedly endorse Kelly’s formulation, but I am
generally sympathetic.

I’m less inclined to sign onto the remarkably positive outlook Kelly articulates for the technium, al-
though I must admit that it is refreshing. Kelly is sure that “… the world is a better place now than it
was 1,000 years ago. Whatever quantifiable metric you want to give to me about what’s good in life, I
would say there’s more of it now than there was 1,000 years ago.” And, indeed, by many if not most
measures, it most certainly is. Yet, I would hesitate to claim that in every way life has improved, it has done so because of the technium, and I would be inclined to argue that in certain important respects elements of the technium have worked against human happiness and fulfillment.

Kelly acknowledges, but underemphasizes, the fallibility and folly of humanity. He believes that
God’s grace, seemingly operating through the technium, more than cancels out the folly. I share the
hope in principle, but would not so closely connect the operations of God’s grace to the sphere of tech-
nological advance.

Perhaps the point of tension that I experience with Kelly’s position stems from his definition of
goodness: “overall the technium has a positive force, a positive charge of good. And that good is pri-
marily measured in terms of the possibilities and choices it presents us with.” Kelly illustrates his point
by asking us to imagine Mozart being born into a world in which the piano had not been invented – what a tragedy. This resonates, but then we might ask: what of all those would-be Mozarts who did in fact live, as surely they must have? Was their happiness and fulfillment so tied to an as-yet-future invention that their lives were rendered unfulfilled? Would this not suggest that the grass is perpetually greener in the future, and so happiness and fulfillment are never finally attainable? Fulfillment would taunt us from just around the corner that is the future.

Perhaps the problem arises from too quickly eliding the infinite creative possibilities of the Creator
with the limited, derivative creativity of the creature. To be human is to flourish within the limitations
of material and embodied existence. Expanding choice is not necessarily a bad thing, of course, but
hitching the possibility of human fulfillment to the relentless expansion of choice seems to overlook the
manner in which the voluntary curtailment of choice might also serve as the path to a well-lived life.

Curiously, Kelly practices a way of life that would seem on the surface to be at odds with the gospel
of choice maximization. He has written engagingly about the Amish and recommended aspects of their
approach to technology. In his personal life, Kelly has implemented a good bit of Amish minimal-
ism. When asked whether this constituted an inconsistency between his words and his actions,
Kelly responded:

“Technology can maximize our special combination of gifts, but there are so many technologi-
cal choices that I could spend all my time just trying out technologies. So I minimize my tech-
nological choices in order to maximize my output. The Amish (and the hippies) are really good
at minimizing technologies. That’s what I am trying to do as well. I seek to find those technolo-
gies that assist me in my mission to express love and reflect God in the world, and then disre-
gard the rest.

But at the same time, I want to maximize the pool of technologies that people can choose from, so
that they can find those tools that maximize their options and minimize the rest.”

I can see his angle and would stop short of suggesting that this was indeed an inconsistency on Kelly's part, but I will say that, for my part, I find more wisdom in Kelly's practice than in his unbounded hope for the technium.

July 19, 2011


58. The Borg Complex: A Primer

I coined the term “Borg Complex” on a whim, and, though I’ve written on the concept a handful of
times, nowhere have I presented a clear, straightforward description. That’s what this post provides —
a quick, one-stop guide to the Borg Complex.

What is a Borg Complex?

A Borg Complex is exhibited by writers and pundits who explicitly assert or implicitly assume that
resistance to technology is futile. The name is derived from the Borg, a cybernetic alien race in the Star Trek universe that announces to its victims some variation of the following: "We will add your biological and technological distinctiveness to our own. Resistance is futile."

For example:

“In fifty years, if not much sooner, half of the roughly 4,500 colleges and universities now operating
in the United States will have ceased to exist. The technology driving this change is already at work,
and nothing can stop it.” (Nathan Harden)

“It may be hard to believe, but before the end of this century, 70 percent of today’s occupations will
likewise be replaced by automation. Yes, dear reader, even you will have your job taken away by ma-
chines. In other words, robot replacement is just a matter of time.” (Kevin Kelly)

“I’ve used [Google Glass] a little bit myself and – I’m making a firm prediction – in as little as three
years from now I am not going to be looking out at the world with glasses that don’t have augmented
information on them. It’s going to seem barbaric to not have that stuff.” (Phil Libin)

What are some other symptoms of a Borg Complex?

These symptoms may occur singly or in combination.


1. Makes grandiose, but unsupported claims for technology: “[MOOCs have] more potential to lift
more people out of poverty — by providing them an affordable education to get a job or improve in the
job they have. Nothing has more potential to unlock a billion more brains to solve the world’s biggest
problems.”

2. Uses the term Luddite a-historically and as a casual slur: “But [P2P apps are] considerably less
popular among city regulators, whose reactions recall Ned Ludd’s response to the automated loom.”

3. Pays lip service to, but ultimately dismisses genuine concerns: “This is going to add a huge
amount of new kinds of risks. But as a species, we simply must take these risks, to continue advancing,
to use all available resources to their maximum.”

4. Equates resistance or caution to reactionary nostalgia: “There’s no reason to cling to our old
ways. It’s time to ask: What can science learn from Google?”

5. Starkly and matter-of-factly frames the case for assimilation: “There is a new world unfolding and
everyone will have to adapt.”

6. Announces the bleak future for those who refuse to assimilate: “Technology can greatly enhance
religious practice. Groups that restrict and fear it participate in their own demise.”

7. Expresses contemptuous disregard for past cultural achievements: “I don’t really give a shit if lit-
erary novels go away.”

8. Refers to historical antecedents solely to dismiss present concerns: “… the novel as we know it
today is only a 200-year-old construct. And now we’re getting new forms of entertainment, new forms
of popular culture.”

Is there more than one form a Borg Complex may take?

Yes. There is temperamental variation ranging from the cheery to the embittered. There is also variation regarding the envisioned future, ranging from utopian to dystopian. Finally, there are different degrees of zeal, ranging from resignation to militancy. Basically, this means a Borg Complex may manifest itself in someone who thinks resistance is futile and is pissed about it, indifferently resigned to it, evangelistically thrilled by it, or some other combination of these options. So as an example, take someone like Kevin Kelly. He is cheery, utopian, and not particularly militant about it. This is, I suppose, a best-case scenario.

What causes a Borg Complex?

Causes, of course, is not the right word here; but we can point to certain sources. A Borg Complex
may stem from a philosophical commitment to technological determinism, the idea that technology
drives history. This philosophical commitment to technological determinism may also at times be min-
gled with a quasi-religious faith in the envisioned techno-utopian future. The quasi-religious form of
the Borg Complex can be particularly pernicious since it understands resistance to be heretical and im-
moral. A Borg Complex may also stem from something more banal: self-interest, usually of the com-
mercial variety. Apathy may also lead to a Borg Complex, as may a supposedly hard-nosed, common-
sense pragmatism.

Aren’t Borg Complex claims usually right?

A Borg Complex diagnosis does not necessarily invalidate the claims being made; it is primarily the identification of a rhetorical stance and the uses to which it is put. That said, examining Borg Complex rhetoric leads naturally to the question of technological determinism. It's worth noting that historians of technology have posed serious challenges to the notion of technological determinism. Historical contingencies abound and there are always choices to be made. The appearance of inevitability is a trick played by our tendency to make a neat story out of the past. Even if some Borg Complex claims prove true, it is worth asking why, and whether Borg Complex assumptions did not act as self-fulfilling prophecies.

What does it matter?

Marshall McLuhan once said, “There is absolutely no inevitability as long as there is a willingness
to contemplate what is happening.” The handwaving rhetoric that I’ve called a Borg Complex is reso-
lutely opposed to just such contemplation when it comes to technology and its consequences. We need
more thinking, not less, and Borg Complex rhetoric is typically deployed to stop rather than advance
discussion. What’s more, Borg Complex rhetoric also amounts to a refusal of responsibility. We can-
not, after all, be held responsible for what is inevitable. Naming and identifying Borg Complex rhetoric
matters only insofar as it promotes careful thinking and responsible action.

March 1, 2013


59. Conquering the Night: Technology, Fear, and Anxiety

Tim Blanning begins his review of Craig Koslofsky’s Evening’s Empire: A history of the night in
early modern Europe as follows:

In 1710, Richard Steele wrote in Tatler that recently he had been to visit an old friend just come
up to town from the country. But the latter had already gone to bed when Steele called at 8 pm.
He returned at 11 o’clock the following morning, only to be told that his friend had just sat
down to dinner. “In short”, Steele commented, “I found that my old-fashioned friend religiously
adhered to the example of his forefathers, and observed the same hours that had been kept in his
family ever since the Conquest”. During the previous generation or so, elites across Europe had
moved their clocks forward by several hours. No longer a time reserved for sleep, the night time
was now the right time for all manner of recreational and representational purposes.

Given my recent borrowings from David Nye’s study of electrification, it will come as no surprise
that the title of Blanning’s review, “The reinvention of the night,” caught my eye. I was expecting a
book dealing with the process of electrification, but Koslofsky’s story, as the subtitle of his book sug-
gests, unfolds two to three hundred years before electrification.

It also features technology less prominently than I anticipated; at least, Blanning doesn't emphasize it in his review. In fact, he writes, "This had little to do with technological progress, for until the nine-
teenth century only candles and oil lamps were available.” This suggests a rather narrow definition of
technology since candles and oil lamps are just that. It may be that Blanning’s emphasis is on the
“progress” side of “technological progress,” but immediately after this sentence he writes, “Most ad-
vanced was the oil lamp developed in the 1660s by Jan van der Heyden, which used a current of air
drawn into the protective glass-paned lantern to prevent the accretion of soot, and made Amsterdam the
best-lit city in Europe.” This would amount to technological progress, no?

In any case, what I found most interesting, and what connected directly with Nye, was the following
observation by Blanning: “At the heart of his argument is the contrariety between day and night, light
and dark. On the one hand, the sixteenth century witnessed an intensification of the association of the
night with evil …” That, and the theme of a shifting civic/public sphere (a la Habermas) that moved
not only from the town square to the aristocratic halls and coffee houses, but also from the day time to
the night time.

Take both together and we have another example of the reciprocal relationship between technology
and social structures, assuming you’re buying my hunch that this is a story in which technology, even if
it is “primitive” technology, is implicated.

A society’s symbolic tool kit can shift. We might take for granted that night always evoked fear and
dread and evil, and although there is something to that of course, the story is more complex. Perhaps
night’s identification with evil intensified in part because of the gradual conquest of the night by artifi-
cial illumination. It would be a paradoxical case of unintended, unforeseen consequences. The more we
domesticate darkness, the more darkness takes its revenge on us. Perhaps if we were more at home in
the darkness, we would be less fearful of it.

Consider the following passage cited by Nye from Henry Beston writing in the early twentieth cen-
tury:

“We of the age of the machines, having delivered ourselves of nocturnal enemies, now have a
dislike of night itself. With lights and ever more lights, we drive the holiness and beauty of the
night back to the forests and the sea; the little villages, the crossroads even, will have none of it.
Are modern folk, perhaps, afraid of night? Do they fear that vast serenity, the mystery of infi-
nite space, the austerity of stars? Having made themselves at home in a civilization obsessed
with power, which explains its whole world in terms of energy, do they fear at night for their
dull acquiescence and the pattern of their beliefs? Be the answer what it will, today’s civiliza-
tion is full of people who have not the slightest notion of the character or poetry of night, who
have never even seen the night.”

This is an interesting passage not only because it suggests how technologies enter into symbolic
ecosystems and reshape those ecosystems just as their adoption is conditioned by that same ecosystem.
It is also interesting because what Beston feared losing — the serenity, mystery, austerity — is itself a
very modern sensibility. This is the sensibility of a modern individual formed by a post-Copernican
cosmology. A medieval individual could not have written that passage, and not only because illumination was for them a still distant and unimagined phenomenon. They were still at home in a much smaller and more coherent universe. They did not look up and see "space"; they looked up and saw the vastly populated heavens. But that is another story.

Night has been on the retreat for quite some time now, conquered by the science and technology of
illumination. Night’s retreat has had social, political, and psycho-symbolic consequences. The mystery
of the night is chased away only to allow in the terror of the night. Technology shapes and is shaped by
its semiotic environment.

One of the wonders of writing is that you’re never quite sure where you will end up when you start.
So while my initial intent was simply to note another illustration of Nye’s notion of the social construc-
tion of technology, writing’s momentum leads me to the notion that domestication may, ironically, lead
to the displacement of fear and anxiety to another register. It is as if, like Dr. Moreau’s creatures, the
realms we tame through our techno-scientific prowess remain sources of reconfigured and intensified
fear and anxiety. Dr. Moreau instills fear in his beasts through violence because he fears the violence
they may do to him. Fear begets fear — that is perhaps the core of H. G. Wells’ tale — and that princi-
ple is at work in our uneasy relationship with technology.

If we pursue technology in order to conquer what we fear, then we also create an attendant anxiety over the (inevitable?) failure of our systems of control and mastery. It would seem that what we tame retains its wildness, veiled yet palpable and intensified.

September 28, 2011


60. Disconnected Varieties of Augmented Experience

In a short blog post, “This is What the Future Looks Like,” former Microsoft executive Linda Stone
writes:

“Over the last few weeks, I’ve been noticing that about 1/3 of people walking, crossing streets,
or standing on the sidewalk, are ON their cell phones. In most cases, they are not just talking;
they are texting or emailing — attention fully focused on the little screen in front of them.
Tsunami warning? They’d miss it.

With an iPod, at least as the person listens, they visually attend to where they’re going. For
those walking while texting or sending an email, attention to the world outside of the screen is
absent. The primary intimacy is with the device and its possibilities."

I suspect that you would be able to offer similar anecdotal evidence. I know that this kind of waking
somnambulation characterizes a good part of those making their way about the campus of the very
large university I attend.

Stone offered these comments by way of introducing a link to a video documenting Jake P. Reilly's 90-day experiment in disconnection. He called it "The Amish Project." Reilly was very pleased with what he found on the other side of the connected life. Asked in an interview whether this experience changed his life, Reilly had this to say:

“It’s definitely different, but I catch myself doing exactly what I hated. Someone is talking to
me and I’m half-listening and reading a text under the table. For me, it’s trying to be more
aware of it. It kind of evolved from being about technology to more of just living in the mo-
ment. I think that’s what my biggest thing is: There’s not so much chasing for me now. I’m here
now, and let’s just enjoy this. You can be comfortable with yourself and not have to go to the
crutch of your phone. For me, that’s more what I will take away from this.”

Although not directly addressing Reilly's experiment, Jason Farman has written a thoughtful piece in
The Atlantic that calls into question the link between online connectivity and disconnection from lived
experience. In “The Myth of the Disconnected Life,” Farman takes as his foil William Powers’ book,
Hamlet’s Blackberry. In his work, Powers commends the practice of taking Digital Sabbaths. Farman
has good things to say about Powers’ work and technology Sabbaths (in fact, his tone throughout is re-
freshingly irenic). Nonetheless, he does find the framing of the issue problematic:

"However, using 'disconnection' as a reason to disconnect thoroughly simplifies the complex ways we use our devices while simultaneously fetishizing certain ways of gaining depth.
Though the proponents of the Digital Sabbath put forth important ideas about taking breaks
from the things that often consume our attention, the reasons they offer typically miss some
very significant ways in which our mobile devices are actually fostering a deeper sense of con-
nection to people and places.”

Farman then discusses a variety of mobile apps that in his estimation deepen the experience of place
for the smartphone-equipped individual rather than severing them from physical space. His examples
include [murmur], Broadcastr, and an app from the Museum of London. The first two of these apps al-
low users to record and listen to oral histories of the place they find themselves in and the latter allows
users to overlay images of the past onto locations throughout London using their smartphones.

In Farman’s view, these kinds of apps provide a deeper experience of place and so trouble the narra-
tive that simplistically opposes digital devices to connection and authentic experience:

“Promoting this kind of deeper context about a place and its community is something these mo-
bile devices are quite good at offering. A person can live in a location for his or her whole life
and never be able to know the full history or context of that place; collecting and distributing
that knowledge – no matter how banal – is a way to extend our understanding of a place and gain a deeper connection to its meanings.

Meaning is, after all, found in the practice of a place, in the everyday ways we interact with it
and describe it. Currently, that lived practice takes place both in the physical and digital worlds,
often through the interface of the smartphone screen.”

My instinct usually aligns me with Stone and Powers in these sorts of discussions. Yet, Farman
makes a very sensible point. We must acknowledge complexity and resist dichotomies that blind us to
important dimensions of our experience. It is also true that debates about technology tend to gloss over
the uses to which technologies are actually put by the people who use them.

All of this calls to mind the work of Michel de Certeau on two counts. First, de Certeau made much
of the use to which consumers put their products. In his time, the critical focus had fallen on the prod-
ucts and producers; consumers were tacitly assumed to be passive and docile recipients, even victims,
of the powers of production. De Certeau made it a point, especially in The Practice of Everyday Life,
to throw light on the multifarious, and often impertinent, uses to which consumers put products. In
many respects, this also reflects the competing approaches of internalists and social constructionists
within the study of the history of technology. For the former, the logic of the device dominates analy-
sis; for the latter, the uses to which devices are put by users are key. Farman, likewise, is calling us to be
attentive to what some users, at least, are actually doing with their digital technologies.

De Certeau also had a good deal to say about the practice of place, or how we experience places and
spaces. In a chapter of The Practice of Everyday Life titled "Walking in the City," he explicitly focused
on the manner in which memories haunted places.

Places have a way of absorbing and bearing memories that they then relinquish, bidden or unbid-
den. The context of walking and moving about spaces leads de Certeau to describe memory as “a sort
of anti-museum: it is not localizable." Where museums gather pieces and artifacts in one location, our memories have dispersed themselves across the landscape; they colonize it. Here a memory by that tree,
there a memory in that house. De Certeau develops a notion of a veiled remembered reality that lies be-
neath the visible experience of space.

Places are made up of “moving layers.” We point, de Certeau says, here and there and say things
like, “Here, there used to be a bakery” or “That’s where old lady Dupuis used to live.” We point to a
present place only to evoke an absent reality: “the places people live in are like the presences of di-
verse absences.” Only part of what we point to is there physically; but we’re pointing as well to the in-
visible, to what can’t be seen by anyone else, which begins to hint at a certain loneliness that attends to
memory.

Reality, we might then say, is already augmented. It is freighted with our memories; it comes alive with distant echoes and fleeting images.

Digitally augmented reality functions analogously to what we might call the mentally augmented re-
ality that de Certeau invokes. Digital augmentation also reminds us that places are haunted by memo-
ries of what happened there, sometimes to very few, but often to countless many. The digital tools Far-
man described bring to light the hauntedness of places. They unveil the ghosts that linger by this place
and that.

The first observation that follows is that in contrast to mental augmentation, digital augmentation, as
represented by two of the apps Farman describes, is social. In a sense, it appears to transcend the loneli-
ness of memory that de Certeau recognized.

De Certeau elaborated on the loneliness of memory when he approvingly cited the following obser-
vation: “‘Memories tie us to that place …. It is personal, not interesting to anyone else …’” It is like
sharing a dream with another person: its vividness and pain or joy can never be recaptured and repre-
sented so as to affect another in the same way you were affected. It is not interesting to anyone else,
and so it is with our memories. Others will listen, they will look where you point, but they cannot see
what you see.

I wonder, though, if this is not also the case with the stories collected by apps such as [murmur] and
Broadcastr. Social media often seeks to make the private of public consequence, but very often it sim-
ply isn’t. Farman believes that a better understanding of a place and a deeper connection to its mean-
ings is achieved by collecting and distributing knowledge of that place “no matter how banal.” Perhaps
it is that last phrase that gives me pause. What counts as banal is certainly subjective, but that is just the
point. The seemingly banal may be deeply meaningful to the one who experienced it, but it strikes me
as rather generous to believe that the banal that takes on meaning within the context of one’s own expe-
rience could be rendered meaningful to others for whom it is not only banal, but also lacking a place
within a narrative of lived experience out of which meaning arises.

The London Museum app seems to me to be of a different sort because it links us back to a more
distant past or a past that is, in fact, of public consequence. In this case, the banality is overcome by
distance in time. What was a banal reality of early twentieth century life, for example, is now foreign
and somewhat exotic — it is no longer banal to us.

Wrapped up in this discussion, it seems to me, is the question of how we come to meaningfully ex-
perience place — how a space becomes a place, we might say. Mere space becomes a place as its par-
ticularities etch themselves into consciousness. As we walk the space again and again and learn to feel
our way around it, for example, or as we haunt it with the ghosts of our own experience.

I would not go so far as to say that digital devices necessarily lead to a disconnected or inauthentic experience of place. I would argue, however, that there is a tendency in that direction. And the introduction of a digital device does necessarily create a phenomenological rupture in our experience of a place. What we do with that device, of course, matters a great deal, as Farman rightly insists. But most of what we do does divide our attentiveness and mindfulness, even when it serves to provide information.

Perhaps I am guilty, as Farman puts it, of “fetishizing certain ways of gaining depth.” But I am
taken by de Certeau’s conception of walking as a kind of enunciation that artfully actualizes a multi-
tude of possibilities in much the same way that the act of speaking actualizes the countless possibilities
latent in language. Like speaking, then, inhabiting a space is a language with its own rhetoric. Like
rhetoric proper, the art of being in a place depends upon an acute attentiveness to opportunities offered
by the space and a deft, improvised actualization of those possibilities. It is this art of being in a place
that constitutes a meaningful and memorable augmentation of reality. Unfortunately, the possibility of
unfolding this art is undermined by the manner in which our digital devices ordinarily dissolve and dis-
tribute the mindfulness that is its precondition.

February 9, 2012


61. Technology, Speed, and Power

Alvin Toffler's 1970 book, Future Shock, popularized the title phrase and the concept which it named: that technology was responsible for the disorienting and dizzying pace of life in late twentieth-century industrialized society. Toffler, however, was not the first to observe that technology appeared to accelerate the pace of modern life; he only put a name to an experience people had been reporting since at least the late nineteenth century.

More recently, Tom Vanderbilt framed his essay, “The Pleasure and Pain of Speed,” this way:

“The German sociologist Hartmut Rosa catalogues the increases in speed in his recent
book, Social Acceleration: A New Theory of Modernity. In absolute terms, the speed of human
movement from the pre-modern period to now has increased by a factor of 100. The speed of
communications (a revolution, Rosa points out, that came on the heels of transport) rose by a
factor of 10 million in the 20th century. Data transmission has soared by a factor of around 10
billion.

As life has sped up, we humans have not, at least in our core functioning: Your reaction to stim-
uli is no faster than your great-grandfather’s. What is changing is the amount of things the
world can bring to us, in our perpetual now. But is our ever-quickening life an uncontrolled jug-
gernaut, driven by a self-reinforcing cycle of commerce and innovation, and forcing us to cope
with a new social and psychological condition? Or is it, instead, a reflection of our intrinsic de-
sire for speed, a transformation of the external world into the rapid-fire stream of events that is
closest to the way our consciousness perceives reality to begin with?”

Vanderbilt explores both of these possibilities without arriving at a definitive conclusion. He closes
his essay on this ambiguous note: “That we enjoy accelerated time, and pay a price for it, is clear. The
ledger of benefits and costs may be impossible to balance, or even to compute.”

He goes on to say that the most important characteristic of accelerated time may be neither pleasure
nor pain, but utility. It’s not entirely clear, though, how useful acceleration really is. Earlier, Vanderbilt
had noted the following:

“Rosa says the ‘technical’ acceleration of being able to send and receive more emails, at any
time, to anyone in the world, is matched by a ‘social’ acceleration in which people are expected
to be able to send and receive emails at any time, in any place. The desire to keep up with this
acceleration in the pace of life thus begets a call for faster technologies to stem the tide. And
faced with a scarcity of time (either real or perceived), we react with a ‘compression of episodes
of action’—doing more things, faster, or multitasking. This increasingly dense collection of
smaller, decontextualized events bump up against each other, but lack overall connection or
meaning.”

At the turn of the twentieth century, not everyone was complaining about the accelerating pace of
life. In 1909, F.T. Marinetti published “The Futurist Manifesto,” a bracing call to embrace the acceler-
ated pace of modernity. The Italian Futurists glorified speed and technologies of acceleration. They
also glorified war, violence, danger, and chauvinism, of both the male and nationalistic variety. At
the Smart Set, Morgan Meis recently reviewed the Guggenheim’s exhibition, “Italian Futurism, 1909–
1944: Reconstructing the Universe.” Early in his review, Meis cited the fourth of Marinetti’s Futurist
principles:

“We declare that the splendor of the world has been enriched by a new beauty: the beauty of
speed. A racing automobile with its bonnet adorned with great tubes like serpents with explo-
sive breath … a roaring motor car which seems to run on machine-gun fire, is more beautiful
than the Victory of Samothrace.”

The Manifesto also included the following cheery principles of action:

8. We want to glorify war — the only cure for the world — militarism, patriotism, the destruc-
tive gesture of the anarchists, the beautiful ideas which kill, and contempt for woman.
9. We want to demolish museums and libraries, fight morality, feminism and all opportunist and
utilitarian cowardice.

Meis rightly insists that Futurism and Fascism were inextricably bound together despite the efforts of some critics to disentangle the two: "There is no point in denying it. Most of the Italian Futurists, Marinetti most of all, were enthusiastic fascists." From there, Meis goes on to argue that the
Futurist/Fascist embrace of speed, violence, and technology was a form of coping with the horrors of
the First World War:

“Sensible persons confronted by the calamity and destruction of WWI felt chastened and de-
pressed in the aftermath of the war. There were civilization-wide feelings of shame and regret.
“Let us never,” it was said by most, “do that again. Let the Great War be the war that ends all
wars.” The Futurists did not accept this line of thinking. They were among an odd and select
group that said, “Let us have more of this. Let us have bigger and more spectacular wars.” […]
“How does it make us feel,” they wondered, “to say yes to the bombs and the machines and the
explosions?”

In saying “yes,” in asking for more, they discovered a secret source of power. It was a way for-
ward. By choosing to embrace the most terrible aspects of the war and the industrial civilization
that had made it possible, the Futurists gave themselves a kind of immunity from the paralysis
that European civilization experienced after the war.”

In other words, there was power in embracing the forces that were disordering society and disorient-
ing individuals. There is nothing terribly profound in the realization that embracing new technologies
often puts one at an advantage over those who are slower to do so or refuse altogether. It is only the
explicit embrace of violence in the wake of World War I that lends the Futurists the patina of radical
action.

In our own day, we hear something like the Futurist bravado in the rhetoric of the posthumanist
movement. The future belongs to the brave and the strong, to those who are untroubled by the nebulous
ethical qualms of the weak and sentimental. As Gary Marcus recently wrote about the future of bio-
technological enhancements, “The augmented among us—those who are willing to avail themselves of
the benefits of brain prosthetics and to live with the attendant risks—will outperform others in the ev-
eryday contest for jobs and mates, in science, on the athletic field and in armed conflict. These differ-
ences will challenge society in new ways—and open up possibilities that we can scarcely imagine.”

If we must speak of inevitability with regard to technology, then we must speak of the inevitability
of the human quest for power.

The French philosopher and critic Paul Virilio was ten years old when the Nazi war machine over-
ran his home. He went on to devote his academic life to the problems of speed, technology, and moder-
nity. Anyone who voices concerns with the character of technological modernity will eventually be
asked if they are against Progress, and, indeed, such was the case for Virilio in a 2010 interview. He re-
sponded this way:

“No. I’ve never thought we should go back to the past. But why did the positive aspect of
progress get replaced by its propaganda? Propaganda was a tool used by Nazis but also by the
Futurists. Look at the Italian Futurists. They were allies with the Fascists. Even Marinetti. I
fight against the propaganda of progress, and this propaganda bears the name of never-ending
acceleration.”

A little later in the interview, Virilio was asked if he was annoyed by those who insisted that techno-
logical progress was an unalloyed good. He responded,

“Yes, it’s very irritating. These people are victims of propaganda. Progress has replaced God.
Nietzsche talked about the death of God—I think God was replaced by progress. I believe that
you must appreciate technology just like art. You wouldn’t tell an art connoisseur that he can’t
prefer abstractionism to expressionism. To love is to choose. And today, we’re losing this. Love
has become an obligation. Progress has all the defects of totalitarianism.”

Virilio has long argued that the "accident" is an intrinsic element of technology. His use of the word "accident" is meant to encompass more than the unplanned happenstance; he alludes as well to the old philosophical distinction between substance and accidents, but it does take the conventional accident as a point of departure. Virilio does not, however, see the accident as an unfortunate deviation
from the norm, but as a necessary component. In a variety of places, he has expressed some variation of
the following:

“When you invent the ship, you also invent the shipwreck; when you invent the plane you also
invent the plane crash; and when you invent electricity, you invent electrocution… Every tech-
nology carries its own negativity, which is invented at the same time as technical progress.”

Elsewhere, Virilio borrowed a theological concept to further elaborate his theory of the accident:

“Now, this idea of original sin, which materialist philosophy rejects so forcefully, comes back
to us through technology: the accident is the original sin of the technical object. Every technical
object contains its own negativity. It is impossible to invent a pure, innocent object, just as there
is no innocent human being. It is only through acknowledged guilt that progress is possible. Just
as it is through the recognized risk of the accident that it is possible to improve the technical ob-
ject.”

In yet another context, Virilio explained, “Accidents have always fascinated me. It is the intellectual
scapegoat of the technological; accident is diagnostic of technology.” This is a provocative idea. It sug-
gests not only that the accident is inseparable from new technology, but also that technological
progress needs the accident. Without the accident there may be no progress at all. But we do not ac-
knowledge this because then the happy illusion of clean and inevitable technological progress would
take on the bloody aspect of a sacrificial cult.

In the face of the quasi-religious status of the cult of technology, Virilio insists, “It is necessary to be
an atheist of technology!”

In Future Shock, Alvin Toffler commended the reading of science fiction. Against those who held science fiction in low literary regard, Toffler believed "science fiction has immense value as a mind-
stretching force for the creation of the habit of anticipation.” “Our children,” he urged,

“should be studying Arthur C. Clarke, William Tenn, Robert Heinlein, Ray Bradbury and
Robert Sheckley, not because these writers can tell them about rocket ships and time machines
but, more important, because they can lead young minds through an imaginative exploration of
the jungle of political, social, psychological, and ethical issues that will confront these children
as adults.”

Bradbury, for one, marked an obsession with speed as one of the characteristic disorders of the
dystopian world of Fahrenheit 451. In that world, traveling at immense and reckless speed was the
norm; slowness was criminal. In the story, speed is just one more of the thought-stifling conditions of
life for most characters. The great danger is not only the risk of physical accidents, but also the risk of
thoughtlessness, indifference, and irresponsibility. Mandated speed and the burning of books both
worked toward the same end.

Interestingly, these imaginative aspects of Bradbury’s story were already present in the Futurists’ vi-
sion. Not only did they glorify speed, violence, and technology, they also called for an incendiary as-
sault on the educational establishment: “It is in Italy that we are issuing this manifesto of ruinous and
incendiary violence, by which we today are founding Futurism, because we want to deliver Italy from
its gangrene of professors, archaeologists, tourist guides and antiquaries.”

In Fahrenheit 451, a character named Faber is among the few to remember a time before the present
regime of thoughtlessness. When the protagonist, Guy Montag, meets with him, Faber lays out three
necessary conditions for the repair of society: quality of information, the time to digest it, and “the right
to carry out actions based on what we learn from the interaction of the first two.”

If we credit Bradbury with knowing something about what wisdom and freedom require, then we
might conclude that three disastrous substitutions tempt us. We are tempted to mistake quantity of in-
formation for quality, speed for time, and consumer choice for the freedom of meaningful action. It
seems, too, that a morbid, totemic fascination with the technical accident distracts us from the more
general social accident that unfolds around us.

It would be disingenuous to suggest “solutions” to this state of affairs. Bradbury himself steered
clear of a happy ending. In the world of Fahrenheit 451, the general, cataclysmic accident is not cir-
cumvented. We may hope and work for better. At least, we should not despair.

Another writer of imaginative fiction, J.R.R. Tolkien, has his character Gandalf declare, "Despair is
only for those who see the end beyond all doubt. We do not.”

Tolkien too lived through the mechanized horrors of the First World War. He would likely sympa-
thize with Virilio’s claim, “War was my university.” But unlike the Futurists and their Fascist heirs, he
did not believe that an embrace of power and violence was the only way forward. Interestingly, the
counsel against despair cited above is taken from a point in The Fellowship of the Ring when a course of action with regard to the ring of power is being debated. Some of those present urged that the ring be used for good. The wiser among them, however, recognized that this was foolishness. The ring was designed for dominance, and the power it yielded would always bend toward its design and corrupt those who wielded it.

It is worth mentioning that Tolkien once explained to a friend that “all this stuff is mainly concerned
with Fall, Mortality, and the Machine.” He elaborated on the idea of the “Machine” in this way: “By
the last I intend all use of external plans or devices (apparatus) instead of development of the inherent
inner powers or talents — or even the use of these talents with the corrupted motive of dominating:
bulldozing the real world, or coercing other wills.”

So rather than submit to the logic of power and dominance, Gandalf urges a counterintuitive course
of action:

“Let folly be our cloak, a veil before the eyes of the Enemy! For he is very wise, and weighs all
things to a nicety in the scales of his malice. But the only measure that he knows is desire, de-
sire for power; and so he judges all hearts. Into his heart the thought will not enter that any will
refuse it, that having the Ring we may seek to destroy it. If we seek this, we shall put him out of
reckoning.”

The point, I suggest, is not that we should seek to destroy our technology. It is rather that we should
refuse the logic that has animated so much of its history and development: the logic of power, domi-
nance, and thoughtless irresponsibility. That would be a start.

March 29, 2014


62. Our Little Apocalypses

An incoming link to my synopsis of Melvin Kranzberg's Six Laws of Technology alerted me to a piece on Quartz about a new book by an author named Michael Harris. The book, The End of Absence:
Reclaiming What We’ve Lost in a World of Constant Connection, explores the tradeoffs induced by the
advent of the internet. Having not read the book, I can’t say much about it; but I was intrigued by one
angle Harris takes that comes across in the review.

Harris’s book focuses on the generation, a fuzzy category to be sure, that came of age just before the
Internet exploded onto the scene in the early 90s. Here’s Harris:

“If you were born before 1985, then you know what life is like both with the internet and with-
out. You are making the pilgrimage from Before to After.

If we’re the last people in history to know life before the internet, we are also the only ones who
will ever speak, as it were, both languages. We are the only fluent translators of Before and Af-
ter.”

Except for a brief flirtation with Prodigy on an MS-DOS machine with a monochrome screen, the
Internet did not come into my life until I was a freshman in college. I’m one of those people Harris is
writing about, one of the Last Generation to know life before the Internet. Putting it that way threatens
to steer us into a rather unseemly romanticism, and, knowing that I’m temperamentally drawn to dying
lights, I want to make sure I don’t give way to it. That said, it does seem to me that those who’ve
known the Before and After, as Harris puts it, are in a unique position to evaluate the changes. Experi-
ence, after all, is irreducible and incommunicable.

One of the recurring rhetorical tropes that I've listed as a Borg Complex symptom is to note that while every new technology elicits criticism and evokes fear, society always survives the so-called moral panic or techno-panic, and then to conclude, QED, that such critiques and fears are always misguided and overblown. It's a pattern of thought I've complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now while it is true, as Adam Thierer has noted here, that we should try to understand how societies
and individuals have come to cope with or otherwise integrate new technologies, it is not the case that
such negotiated settlements are always unalloyed goods for society or for individuals. But this line of
argument is compelling to the degree that living memory of what has been displaced has not yet been
lost. I may know at an intellectual level what has been lost, because I read about it in a book for exam-
ple, but it is another thing altogether to have felt that loss. We move on, in other words, because we
forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves
—they were always someone else’s problem.

Let me affirm that yes, of course, I certainly would have made (and do make) many trade-offs along the way, too. To recognize costs and losses does not mean that you always refuse to incur them; it simply means that you might incur them in something other than a naive, triumphalist spirit.

Around this time last year, an excerpt from Jonathan Franzen’s then-forthcoming edited work on
Karl Kraus was published in the Guardian; it was panned, frequently and forcefully. But the conclu-
sion of the essay struck me then as being on to something.

“Maybe … apocalypse is, paradoxically, always individual, always personal,” Franzen wrote,

“I have a brief tenure on earth, bracketed by infinities of nothingness, and during the first part
of this tenure I form an attachment to a particular set of human values that are shaped inevitably
by my social circumstances. If I’d been born in 1159, when the world was steadier, I might well
have felt, at fifty-three, that the next generation would share my values and appreciate the same
things I appreciated; no apocalypse pending.”

But, of course, he wasn’t. He was born in the modern world, like all of us, and this has meant
change, unrelenting change. Here is where the Austrian writer Karl Kraus, whose life straddled the turn
of the twentieth century, comes in: “Kraus was the first great instance of a writer fully experiencing
how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for
personal apocalypse.” Perhaps. I’m tempted to quibble with this claim. The words of John Donne, “Tis
all in pieces, all coherence gone,” come to mind. Yet, even if Franzen is not quite right about the histor-
ical details, I think he’s given honest voice to a common experience of modernity:

“The experience of each succeeding generation is so different from that of the previous one that
there will always be people to whom it seems that the key values have been lost and there can
be no more posterity. As long as modernity lasts, all days will feel to someone like the last days
of humanity. Kraus’s rage and his sense of doom and apocalypse may be the antithesis of the
upbeat rhetoric of Progress, but like that rhetoric, they remain an unchanging modality of
modernity.”

This is, perhaps, a bit melodramatic, and it is certainly not all that could be said on the matter, or all
that should be said. But Franzen is telling us something about what it feels like to be alive these days.
It’s true, Franzen is not the best public face for those who are marginalized and swept aside by the tides
of technological change, tides which do not lift all boats, tides which may, in fact, sink a great many.
But there are such people, and we do well to temper our enthusiasm long enough to enter, so far as it is
possible, into their experience. In fact, precisely because we do not have a common culture to fall back
on, we must work extraordinarily hard to understand one another.

Franzen is still working on the assumption that these little personal apocalypses are a generational
phenomenon. I’d argue that he’s underestimated the situation. The rate of change may be such that the
apocalypses are now intra-generational. It is not simply that my world is not my parents’ world; it is
that my world now is not what my world was a decade ago. We are all exiles now, displaced from a
world we cannot reach because it fades away just as its contours begin to materialize. This explains
why, as I have written before, nostalgia is not so much a desire for a place or a time as it is a desire for
some lost version of ourselves. We are like Margaret, who, in Hopkins’ poem, laments the passing of
the seasons, Margaret to whom the poet’s voice says kindly, “It is Margaret you mourn for.”

Although I do believe that certain kinds of change ought to be resisted—I’d be a fool not to—none
of what I’ve been trying to get at in this post is about resisting change in itself. Rather, I think all I’ve
been trying to say is this: we must learn to take account of how differently we experience the changing
world so that we might best help one another as we live through the change that must come. That is all.

August 22, 2014


63. Consider the Traffic Light Camera

It looks like I may be getting a traffic citation in the mail within the next few days. A few
nights ago, while making a left into my neighborhood, I was slowed by a car that made a creeping right
ahead of me onto the same street. As I finally completed my turn, I saw a bright flash go off behind me.
While I’ve noted the proliferation of traffic light cameras around town with mildly disconcerted inter-
est, I hadn’t yet noticed the camera at this rather inconsequential intersection. A day or two later at the
same spot, I found myself coming to an abrupt stop once the light hit yellow to ensure that I wasn’t
caught completing my turn as the light turned red. Automated surveillance had done its job; I had inter-
nalized the gaze of the unblinking eye.

For some time now I’ve been unsettled by the proliferation of traffic light cameras, but I’ve not yet
been able to articulate why exactly. While Big Brother fears may be part of the concern, I’m mostly
troubled by how the introduction of automated surveillance and ticketing seems to encourage the re-
placement of human judgment, erroneous as it may often be, by unthinking, habituated behavior.

The traffic light camera knows only that you have crossed or not crossed a certain point at a certain
time. Its logic is binary: you are either out of the intersection or in it. Context matters not at all; there is
no room for deliberation. Even if we can imagine a limited set of valid reasons for proceeding through a red light, automated ticketing cannot entertain them. The intermittently monitored yellow light invited judgment and practical wisdom; the unceasingly monitored yellow light tolerates only unwavering compliance.
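
To make the contrast concrete, here is a minimal sketch of the camera’s decision rule (my own hypothetical illustration, not any vendor’s actual code). Its entire “world” consists of two facts, so context and deliberation have nowhere to enter:

    # A hypothetical sketch of automated enforcement: two booleans exhaust
    # everything the camera can "know" about the situation.
    def issue_citation(in_intersection: bool, light_is_red: bool) -> bool:
        return in_intersection and light_is_red

    # An officer might ask why the car was still in the intersection (a stalled
    # vehicle ahead, a passing ambulance); the camera cannot.
    print(issue_citation(True, True))  # True: a ticket, whatever the circumstances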

In this way, it hints at a certain pattern in the relationship between human beings and the complex
technological systems we create. Take the work of getting from here to there as an example. Our base-
line is walking. We can walk from here to there in just about any way that the terrain will allow and at
whatever rate our needs dictate. And while the journey may have its perils, they are not inherent in
walking itself. After all, it would be a strange accident indeed if I were incapacitated as a result of
bumping into someone or even stumbling on a stone.

But walking is not necessarily the fastest or most efficient way of getting from here to there, espe-
cially if I have a sizable load to bear. Horse-drawn conveyances relieve me of the work of walking and
potentially increase my rate of speed without, it seems to me, radically increasing the risks. But they
also tend to limit my freedom of motion, a decently kept road being rather more of a necessity than it
would’ve been for the walker. Desire paths illustrate this point neatly. The walker may make his own
path, and frequently does so.

Then the train comes along. The train radically increased the rate of speed at which human beings
may travel, but it also elevated risks–a derailment, after all, is not quite the same thing as a stumble–
and restricted freedom of motion to the tracks laid out for it. It’s worth noting that the railway system
was one of the first expansive technological systems requiring, for its efficient and safe operation,
rigidly regimented and coordinated action. We even owe the time zones to the systematizing demands
of the railroads. The railway system, then, was a massive feat of system building, and would-be travel-
ers were integrated into this system.

The automobile, too, is powerful and potentially dangerous, and must also be carefully managed.
Consequently, we created an elaborate system of roads and rules to govern how we use this powerful
machine; we created a mechanistic environment to manage the machine. Interestingly, the car allows
for a bit more freedom of action than the train, illustrated nicely by the off-roading ideal, which is
a fantasy of liberation. But, for the most part, our driving, in order to be safe and efficient, is rational-
ized and systematized. Apart from this over-arching systematization, the off-roading fantasy would
have little appeal. All of this is, of course, a “good thing.” Safety is important, etc., etc. A video clip filmed at an intersection in Addis Ababa illustrates what driving looks like in the absence of such rationalization and regimentation.

In an ideal world, one in which all rules and practical guidelines are punctiliously obeyed, the traffic
flows freely and safely. Of course, this is far from an ideal world; accidents happen, and they are a
leading source of inefficiency, expense, and harm. When driving is conceived of as an engineering
problem solved by the fabrication of elaborate systems, accidents are human glitches in the machinery
of automobile transportation. So, lanes, signals, signs, traffic lights, etc.–all of it is designed to disci-
pline our driving so that it may resemble the smooth operation of machines that follow rules flaw-
lessly. The more machine-like our driving, the more efficient the system.

As an illustration of the basic principle, take UPS’s deployment of Orion, a complex algorithm de-
signed to plot out the best delivery route for drivers. “Driver reaction to Orion is mixed,” according to
a WSJ piece on the software,

“The experience can be frustrating for some who might not want to give up a degree of auton-
omy, or who might not follow Orion’s logic. For example, some drivers don’t understand why
it makes sense to deliver a package in one neighborhood in the morning, and come back to the
same area later in the day for another delivery. But Orion often can see a payoff, measured in
small amounts of time and money that the average person might not see.”

Commenting on this story at Marginal Revolution, Alex Tabarrok added, “Human drivers think
Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic
is indistinguishable from stupidity.” However we might frame the matter, it remains the case that,
given the logic of the system, the driver’s judgment is the glitch that needs to be eradicated to achieve
the best results.
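
The dynamic is easy to reproduce in miniature. The toy sketch below (my own illustration, in no way Orion’s actual code) pits the driver’s intuitive rule, always head to the nearest remaining stop, against an exhaustive search over every possible route:

    # A toy contrast between a driver's intuitive rule ("drive to the nearest
    # remaining stop") and the globally shortest route over made-up addresses.
    from itertools import permutations

    DEPOT = (0, 0)
    STOPS = {"A": (1, 0), "B": (-2, 0), "C": (5, 0)}  # hypothetical stops

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def route_length(order):
        points = [DEPOT] + [STOPS[s] for s in order]
        return sum(dist(a, b) for a, b in zip(points, points[1:]))

    def greedy_route():
        here, todo, order = DEPOT, set(STOPS), []
        while todo:
            nearest = min(todo, key=lambda s: dist(here, STOPS[s]))
            order.append(nearest)
            todo.remove(nearest)
            here = STOPS[nearest]
        return order

    print(greedy_route(), route_length(greedy_route()))  # ['A', 'B', 'C'] 11.0
    best = min(permutations(STOPS), key=route_length)
    print(list(best), route_length(best))                # ['B', 'A', 'C'] 9.0

Each greedy choice is locally sensible, yet the route comes out longer; scale this up to hundreds of stops and delivery windows and you have the payoff Orion sees, which is precisely the gap that makes its instructions feel illogical from the driver’s seat.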

Let’s consider the traffic light camera from this angle. Setting aside the not insignificant function of
raising municipal funds through increased ticketing and fines, traffic light cameras are designed to miti-
gate pesky and erratic human judgment. Always-on surveillance ensures that my actions are ever more
strictly synchronized with the vast technological system that orders automobile traffic. The traffic light
camera ensures that I am ever more fully assimilated into the logic of the system, to put it a bit too
grimly perhaps. The upside, of course, is the promise of ever greater efficiency and safety–the only val-
ues technological systems can recognize, of course.

Ultimately, however, we don’t make very good machines. We get drowsy and angry and drunk; we
are easily distracted and, even at our best, we can only take in a limited slice of the environment around
us. Enter self-driving cars and the promise of eliminating human error from the complex automotive
transportation system.

The trajectory that leads to self-driving cars was already envisioned years before the modern high-
way system was built. In the film that accompanied GM’s Futurama exhibit at the 1939 New York
World’s Fair, we see a model highway system of the future (1960), and we are told, beginning around
the 14:30 mark, “Traffic moves at unreduced rates of speed. Safe distance between cars is main-
tained by automatic radio control …. The keynote of this motorway? Safety. Safety with increased
speed.”

Most pitches I’ve heard for self-driving cars trade on the same idea: increased safety through auto-
mation, i.e., the elimination of human error. That, and increased screen time, because if you’re not hav-
ing to pay attention to the road, then you’re free to dive into your device of choice. Or look at the
scenery, if you’re quaint that way, but let’s be serious. Take, for example, the self-driving car Mer-
cedes-Benz displayed at this year’s CES, the F015: “For interaction within the vehicle, the passengers
rely on six display screens located around the cabin. They also interact with the vehicle through ges-
tures, eye-tracking or by touching the high-resolution screens.”

But setting the ubiquity of screens aside, we can extract a general principle from the trajectory I’ve
just sketched out.

In a system that works best the more machine-like we become, the human component becomes ex-
pendable as soon as a machine can outperform it. Or to put it another way, any system that encourages
machine-like behavior from its human components is a system poised to eventually eliminate the hu-
man element altogether. To give it another turn, we might frame it as a paradox of complexity. As hu-
man beings create powerful and complex technologies, they must design complex systemic environ-
ments to ensure their safe operation. These environments sustain further complexity by disciplining hu-
man actors to abide by the necessary parameters. Complexity is achieved by reducing human action to
the patterns of the system; consequently, there comes a point when further complexity can only be
achieved by discarding the human element altogether. When we design systems that work best the
more machine-like we become, we shouldn’t be surprised when the machines ultimately render us su-
perfluous.

Of course, it should be noted that, as per usual, the hype surrounding self-driving cars is just that.
Writing for Fortune, Nicholas Carr cited Ford’s chief engineer, Raj Nair, who, following his
boss’s promise of automated cars rolling off the assembly line in five years’ time, “explained that ‘full
automation’ would be possible only in limited circumstances, particularly ‘where high definition map-
ping is available along with favorable environmental conditions for the vehicle’s sensors.’” Carr added,
“While it may be relatively straightforward to design a car that can drive itself down a limit-
ed-access highway in good weather, programming it to navigate chaotic city or suburban streets
or to make its way through a snowstorm or a downpour poses much harder challenges. Many
engineers and automation experts believe it will take decades of further development to build a
completely autonomous car, and some warn that it may never happen, at least not without a
massive and very expensive overhaul of our road system.”

That said, the dream of full automation will probably direct research and development for years to
come, and we will continue to see incremental steps in that direction. In retrospect, one of those steps
will have been the advent of traffic light cameras, not because it advanced the technology of self-driv-
ing cars, but because it prepared us to assent to the assumption that we would be ultimately expendable.
The point, then, of this rambling post might be put this way: Our attitude toward new technologies may
be less a matter of conscious thought than of tacit assumptions internalized through practices and
habits.

February 20, 2015


64. Technology, Moral Discourse, and Political Communities
According to Langdon Winner, neither ancient nor modern culture has been able to bring politics
and technology together. Classical culture because of its propensity to look down its nose, ontologi-
cally speaking, at the mechanical arts and manual labor. Modern culture because of its relegation of
science and technology to the private sphere and its assumptions about the nature of technological
progress.

The assumptions about technological progress that Winner alludes to in his article are of the sort that
I’ve grouped under the Borg Complex. Fundamentally, they are assumptions about the inevitability and
unalloyed goodness of technological progress. If technological development is inevitable, for better or
for worse, then there is little use deliberating about it.

Interestingly, Winner elaborates his point by reference to the work of moral philosopher Alasdair
MacIntyre. In his now classic work, After Virtue: A Study in Moral Theory, MacIntyre argued that con-
temporary moral discourse consistently devolves into acrimonious invective because it proceeds in the
absence of a shared moral community or tradition.

Early in After Virtue, MacIntyre imagines a handful of typical moral debates that we are accustomed
to hearing about or participating in. The sort of debates that convince no one to change their minds, and
the sort, as well, in which both sides are convinced of the rationality of their position and the irrational-
ity of their opponents’. Part of what MacIntyre argues is that neither side is necessarily more rational
than the other. The problem is that the reasoning of both sides proceeds from incommensurable sets of
moral communities, traditions, and social practices. In the absence of a shared moral vision that contex-
tualizes specific moral claims and frames moral arguments there can be no meaningful moral discourse,
only assertions and counter-assertions made with more or less civility.

Here is how Winner brings MacIntyre into his discussion:


“Another characteristic of contemporary discussions about technology policy is that, as Alasdair
MacIntyre might have predicted, they involve what seem to be interminable moral controver-
sies. In a typical dispute, one side offers policy proposals based upon what seem to be ethically
sound moral arguments. The opposing side urges entirely different policies using arguments
that appear equally well-grounded. The likelihood that the two (or more) sides can locate com-
mon ground is virtually nil.”

Winner then goes on to provide his own examples of how such seemingly fruitless debates play out.
For instance,

“1a. Conditions of international competitiveness require measures to reduce production costs.
Automation realized through the computerization of office and factory work is clearly the best
way to do this at present. Even though it involves eliminating jobs, rapid automation is the way
to achieve the greatest good for the greatest number in advanced industrial societies.

1b. The strength of any economy depends upon the skills of people who actually do the work.
Skills of this kind arise from traditions of practice handed down from one generation to the
next. Automation that de-skills the work process ought to be rejected because it undermines the
well-being of workers and harms their ability to contribute to society.”

“In this way,” Winner adds, “debates about technology policy confirm MacIntyre’s argument that
modern societies lack the kinds of coherent social practice that might provide firm foundations for
moral judgments and public policies.”

Again, the problem is not simply a breakdown of moral discourse; it is also the absence of a political
community of public deliberation and action in which moral discourse might take shape and find trac-
tion. Again, Winner:

“[…] the trouble is not that we lack good arguments and theories, but rather that modern poli-
tics simply does not provide appropriate roles and institutions in which the goal of defining the
common good in technology policy is a legitimate project.”
The exception that proves Winner’s rule is, I think, the Amish. Granted, of course, that the scale and
complexity of modern society are hardly comparable to those of an Amish community. That said, it is nonethe-
less instructive to appreciate Amish communities as tangible, lived examples of what it might look like
to live in a political community whose moral traditions circumscribed the development of technology.

By contrast, as Winner put it in the title of one of his books, in modern society “technics-out-of-con-
trol” is a theme of political thought. It is a cliché for us to observe that technology barrels ahead leav-
ing ethics and law a generation behind.

Given those two alternatives, it is not altogether unreasonable for someone to conclude that they
would rather live with the promise and peril of modern technology than live within the con-
straints imposed by an Amish-style community. Fair enough. It’s worth wondering, however, whether
our alternatives are, in fact, quite so stark.

In any case, Winner raises, as I see it, two important considerations. Our thinking about technology,
if it is to be about more than private action, must reckon with the larger moral traditions, the sometimes
unarticulated and unacknowledged visions of the good life, that frame our evaluations of technology. It
must also find some way of reconstituting meaningful political contexts for acting. Basically, then,
we are talking not only about technology, but about democracy itself.

February 2, 2014
65. Machines, Work, and the Value of People

In April of 2015, Microsoft released a “bot” that guesses your age based on an uploaded picture.
The bot tended to be only marginally accurate and sometimes hilariously (or disconcertingly) wrong.
What’s more, people quickly began having some fun with the program by uploading faces of actors
playing fictional characters, such as Yoda or Gandalf.

Shortly after the How Old bot had its fleeting moment of virality, Nathan Jurgenson tweeted the fol-
lowing:

interesting that the microsoft How Old bot went *more* viral because it wasn’t good. it being
right is boring, it being wrong is shareable

— nathanjurgenson (@nathanjurgenson) May 1, 2015

This was an interesting observation, and it generated a few interesting replies. Jurgenson himself
added, “much of the bigdata/algorithm debates miss how poor these often perform. many critiques pre-
suppose & reify their untenable positivism.” He summed up this line of thought with this tweet: “so
much ‘tech criticism’ starts first with uncritically buying all of the hype silicon valley spits out.”

Let’s pause here for a moment. All of this is absolutely true. Yet … it’s not all hype, not necessarily
anyway. Let’s bracket the more outlandish claims made by the singularity crowd, of course. But take
facial recognition software, for instance. It doesn’t strike me as wildly implausible that in the near fu-
ture facial recognition programs will achieve a rather striking degree of accuracy.

Along these lines, I found Kyle Wrather’s replies to Jurgenson’s tweet particularly interesting. First,
Wrather noted, “[How Old Bot] being wrong makes people more comfortable w/ facial recognition b/c
it seems less threatening.” He then added, “I think people would be creeped out if we’re totally accu-
rate. When it’s wrong, humans get to be ‘superior.’”

Wrather’s second comment points to an intriguing psychological dynamic. Certain technologies
generate a degree of anxiety about the relative status of human beings or about what exactly makes hu-
man beings “special”—call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was
greeted with awe and a little battiness (consider alti-man). But as far as I know, it did not result in any
widespread fears about the nature and status of human beings. The seemingly obvious reason for this is
that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assump-
tions about the nature or dignity of humanity. In other words, the fear that machines, computers, or ro-
bots might displace human beings may or may not materialize, but it does tell us something about how
human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it
the case that these beliefs arise in response to a new perceived threat posed by technology? I’m not en-
tirely sure, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the
dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at
this point in history, manual labor. The dignity of the manual laborer is later challenged by mechaniza-
tion during the 18th and 19th centuries, and this results in a series of protest movements, most fa-
mously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, chal-
lenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by ad-
vances in computers and AI.

I think this latter development helps explain our present fascination with creativity. It’s been over a
decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating
about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recal-
ibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is
taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to
knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the
form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dig-
nity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are be-
ing replaced. If a machine can do it, it suddenly becomes sub-human work.

It’s not hard to find these rhetorical dynamics at play in the countless presently unfolding discus-
sions of technology, labor, and what human beings are for. Take as just one example this excerpt from
the recent New Yorker profile of venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes
out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American
jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating
jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in pro-
gramming) and Honor (which aims to provide better and better-paid in-home care for the el-
derly) bring us closer to a future in which everyone will either be doing more interesting
work or be kicking back and painting sunsets. But when I brought up the raft of data sug-
gesting that intra-country inequality is in fact increasing, even as it decreases when averaged
across the globe—America’s wealth gap is the widest it’s been since the government began
measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills prob-
lem,” and that as robots ate the old, boring jobs humanity should simply retool. “My re-
sponse to Larry Summers, when he says that people are like horses, they have only their manual
labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of hu-
manity I can hardly stand it!”

As always, it is important to ask a series of questions: Who’s selling what? Who stands to profit?
Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has
suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologi-
cally driven unemployment have ordinarily been met by assurances that different and better jobs would
emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless
leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for
chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

So, to sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productiv-
ity. Consequently, each new technological displacement of human work appears to those being dis-
placed as an affront to their dignity as human beings. Those advancing new technologies that dis-
place human labor do so by demeaning existing work as below our humanity and promising more hu-
mane work as a consequence of technological change. While this is sometimes true–some work that
human beings have been forced to perform has been inhuman–deployed as a universal truth, it is little
more than rhetorical cover for a significantly more complex and ambivalent reality.

May 20, 2015


66. A Lost World

Human beings have two ways, generally speaking, of going about the business of living with one
another: through speech or violence. One of the comforting stories we tell each other about the modern
world is that we have, for the most part, set violence aside. Indeed, one of modernity’s founding myths
is that it arose as a rational alternative to the inevitable violence of a religious and unenlightened world.
The truth of the matter is more complicated, of course. In any case, we would do well to recall that it
was popularly believed at the turn of the twentieth century that western civilization had seen the end of
large scale conflict among nations.

Setting to one side the historical validity of modernity’s myths, let us at least acknowledge that a so-
cial order grounded in the power of speech is a precarious one. Speech can be powerful, but it is also
fragile. It requires hospitable structures and institutions that are able to sustain the possibility of intelli-
gibility, meaning, and action–all of which are necessary in order for a political order premised on
debate and deliberation to exist and flourish. This is why emerging technologies of the word–writ-
ing, the printing press, the television, the Internet–always adumbrate profound political and cultural
transformations.

A crisis of the word can all too easily become a political crisis. This insight, which we might asso-
ciate with George Orwell, is, in fact, ancient.

Consider the following: “To fit in with the change of events, words, too, had to change their usual
meanings. What used to be described as a thoughtless act of aggression was now regarded as the
courage one would expect to find in a party member,” so wrote not Orwell but Thucydides, late in the
fifth century BC. He goes on as follows:

“… to think of the future and wait was merely another way of saying one was a coward; any idea
of moderation was just an attempt to disguise one’s unmanly character; ability to understand a
question from all sides meant that one was totally unfitted for action. Fanatical enthusiasm was
the mark of a real man, and to plot against an enemy behind his back was perfectly legitimate
self-defense. Anyone who held violent opinions could always be trusted, and anyone who ob-
jected to them became a suspect. To plot successfully was a sign of intelligence, but it was still
cleverer to see that a plot was hatching. If one attempted to provide against having to do either,
one was disrupting the unity of the party and acting out of fear of the opposition. In short, it was
equally praiseworthy to get one’s blow in first against someone who was going to do wrong,
and to denounce someone who had no intention of doing any wrong at all. Family relations
were a weaker tie than party membership, since party members were more ready to go to any
extreme for any reason whatever. These parties were not formed to enjoy the benefit of the es-
tablished laws, but to acquire power by overthrowing the existing regime; and the members of
these parties felt confidence in each other not because of any fellowship in a religious commu-
nion, but because they were partners in crime.”

I came across a portion of this paragraph on two separate occasions during the past week or two,
first in a tweet and then again while reading Alasdair MacIntyre’s A Short History of Ethics.

The passage, taken from Thucydides’ The History of the Peloponnesian War, speaks with arresting
power to our present state of affairs. We should note, however, that what Thucydides is describing is
not primarily a situation of pervasive deceitfulness, one in which people knowingly betray the ordinary
and commonly accepted meaning of a word. Rather, it is a situation in which moral evaluations them-
selves have shifted. It is not that some people now lied and called an act of thoughtless aggression a
courageous act. It is that what had before been commonly judged to be an act of thoughtless aggression
was now judged by some to be a courageous act. In other words, it would appear that in very short or-
der, moral judgments and the moral vocabulary in which they were expressed shifted dramatically.

It brings to mind Hannah Arendt’s frequent observation about how quickly the self-evidence of
long-standing moral principles was overturned in Nazi Germany: “… it was as though morality sud-
denly stood revealed in the original meaning of the word, as a set of mores, customs and manners,
which could be exchanged for another set with hardly more trouble than it would take to change the ta-
ble manners of an individual or a people.”

It is shortsighted, at this juncture, to ask how we can find agreement or even compromise. We do
not, now, even know how to disagree well; nothing like an argument in the traditional sense is being
had. It is an open question whether anyone can even be said to be speaking intelligibly to anyone who
does not already fully agree with their positions and premises. The common world that is both the con-
dition of speech and its gift to us is withering away. A rift has opened up in our political culture that
will not be mended until we figure out how to reconstruct the conditions under which speech can once
again become meaningful. Until then, I fear, the worst is still before us.

January 29, 2017


67. Finding A Place for Thought

Recently, Nathan Jurgenson tweeted a short thread commenting on what Twitter has become. “[I]t
feels weird to tweet about things that arent news lately,” Jurgenson noted. A sociologist with an interest
in social media and identity, he found that it felt rude to tweet about his interests when his feed seemed
only concerned with the latest political news. About this tendency Jurgenson wisely observed, “follow-
ing the news all day is the opposite of being more informed and it certainly isnt a kind ‘resistance.’”

These observations resonated with me. I’ve had a similar experience when logging in to Twitter,
only to find that Twitter is fixated on the political (pseudo-)event of the moment. In those moments it
seems if not rude then at least quixotic to link to a post that is not at all related to what everyone hap-
pens to be talking about. Sometimes, of course, it is not only political news that has this effect, it is also
the all too frequent tragedy that can consume Twitter’s collective attention or the frivolous faux-contro-
versy, etc.

In moments like these Twitter — and the same is true to some degree of other platforms — demands
something of us, but it is not thought. It demands a reaction, one that is swift, emotionally charged, and
in keeping with the affective tenor of the platform. In many respects, this entails not only an absence of
thought but conditions that are overtly hostile to thought.

Even apart from crises, controversies, and tragedies, however, the effect is consistent: the focus is in-
exorably on the fleeting present. The past has no hold, the future does not come into play. Our time is
now, our place is everywhere. Of course, social media has only heightened a tendency critics have
noted since at least Kierkegaard’s time. To be well-informed, in the sense of keeping up with current events, under-
mines the possibility of serious thinking, mature emotional responses, sound judgment, and wise ac-
tion.

It is important, in my view, to make clear that this is not merely a problem of information overload.
If it were only information we were dealing with, then we might be better able to recognize the nature
of the problem and act to correct it. It is also, as I’ve noted briefly before, an affect overload problem.
It is the emotional register that accounts for the Pavlovian alacrity with which we attend to our devices
and the digital flows for which they are a portal. These devices, then, are, in effect, Skinner boxes we
willingly inhabit that condition our cognitive and emotional lives. Twitter says “feel this,” we say “how
intensely?” Social media never invites us to step away, to think and reflect, to remain silent, to refuse a
response for now or maybe indefinitely.

Under these circumstances, there is no place for thought.

For the sake of the world, thought must, at least for a time, take leave of the world, especially the
world mediated to us by social media. We must, in other words, by deliberate action, make a place for
thought.

Many within the tech industry are coming to a belated sense of responsibility for this world they
helped fashion. A recent article in the Guardian tells their story. They include Justin Rosenstein, who
helped design the “Like” button for Facebook but now realizes that it is common “for humans to de-
velop things with the best of intentions and for them to have unintended, negative consequences” and
James Williams, who worked on analytics for Google but who experienced an epiphany “when he no-
ticed he was surrounded by technology that was inhibiting him from concentrating on the things he
wanted to focus on.”

Better late than never one might say, or perhaps it is too late. As per usual, there is a bit of ancient
wisdom that speaks to the situation. In this case, the story of Pandora’s Box comes to mind. Nonethe-
less, when so many in the industry seem bent on evading responsibility for the consequences of their
work, it is mildly refreshing to read about some who are at least willing to own the consequences of
their work and even strive to somehow make amends.

It is telling, though, that, as the article observes, “These refuseniks are rarely founders or chief exec-
utives, who have little incentive to deviate from the mantra that their companies are making the world a
better place. Instead, they tend to have worked a rung or two down the corporate ladder: designers, en-
gineers and product managers who, like Rosenstein, several years ago put in place the building blocks
of a digital world from which they are now trying to disentangle themselves.”

Tristan Harris, formerly at Google, has been especially pointed in his criticism of the tech industry’s penchant for addictive design. Perhaps the most instructive part of Harris’s story is how he experienced a promotion to an ethics position within Google as, in effect, a marginalization and silencing.

Informed as my own thinking has been by the work of Hannah Arendt, I see this hostility to thought
as a serious threat to our society. Arendt believed that thinking was somehow intimately related to our
moral judgment and an inability to think a gateway to grave evils. Of course, it was a particular kind of
thinking that Arendt had in mind–thinking, one might say, for thinking’s sake. Or, thinking that was
devoid of instrumentality.

Writing in Aeon recently, Jennifer Stitt drew on Arendt to argue for the importance of solitude for
thought and thought for conscience and conscience for politics. As Stitt notes, Arendt believed that
“living together with others begins with living together with oneself.” Here is Stitt’s concluding para-
graph:

But, Arendt reminds us, if we lose our capacity for solitude, our ability to be alone with our-
selves, then we lose our very ability to think. We risk getting caught up in the crowd. We risk
being ‘swept away’, as she put it, ‘by what everybody else does and believes in’ – no longer
able, in the cage of thoughtless conformity, to distinguish ‘right from wrong, beautiful from
ugly’. Solitude is not only a state of mind essential to the development of an individual’s con-
sciousness – and conscience – but also a practice that prepares one for participation in social
and political life.

Solitude, then, is at least one practice that can help create a place for thought.

Paradoxically, in a connected world it is challenging to find either solitude or companionship. If we submit to a regime of constant connectivity, we end up with hybrid versions of both, versions which fail to yield their full satisfactions.

Additionally, as someone who works one and a half jobs and is also raising a toddler and an infant, I
understand how hard it can be to find anything approaching solitude. In a real sense it is a luxury, but it
is a necessary luxury and if the world won’t offer it freely then we must fight for it as best we can.
There was one thing left in Pandora’s Box after all the evils had flown irreversibly into the world: it
was hope.

October 9, 2017
68. Resisting the Habits of the Algorithmic Mind

Algorithms, we are told, “rule our world.” They are ubiquitous. They lurk in the shadows, shaping
our lives without our consent. They may revoke your driver’s license, determine whether you get your
next job, or cause the stock market to crash. More worrisome still, they can also be the arbiters of lethal
violence. No wonder one scholar has dubbed 2015 “the year we get creeped out by algorithms.” While
some worry about the power of algorithms, others think we are in danger of overstating their signifi-
cance or misunderstanding their nature. Some have even complained that we are treating algorithms
like gods whose fickle, inscrutable wills control our destinies.

Clearly, it’s important that we grapple with the power of algorithms, real and imagined, but where
do we start? It might help to disambiguate a few related concepts that tend to get lumped together when
the word algorithm (or the phrase “Big Data”) functions more as a master metaphor than a concrete
noun. I would suggest that we distinguish at least three realities: data, algorithms, and devices. Through
the use of our devices we generate massive amounts of data, which would be useless were it not for an-
alytical tools, algorithms prominent among them. It may be useful to consider each of these separately;
at least we should be mindful of the distinctions.

We should also pay some attention to the language we use to identify and understand algorithms. As
Ian Bogost has forcefully argued, we should certainly avoid implicitly deifying algorithms by how we
talk about them. But even some of our more mundane metaphors are not without their own difficulties.
In a series of posts at The Infernal Machine, Kevin Hamilton considers the implications of the popular
“black box” metaphor and how it encourages us to think about and respond to algorithms.

The black box metaphor tries to get at the opacity of algorithmic processes. Inputs are transformed
into outputs, but most of us have no idea how the transformation was effected. More concretely, you
may have been denied a loan or job based on the determinations of a program running an algorithm, but
how exactly that determination was made remains a mystery.
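
A trivial sketch, with weights and a threshold invented purely for illustration, captures how thin the experience is from the outside; the applicant sees a verdict, never the transformation:

    # A made-up scoring model: from outside, only input and verdict are visible.
    HIDDEN_WEIGHTS = {"income": 0.4, "years_employed": 2.0, "zip_code_risk": -15.0}

    def loan_decision(applicant: dict) -> str:
        score = sum(HIDDEN_WEIGHTS[k] * applicant.get(k, 0.0) for k in HIDDEN_WEIGHTS)
        return "approved" if score > 50.0 else "denied"  # opaque threshold

    # The applicant learns the output, never the why: was it income, tenure,
    # or an inscrutable zip-code factor that tipped the decision?
    print(loan_decision({"income": 120, "years_employed": 3, "zip_code_risk": 0.2}))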

In his discussion of the black box metaphor, Hamilton invites us to consider the following scenario:
“Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social me-
dia platform. The process by which her content appears in others’ feeds, or by which others’
material appears in her own, is opaque to her. Approaching that process as a black box, might
well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era.
Prior to awareness, she blindly accepts input and provides output in the manufacture of Face-
book’s product. Upon learning of the algorithm, she experiences the platform’s process as
newly mediated. Like the post-war user, she now imagines herself outside the system, or strives
to be so. She tweaks settings, probes to see what she has missed, alters activity to test effective-
ness. She grasps at a newly-found potential to stand outside this system, to command it. We
have a tendency to declare this a discovery of agency—a revelation even.”

But how effective is this new way of approaching her engagement with Facebook, now informed by
the black box metaphor? Hamilton thinks “this grasp toward agency is also the beginning of a new sys-
tem.” “Tweaking to account for black-boxed algorithmic processes,” Hamilton suggests, “could be-
come a new form of labor, one that might then inevitably find description by some as its own black
box, and one to escape.” Ultimately, Hamilton concludes, “most of us are stuck in an ‘opt-in or opt-out’
scenario that never goes anywhere.”

If I read him correctly, Hamilton is describing an escalating, never-ending battle to achieve a variety
of desired outcomes in relation to the algorithmic system, all of which involve securing some kind of
independence from the system, which we now understand as something standing apart and against us.
One of those outcomes may be understood as the state Evan Selinger and Woodrow Hartzog have
called obscurity, “the idea that when information is hard to obtain or understand, it is, to some degree,
safe.” “Obscurity,” in their view, “is a protective state that can further a number of goals, such as au-
tonomy, self-fulfillment, socialization, and relative freedom from the abuse of power.”

Another desired outcome that fuels resistance to black box algorithms involves what we might sum
up as the quest for authenticity. Whatever relative success algorithms achieve in predicting our likes
and dislikes, our actions, our desires–such successes are often experienced as an affront to our individ-
uality and autonomy. Ironically, the resulting battle against the algorithm often secures its rela-
tive victory by fostering what Frank Pasquale has called the algorithmic self, constantly modulating it-
self in response/reaction to the algorithms it encounters.

More recently, Quinn Norton expressed similar concerns from a slightly different angle: “Your in-
ternet experience isn’t the main result of algorithms built on surveillance data; you are. Humans are
beautifully plastic, endlessly adaptable, and over time advertisers can use that fact to make you into
whatever they were hired to make you be.”

Algorithms and the Banality of Evil

These concerns about privacy or obscurity on the one hand and agency or authenticity on the other
are far from insignificant. Moving forward, though, I will propose another approach to the challenges
posed by algorithmic culture, and I’ll do so with a little help from Joseph Conrad and Hannah Arendt.

In Conrad’s Heart of Darkness, as the narrator, Marlow, makes his way down the western coast of
Africa toward the mouth of the Congo River in the service of a Belgian trading company, he spots a
warship anchored not far from shore: “There wasn’t even a shed there,” he remembers, “and she was
shelling the bush.”

“In the empty immensity of earth, sky, and water,” he goes on, “there she was, incomprehensible,
firing into a continent …. and nothing happened. Nothing could happen.” “There was a touch of insan-
ity in the proceeding,” he concluded. This curious and disturbing sight is the first of three such cases
encountered by Marlow in quick succession.

Not long after he arrived at the Company’s station, Marlow heard a loud horn and then saw natives
scurry away just before witnessing an explosion on the mountainside: “No change appeared on the face
of the rock. They were building a railway. The cliff was not in the way of anything; but this objectless
blasting was all the work that was going on.”

These two instances of seemingly absurd, arbitrary action are followed by a third. Walking along the
station’s grounds, Marlow “avoided a vast artificial hole somebody had been digging on the slope, the
purpose of which I found it impossible to divine.” As they say: two is a coincidence; three’s a pattern.

Nestled among these cases of mindless, meaningless action, we encounter as well another kind of
related thoughtlessness. The seemingly aimless shelling he witnessed at sea, Marlow is assured, tar-
geted an unseen camp of natives. Registering the incongruity, Marlow exclaims, “he called them ene-
mies!” Later, Marlow recalls the shelling off the coastline when he observed the natives scampering
clear of each blast on the mountainside: “but these men could by no stretch of the imagination be called
enemies. They were called criminals, and the outraged law, like the bursting shells, had come to them,
an insoluble mystery from the sea.”

Taken together these incidents convey a principle: thoughtlessness couples with ideology to abet vi-
olent oppression. We’ll come back to that principle in a moment, but, before doing so, consider two
more passages from the novel. Just before that third case of mindless action, Marlow reflected on the
peculiar nature of the evil he was encountering:

“I’ve seen the devil of violence, and the devil of greed, and the devil of hot desire; but, by all
the stars! these were strong, lusty, red-eyed devils, that swayed and drove men–men, I tell you.
But as I stood on this hillside, I foresaw that in the blinding sunshine of that land I would be-
come acquainted with a flabby, pretending, weak-eyed devil of rapacious and pitiless folly.”

Finally, although more illustrations could be adduced, after an exchange with an insipid, chatty
company functionary, who is also an acolyte of Mr. Kurtz, Marlow had this to say: “I let him run on,
the papier-mâché Mephistopheles, and it seemed to me that if I tried I could poke my forefinger
through him, and would find nothing inside but a little loose dirt, maybe.”

That sentence, to my mind, most readily explains why T.S. Eliot chose as an epigraph for his 1925
poem, “The Hollow Men,” a line from Heart of Darkness: “Mistah Kurtz – he dead.” This is likely an
idiosyncratic reading, so take it with the requisite grain of salt, but I take Conrad’s papier-mâché
Mephistopheles to be of a piece with Eliot’s hollow men, who having died are remembered “Not as lost / Violent souls, but only / As the hollow men / The stuffed men.”

For his part, Conrad understood that these hollow men, these flabby devils were still capable of im-
mense mischief. Within the world as it is administered by the Company, there is a great deal of doing
but very little thinking or understanding. Under these circumstances, men are characterized by a thor-
oughgoing superficiality that renders them willing, if not altogether motivated, participants in the Com-
pany’s depredations. Conrad, in fact, seems to have intuited the peculiar dangers posed by bureaucratic
anomie and anticipated something like what Hannah Arendt later sought to capture in her (in)famous
formulation, “the banality of evil.”

If you are familiar with the concept of the banality of evil, you know that Arendt conceived of it as a
way of characterizing the kind of evil embodied by Adolf Eichmann, a leading architect of the Holo-
caust, and you may now be wondering if I’m preparing to argue that algorithms will somehow facilitate
another mass extermination of human beings.

Not exactly. I am circumspectly suggesting that the habits of the algorithmic mind are not altogether
unlike the habits of the bureaucratic mind. (Adam Elkus makes a similar correlation, but I think
I’m aiming at a slightly different target.) Both are characterized by an unthinking automaticity, a nar-
rowness of focus, and a refusal of responsibility that yields the superficiality or hollowness Conrad,
Eliot, and Arendt all seem to be describing, each in their own way. And this superficiality or hollow-
ness is too easily filled with mischief and cruelty.

While Eichmann in Jerusalem is mostly remembered for that one phrase (and also for the
controversy the book engendered), “the banality of evil” appears, by my count, only once in the book.
Arendt later regretted using the phrase, and it has been widely misunderstood. Nonetheless, I think
there is some value to it, or at least to the condition that it sought to elucidate. Happily, Arendt returned
to the theme in a later, unfinished work, The Life of the Mind.

Eichmann’s trial continued to haunt Arendt. In the Introduction, Arendt explained that the impetus
for the lectures that would become The Life of the Mind stemmed from the Eichmann trial. She admits
that in referring to the banality of evil she “held no thesis or doctrine,” but she now returns to the nature
of evil embodied by Eichmann in a renewed attempt to understand it: “The deeds were monstrous, but
the doer … was quite ordinary, commonplace, and neither demonic nor monstrous.” She might have
added: “… if I tried I could poke my forefinger through him, and would find nothing inside but a little
loose dirt, maybe.”

There was only one “notable characteristic” that stood out to Arendt: “it was not stupidity
but thoughtlessness.” Arendt’s close friend, Mary McCarthy, felt that this word choice was unfortunate.
“Inability to think” rather than thoughtlessness, McCarthy believed, was closer to the sense of the Ger-
man word Gedankenlosigkeit.

Later in the Introduction, Arendt insisted “absence of thought is not stupidity; it can be found in
highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that
wickedness may be caused by absence of thought.”

Arendt explained that it was this “absence of thinking–which is so ordinary an experience in our ev-
eryday life, where we have hardly the time, let alone the inclination, to stop and think–that awakened
my interest.” And it posed a series of related questions that Arendt sought to address:

“Is evil-doing (the sins of omission, as well as the sins of commission) possible in default of not
just ‘base motives’ (as the law calls them) but of any motives whatever, of any particular
prompting of interest or volition?”

“Might the problem of good and evil, our faculty for telling right from wrong, be connected
with our faculty of thought?”

All told, Arendt arrived at this final formulation of the question that drove her inquiry: “Could the
activity of thinking as such, the habit of examining whatever happens to come to pass or to attract at-
tention, regardless of results and specific content, could this activity be among the conditions that make
men abstain from evil-doing or even actually ‘condition’ them against it?”

It is with these questions in mind–questions, mind you, not answers–that I want to return to the sub-
ject with which we began, algorithms.

Outsourcing the Life of the Mind

Momentarily considered apart from data collection and the devices that enable it, algorithms are
principally problem-solving tools. They solve problems that ordinarily require cognitive labor–thought,
decision making, judgment. It is these very activities–thinking, willing, and judging–that structure
Arendt’s work in The Life of the Mind. So, to borrow the language that Evan Selinger has deployed so
effectively in his critique of contemporary technology, we might say that algorithms outsource the life
of the mind. And, if Arendt is right, this outsourcing of the life of the mind is morally consequential.

The outsourcing problem is at the root of much of our unease with contemporary technology. Ma-
chines have always done things for us, and they are increasingly doing things for us and without us. In-
creasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper
technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of
course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice
at the marginalization or eradication of human labor in its physical, mental, emotional, and moral mani-
festations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a
golden age of leisure. Critics, meanwhile, and I count myself among them, struggle to articulate a com-
pelling and reasonable critique of this scramble to outsource various dimensions of the human experi-
ence.

But perhaps we have ignored another dimension of the problem, one that the outsourcing critique it-
self might, possibly, encourage. Consider this: to say that algorithms are displacing the life of the mind
is to unwittingly endorse a terribly impoverished account of the life of the mind. For instance, if I were
to argue that the ability to “Google” whatever bit of information we happen to need when we need it
leads to an unfortunate “outsourcing” of our memory, it may be that I am already giving up the game
because I am implicitly granting that a real equivalence exists between all that is entailed by human
memory and the ability to digitally store and access information. A moment’s reflection, of course, will
reveal that human remembering involves considerably more than the mere retrieval of discrete bits of
data. The outsourcing critique, then, valuable as it is, must also challenge the assumption that the out-
sourcing occurs without remainder.

Viewed in this light, the problem with outsourcing the life of the mind is that it encourages an im-
poverished conception of what constitutes the life of the mind in the first place. Outsourcing, then,
threatens our ability to think not only because some of our “thinking” will be done for us, but also
because, if we are not careful, we will be habituated into conceiving of the life of the mind on the
model of the problem-solving algorithm. We would thereby surrender the kind of thinking that Arendt
sought to describe and defend, thinking that might “condition” us against the varieties of evil that tran-
spire in environments of pervasive thoughtlessness.

In our responses to the concerns raised by algorithmic culture, we tend to ask, What can we do? Perhaps
this is already to miss the point by conceiving of the matter as a problem to be solved by some-
thing like a technical solution. Perhaps the most important and powerful response is not an action we
take but rather an increased devotion to the life of the mind. The phrase sounds quaint, or, worse, elit-
ist. As Arendt meant it, it was neither. Indeed, Arendt was convinced that if thinking was somehow es-
sential to moral action, it must be accessible to all: “If […] the ability to tell right from wrong should
turn out to have anything to do with the ability to think, then we must be able to ‘demand’ its exercise
from every sane person, no matter how erudite or ignorant, intelligent or stupid, he may happen to be.”

And how might we pursue the life of the mind? Perhaps the first, modest step in that direction is
simply the cultivation of times and spaces for thinking, and perhaps also resisting the urge to check if
there is an app for that.

June 3, 2015
69. Facebook Doesn’t Care About Your Children

Facebook is coming for your children.

Is that framing too stark? Maybe it’s not stark enough.

Facebook recently introduced Messenger Kids, a version of their Messenger app designed for six to
twelve year olds. Antigone Davis, Facebook’s Public Policy Director and Global Head of Safety,
wrote a blog post introducing Messenger Kids and assuring parents the app is safe for kids.

“We created an advisory board of experts,” Davis informs us. “With them, we are considering im-
portant questions like: Is there a ‘right age’ to introduce kids to the digital world? Is technology good
for kids, or is it having adverse effects on their social skills and health? And perhaps most pressing of
all: do we know the long-term effects of screen time?”

The very next line of Davis’s post reads, “Today we’re rolling out our US preview of Messenger
Kids.”

Translation: We hired a bunch of people to ask important questions. We have no idea what the an-
swers may be, but we built this app anyway.

Davis doesn’t even attempt to fudge an answer to those questions. She raises them and never comes
back to them again. In fact, she explicitly acknowledges “we know there are still a lot of unanswered
questions about the impact of specific technologies on children’s development.” But you know, what-
ever.

Naturally, we’re presented with statistics about the rates at which children under 13 use the Internet,
Internet-enabled devices, and social media. It’s an argument from presumed inevitability. Kids are going to
be online whether you like it or not, so they might as well use our product. More about this in a mo-
ment.

We’re also told that parents are anxious about their kids’ safety online. Chiefly, this amounts to con-
cerns about privacy or online predators. Valid concerns, of course, and Facebook promises to give par-
ents control over their kids’ online activity. However, safety, in this sense, is not the only concern we
should have. A perfectly safe technology may nonetheless have detrimental consequences for our intel-
lectual, moral, and emotional well-being and for the well-being of society when the technology’s ef-
fects are widely dispersed.

Finally, we’re given five principles Facebook and its advisory board developed in order to guide the
development of their suite of products for children. These are largely meaningless sentences composed
of platitudes and buzzwords.

Let’s not forget that this is the same company that “offered advertisers the opportunity to target 6.4
million younger users, some only 14 years old, during moments of psychological vulnerability, such as
when they felt ‘worthless,’ ‘insecure,’ ‘stressed,’ ‘defeated,’ ‘anxious,’ and like a ‘failure.’”

Facebook doesn’t care about your children. Facebook cares about your children’s data.

As Wired reported, “The company will collect the content of children’s messages, photos they send,
what features they use on the app, and information about the device they use.”

There are no ads on Messenger Kids, the company is quick to point out. “For now,” I’m tempted to
add. Barriers of this sort tend to erode over time. Moreover, even if the barrier holds, an end game re-
mains.

“If they are weaned on Google and Facebook,” Jeffrey Chester, executive director for the Center of
Digital Democracy, warns, “you have socialized them to use your service when they become an adult.
On the one hand it’s diabolical and on the other hand it’s how corporations work.”

Facebook’s interest in producing an app for children appears to be a part of a larger trend. “Tech
companies have made a much more aggressive push into targeting younger users,” the same Wired arti-
cle noted, “a strategy that began in earnest in 2015 when Google launched YouTube Kids, which in-
cludes advertising.”

In truth, I think this is about more than just Facebook. It’s about thinking more carefully about how
technology shapes our children and their experience. It is about refusing the rhetoric of inevitability
and assuming responsibility.

Look, what if there is no safe way for seven-year-olds to use social media or even the Internet and
Internet-enabled devices? I realize this may sound like a head-in-the-sand overreaction, and maybe it
is, but perhaps it’s worth contemplating the question.

I also realize I’m treading on sensitive ground here, and I want to proceed with care. The last thing
over-worked, under-supported parents need is something more to feel guilty about. Let’s forget the
guilt. We’re all trying to do our best. Let’s just think together about this stuff.

As adults, we’ve barely got a handle on the digital world. We know devices and apps and platforms
are designed to capture and hold attention in a manner that is intellectually and emotionally unhealthy.
We know that these design choices are not made with the user’s best interest in mind. We are only now
beginning to recognize the personal and social costs of our uncritical embrace of constant connectivity
and social media. How eager should we be to usher our children into this reality?

The reality is upon them whether we like it or not, someone might counter. Maybe, but I don’t quite
buy it. Even if it is, the degree to which this is the case will certainly vary based in large part upon the
choices parents make and their resolve.

Part of our problem is that we think too narrowly about technology, almost always in terms of func-
tionality and safety. With regards to children, this amounts to safeguarding against offensive content,
against exploitation, and against would-be predators. Again, these are valid concerns, but they do not
exhaust the range of questions we should be asking about how children relate to digital media and de-
vices.

To be clear, this is not only about preventing “bad things” from happening. It is also a question of
the good we want to pursue.

Our disordered relationship with technology is often a product of treating technology as an end
rather than a means. Our default setting is to uncritically adopt and ask questions later if at all. We
need, instead, to clearly discern the ends we want to pursue and evaluate technology accordingly, espe-
cially when it comes to our children because in this, as in so much else, they depend on us.

Some time ago, I put together a list of 41 questions to guide our thinking about the ethical dimen-
sions of technology. These questions are a useful way of examining not only the technology we use but
also the technology to which we introduce our children.

What ideals inform the choices we make when we raise children? What sort of person do we hope
they will become? What habits do we desire for them to cultivate? How do we want them to experience
time and place? How do we hope they will perceive themselves? These are just a few of the questions
we should be asking.

Your answers to these questions may not be mine or your neighbor’s, of course. The point is not that
we should share these ideals, but that we recognize that the realization of these ideals, whatever they
may be for you and for me, will depend, in greater measure than most of us realize, on the tools we put
in our children’s hands. All that I’m advocating is that we think hard about this and proceed with great
care and great courage. Great care because the stakes are high; great courage because merely by our de-
termination to think critically about these matters we will be setting ourselves against powerful and
pervasive forces.

December 7, 2017
70. Democracy and Technology

Alexis Madrigal has written a long and thoughtful piece on Facebook’s role in the last election. He
calls the emergence of social media, Facebook especially, “the most significant shift in the technology
of politics since the television.” Madrigal is pointed in his estimation of the situation as it now stands.

Early on, describing the widespread (but not total) failure to understand the effect Facebook could
have on an election, Madrigal writes, “The informational underpinnings of democracy have eroded,
and no one has explained precisely how.”

Near the end of the piece, he concludes, “The point is that the very roots of the electoral system—
the news people see, the events they think happened, the information they digest—had been destabi-
lized.”

Madrigal’s piece brought to mind, not surprisingly, two important observations by Neil Postman
that I’ve cited before.

My argument is limited to saying that a major new medium changes the structure of discourse;
it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelli-
gence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new
forms of truth-telling.

Also:

Surrounding every technology are institutions whose organization–not to mention their reason
for being–reflects the world-view promoted by the technology. Therefore, when an old technol-
ogy is assaulted by a new one, institutions are threatened. When institutions are threatened, a
culture finds itself in crisis.

In these two passages, I find the crux of Postman’s enduring insights, the insights, more generally,
of the media ecology school of tech criticism. It seems to me that this is more or less where we are: a
culture in crisis, as Madrigal’s comments suggest. Read what he has to say.

On Twitter, replying to a tweet from Christopher Mims endorsing Madrigal’s work, Zeynep Tufekci
took issue with Madrigal’s framing. Madrigal, in fact, cited Tufekci as one of the few people who un-
derstood a good deal of what was happening and, indeed, saw it coming years ago. But Tufekci none-
theless challenged Madrigal’s point of departure, which is that the entirety of Facebook’s role caught
nearly everyone by surprise and couldn’t have been foreseen.

Tufekci has done excellent work exploring the political consequences of Big Data, algorithms, etc.
This 2014 article, for example, is superb. But in reading Tufekci’s complaint that her work and the
work of many other academics was basically ignored, my first thought was that the similarly prescient
work of technology critics has been more or less ignored for much longer. I’m thinking of Mumford,
Jaspers, Ellul, Jonas, Grant, Winner, Mander, Postman and a host of others. They have been dismissed
as too pessimistic, too gloomy, too conservative, too radical, too broad in their criticism and too nar-
row, as Luddites and reactionaries, etc. Yet here we are.

In a 1992 article about democracy and technology, Ellul wrote, “In my view, our Western political
institutions are no longer in any sense democratic. We see the concept of democracy called into ques-
tion by the manipulation of the media, the falsification of political discourse, and the establishment of a
political class that, in all countries where it is found, simply negates democracy.”

Writing in the same special issue of the journal Philosophy and Technology edited by Langdon Win-
ner, Albert Borgmann wrote, “Modern technology is the acknowledged ruler of the advanced industrial
democracies. Its rule is not absolute. It rests on the complicity of its subjects, the citizens of the democ-
racies. Emancipation from this complicity requires first of all an explicit and shared consideration of
the rule of technology.”

It is precisely such an “explicit and shared consideration of the rule of technology” that we have
failed to seriously undertake. Again, Tufekci and her colleagues are hardly the first to have their warn-
ings, measured, cogent, urgent as they may be, ignored.

Roger Berkowitz of the Hannah Arendt Center for Politics and the Humanities recently drew
attention to a commencement speech given by John F. Kennedy at Yale in 1962. Kennedy noted the
many questions that America had faced throughout her history, from slavery to the New Deal. These
were questions “on which the Nation was sharply and emotionally divided.” But now, Kennedy be-
lieved we were ready to move on:

Today these old sweeping issues very largely have disappeared. The central domestic issues of
our time are more subtle and less simple. They relate not to basic clashes of philosophy or ide-
ology but to ways and means of reaching common goals — to research for sophisticated solu-
tions to complex and obstinate issues.

These issues were “administrative and executive” in nature. They were issues “for which technical
answers, not political answers, must be provided,” Kennedy concluded. You should read the rest of
Berkowitz’s reflections on the prejudices exposed by our current crisis, but I want to take Kennedy’s
technocratic faith as a point of departure for some observations.

Kennedy’s faith in the technocratic management of society was just the latest iteration of moderni-
ty’s political project, the quest for a neutral and rational mode of politics for a pluralistic society.

I will put it this way: liberal democracy is a “machine” for the adjudication of political differences
and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.

It was machine-like in its promised objectivity and efficiency. But, of course, it would work only to
the degree that it generated the subjects it required for its own operation. (A characteristic it shares with
all machines.) Human beings have been, on this score, rather recalcitrant, much to the chagrin of the
administrators of the machine.

Kennedy’s own hopes were just a renewed version of this vision, only they had become more ex-
plicitly a-political and technocratic in nature. It was not enough that citizens check certain aspects of
their person at the door to the public sphere, now it would seem that citizens would do well to entrust
the political order to experts, engineers, and technicians.

Leo Marx recounts an important part of this story, unfolding from the 19th to the early 20th century,
in an article accounting for what he calls “postmodern pessimism” about technology. Marx
outlines how “the simple [small r] republican formula for generating progress by directing improved
technical means to societal ends was imperceptibly transformed into a quite different technocratic com-
mitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the
progress of society.” I would also include the emergence of bureaucratic and scientific management in
the telling of this story.

Presently we are witnessing a further elaboration of this same project along the same trajectory. It is
the rise of governance by algorithm, a further, apparent distancing of the human from the political. I
say apparent because, of course, the human is never fully out of the picture; we just create more elabo-
rate technical illusions to mask the irreducibly human element. We buy into these illusions, in part, be-
cause of the initial trajectory set for the liberal democratic order, that of machine-like objectivity, ratio-
nality, and efficiency. It is on this ideal that Western society staked its hopes for peace and prosperity.
At every turn, when the human element, in its complexity and messiness, broke through the facade, we
doubled down on the ideal rather than question the premises. Initially, at least, the idea was that the
“machine” would facilitate the deliberation of citizens by establishing rules and procedures to govern
their engagement. When it became apparent that this would no longer work, we explicitly turned to
technique as the common frame by which we would proceed. Now that technique has failed because
again the human manifested itself, we overtly turn to machines.

This new digital technocracy takes two seemingly paradoxical paths. One of these paths is the in-
creasing reliance on Big Data and computing power in the actual work of governing. The other, how-
ever, is the deployment of these same tools for the manipulation of the governed. It is darkly ironic that
this latter deployment of digital technology is intended to agitate the very passions liberal democracy
was initially advanced to suppress (at least according to the story liberal democracy tells about itself). It
is as if, having given up on the possibility of reasonable political discourse and deliberation within a
pluralistic society, those with the means to control the new apparatus of government have simply de-
cided to manipulate those recalcitrant elements of human nature to their own ends.

It is this latter path that Madrigal and Tufekci have done their best to elucidate. However, my ram-
bling contention here is that the full significance of our moment is only intelligible within a much
broader account of the relationship between technology and democracy. It is also my contention that
we will remain blind to the true nature of our situation so long as we are unwilling to submit our tech-
nology to the kind of searching critique Borgmann advocated and Ellul thought hardly possible. But we
are likely too invested in the promise of technology and too deeply compromised in our habits and
thinking to undertake such a critique.

October 15, 2017


71. Superfluous People, the Ideology of Silicon Val-
ley, and The Origins of Totalitarianism

There’s a passage from Arendt’s The Origins of Totalitarianism that has been cited frequently in re-
cent months, and with good reason. It speaks to the idea that we are experiencing an epistemic crisis
with disastrous cultural and political consequences:

The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but
people for whom the distinction between fact and fiction (i.e., the reality of experience) and the
distinction between true and false (i.e., the standards of thought) no longer exist.

Jay Rosen recently tweeted that this was, for him, the quote of the year for 2017, and one can see
why.

I would, however, suggest that there is another passage from the closing chapters of The Origins of
Totalitarianism, or rather a cluster of passages, that we might also consider. These passages speak to a
different danger: the creation of superfluous people.

“There is only one thing,” Arendt concludes, “that seems discernible: we may say that radical evil
has emerged in connection with a system in which all men have become equally superfluous.”

“Totalitarianism strives not toward despotic rule over men,” Arendt further claims, “but toward
a system in which men are superfluous.” She immediately adds, “Total power can be achieved and
safeguarded only in a world of conditioned reflexes, of marionettes without the slightest trace of spon-
taneity.”

Superfluity, as Arendt uses the term, suggests some combination of thoughtless automatism, inter-
changeability, and expendability. A person is superfluous when they operate within a system in a com-
pletely predictable way and can, as a consequence, be easily replaced. Individuality is worse than
meaningless in this context; it is a threat to the system and must be eradicated.

So just as the “ideal subject” of a totalitarian state is someone who has been overwhelmed by epis-
temic nihilism, Arendt describes the “model ‘citizen’” as the human person bereft of spontaneity:
“Pavlov’s dog, the human specimen reduced to the most elementary reactions, the bundle of reactions
that can always be liquidated and replaced by other bundles of reactions that behave in exactly the
same way, is the model ‘citizen’ of a totalitarian state.”

Arendt adds that “such a citizen can be produced only imperfectly outside of the camps.” In the
camps, the “world of the dying,” as Arendt calls them, “men are taught they are superfluous through a
way of life in which punishment is meted out without connection to crime, in which exploitation is
practiced without profit, and where work is performed without product”; such a world “is a place where
senselessness is daily produced anew.”

It may be obvious how Arendt’s claim regarding the inability to distinguish between truth and false-
hood, fact and fiction speaks to our present moment, but what does her discussion of superfluous peo-
ple and concentration camps have to do with us?

First, I should make clear that I do not expect to see death camps anytime soon. That said, it seems
that there are a number of developments which together tend toward rendering people superfluous. For
example: the operant conditioning to which we submit on social media, the pursuit of ever more so-
phisticated forms of automation, and the drive to outsource more and more aspects of our humanity to
digital tools.

“If we take totalitarian aspirations seriously and refuse to be misled by the common-sense assertion
that they are utopian and unrealizable,” Arendt insisted, “it develops that the society of the dying estab-
lished in the camps is the only form of society in which it is possible to dominate man entirely.”

I would suggest that having discovered another form of society in which it is possible to dominate
people entirely may be the dark genius of our age, a Huxleyan spin on an earlier Orwellian threat. I
would also suggest that this achievement has traded on the expression of individuality rather than its
suppression.

For example, social media appears to encourage the expression of individuality. In reality, it is a
Skinner box: we are being programmed, and our so-called individuality is irrelevant ephemera so far as
the network is concerned. In other words, people, insofar as they are considered as individuals, are, in
fact, superfluous.

Regarding automation, it is, from my vantage point and given my lack of expertise, impossible to
tell what will be the scale of its impact on employment. But it seems clear that there is cause for con-
cern (unless you happen to live in Sweden). I have no reason to doubt that what jobs can be automated
will be automated at the expense of workers, workers who will be rendered superfluous. What new jobs
are expected to arise will be of the micro-gig economy or tend-the-machine sort. Work, that is to say, in
which people qua individuals are superfluous.

As for the outsourcing of our cognitive, emotional, and ethical labor and our obsessive self-tracking
and self-monitoring, it amounts to being sealed in a tomb of our revealed preferences (to borrow Rob
Horning’s memorable line). Once more, spontaneous desire, serendipity, much of what Arendt classi-
fied as natality, the capacity to make a beginning at the heart of our individuality—all of it is surren-
dered to the urge for an equilibrium of programmed predictability.

___________________________________

“Over and above the senselessness of totalitarian society,” Arendt went on to observe, “is enthroned
the ridiculous supersense of its ideological superstition.” As she goes on to analyze the ideologies that
supported the senselessness of totalitarian societies, discomforting similarities to strands of the Silicon
Valley ideology emerge. Most notably, it seems to me, they share a blind adherence to a supposed Law
driving human affairs–a Law adherence to which frees a person from ordinary moral responsibility,
raises the person above the unenlightened masses, and, indeed, generates a barely veiled misanthropy.

Consider the following analysis:

Totalitarian lawfulness, defying legality and pretending to establish the direct reign of justice on
earth, executes the law of History [as understood by Communism] or of Nature [as understood
by Nazism] without translating it into standards of right and wrong for individual behavior. It
applies the law directly to mankind without bothering with the behavior of men. The law of Na-
ture or the law of History, if properly executed, is expected to produce mankind as its end prod-
uct; and this expectation lies behind the claim to global rule of all totalitarian governments.

Now substitute the “law of Technology” for the law of History and the law of Nature. Tell me if it
does not work just as well. This law can be variously framed, but it amounts to some kind of self-serv-
ing, poorly conceived technological determinism built upon some ostensible fact like Moore’s Law,
and it dictates that humanity as it exists must be left behind in order to accommodate this deep law or-
dering the flow of time.

“What totalitarian ideologies therefore aim at is not the transformation of the outside world or the
revolutionizing of society, but the transformation of human nature itself,” Arendt recognized. And so it
is with the transhumanist strains of the ideology of Silicon Valley.

As I write these words, an excerpt from Emily Chang’s forthcoming Brotopia: Breaking Up the
Boys’ Club of Silicon Valley, published in Vanity Fair, is being shared widely on social media. It exam-
ines the “exclusive, drug-fueled, sex-laced parties” that some of the most powerful men in Silicon Val-
ley regularly attend. The scandal is not in the sexual license. Indeed, that they believe their behavior to
be somehow bravely unconventional and pioneering would be laughable were it not for its human toll.
What is actually disturbing is how this behavior is an outworking of ideology and how this ideology
generates so much more than drug-addled parties.

“[T]hey speak proudly about how they’re overturning traditions and paradigms in their private lives,
just as they do in the technology world they rule,” Chang writes. “Their behavior at these high-end par-
ties is an extension of the progressiveness and open-mindedness—the audacity, if you will—that make
founders think they can change the world. And they believe that their entitlement to disrupt doesn’t
stop at technology; it extends to society as well.”

“If this were just confined to personal lives it would be one thing,” Chang acknowledges. “But what
happens at these sex parties—and in open relationships—unfortunately, doesn’t stay there. The free-
wheeling sex lives pursued by men in tech—from the elite down to the rank and file—have conse-
quences for how business gets done in Silicon Valley.”

“When they look in the mirror,” Chang concludes, “they see individuals setting a new paradigm of
behavior by pushing the boundaries of social mores and values.”

If you’re on the vanguard of the new humanity, social mores and values are for losers.

Arendt also gives us a useful way of framing the obsession with disruption.

“In the interpretation of totalitarianism, all laws have become laws of movement,” Arendt claims.
That is to say that stability is the enemy of the execution of the law of History or of Nature or, I would
add, of Technology: “Neither nature nor history is any longer the stabilizing source of authority for the
actions of mortal men; they are movements themselves.”

Upsetting social norms, disrupting institutions, destabilizing legal conventions, all of it is a way of
freeing up the inevitable unfolding of the law of Technology. Never mind that what is actually being
freed up, of course, is the movement of wealth. The point is that the ideology gives cover for whatever
depredations are executed in its name. It engenders, as Arendt argues elsewhere, a pernicious species of
thoughtlessness that abets all manner of moral outrages.

“Terror,” she explained, “is the realization of the law of movement; its chief aim is to make it possi-
ble for the force of nature or of history to race freely through mankind, unhindered by any spontaneous
human action. As such, terror seeks to ‘stabilize’ men in order to liberate the forces of nature or his-
tory.”

Here again I would argue that we are witnessing a Huxleyan variant of this earlier Orwellian dy-
namic. Consider once more the cumulative effect of the many manifestations of the networks of sur-
veillance, monitoring, operant conditioning, automation, routinization, and programmed predictability
in which we are enmeshed. Their effect is not enhanced freedom, individuality, spontaneity, thought-
fulness, or joy. Their effect is, in fact, to stabilize us into routine and predictable patterns of behavior
and consumption. Humanity is stabilized so that the law of Technology can run its course.

Under these circumstances, Arendt goes on to add, “Guilt or innocence become senseless notions;
‘guilty’ is he who stands in the way of the natural or historical process which has passed judgment over
‘inferior races,’ over individuals ‘unfit to live,’ over ‘dying classes and decadent peoples.’” All the re-
cent calls for reform of the tech industry, then, may very well fall not necessarily on deaf ears but on
uncomprehending or indifferent ears tuned only to greater ideological “truths.”

___________________________________
“Totalitarian solutions may well survive the fall of totalitarian regimes in the form of strong tempta-
tions which will come up whenever it seems impossible to alleviate political, social, or economic mis-
ery in a manner worthy of man.”

Perhaps that, too, is an apt passage for our times.

In two other observations Arendt makes in the closing pages of Origins, we may gather enough light
to hold off the darkness. She writes of loneliness as the “common ground for terror, the essence of to-
talitarian government, and for ideology and logicality, the preparation of its executioners and victims.”
This loneliness is “closely connected with uprootedness and superfluousness which have been the curse
of the modern masses since the beginning of the industrial revolution […].”

“Ideologies are never interested in the miracle of being,” she also observes.

Perhaps, then, we might think of the cultivation of wonder and friendship as inoculating measures, a
way of sustaining the light.

January 3, 2018
72. Algorithms Who Art in Apps, Hallowed Be Thy
Code

If you want to understand the status of algorithms in our collective imagination, Ian Bogost proposes
the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about
algorithms, replace the term with ‘God’ and ask yourself if the sense changes any.”

If Bogost is right, then more often than not you will find the sense of the statement entirely un-
changed. This is because, in his view, “Our supposedly algorithmic culture is not a material phenome-
non so much as a devotional one, a supplication made to the computers we have allowed to replace
gods in our minds, even as we simultaneously claim that science has made us impervious to
religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlighten-
ment ideas like reason and science are beginning to flip into their opposites.” Science and technology,
he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them
that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our
ability to think clearly about them and their place in society. This is where the god-talk comes in. Bo-
gost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because comput-
ers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted,
theological view of computational action.” Additionally, “Data has become just as theologized as algo-
rithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infin-
ity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a
corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illumi-
nates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more
than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ulti-
mately obscured important realities. The metaphor of the mind as computer, for example, “reaches the
rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through
computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to
say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the
realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intri-
cate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we
think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical ab-
straction that obscures how various digital and analog components, including human action, come to-
gether to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost
sums it up this way:

“the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that
has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy short-
hands, slang terms for the act of mistaking multipart complex systems for simple, singular ones.
Of treating computation theologically rather than scientifically or culturally.”

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in
two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational so-
cial change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for
the consequences of technological change. The apotheosis of the algorithm encourages what I’ve else-
where labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase,
“Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking
about and taking responsibility for our choices regarding the development, adoption, and implementa-
tion of new technologies. Secondly, Bogost rightly fears that this “theological” way of thinking about
algorithms may cause us to forget that computational systems can offer only one, necessarily limited
perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats
their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence sometimes exhibited toward technology. It
is, as he fears, an impediment to clear thinking. Indeed, he is not the only one calling for the seculariza-
tion of our technological endeavors. Jaron Lanier has spoken at length about the introduction of reli-
gious thinking into the field of AI. In a recent interview, Lanier expressed his concerns this way:

“There is a social and psychological phenomenon that has been going on for some decades
now: A core of technically proficient, digitally-minded people reject traditional religions and
superstitions. They set out to come up with a better, more scientific framework. But then they
re-create versions of those old religious superstitions! In the technical world these superstitions
are just as confusing and just as damaging as before, and in similar ways.”

While Lanier’s concerns are similar to Bogost’s, it may be worth noting that Lanier’s use of reli-
gious categories is rather more concrete. As far as I can tell, Bogost deploys a religious frame as
a rhetorical device, and rather effectively so. Lanier’s criticisms, however, have been aroused by reli-
giously intoned expressions of a desire for transcendence voiced by denizens of the tech world them-
selves.

But such expressions are hardly new, nor are they relegated to the realm of AI. In The Religion of
Technology: The Divinity of Man and the Spirit of Invention, David Noble rightly insisted that “modern
technology and modern faith are neither complements nor opposites, nor do they represent succeeding
stages of human development. They are merged, and always have been, the technological enterprise be-
ing, at the same time, an essentially religious endeavor.”

So that no one would misunderstand his meaning, he added,

“This is not meant in a merely metaphorical sense, to suggest that technology is similar to reli-
gion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has be-
come a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and ar-
ticles of faith. Rather it is meant literally and historically, to indicate that modern technology
and religion have evolved together and that, as a result, the technological enterprise has been
and remains suffused with religious belief.”

Along with chapters on the space program, atomic weapons, and biotechnology, Noble devoted a
chapter to the history of AI, titled “The Immortal Mind.” Noble found that AI research had often been in-
spired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward
personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse
as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier,
Danny Hillis, and Hans Moravec–all of them influential theorists and practitioners in the development
of AI–find their consummation in the Singularity movement. The movement envisions a time, 2045 is
frequently suggested, when the distinction between machines and humans will blur and humanity as we
know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual
machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Ar-
tificial Life research were all converging on the age-old quest for the immortal life. Noble, who died in
2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis
in The Religion of Technology.

Interestingly, the sentiments that Noble documented alternated between the heady thrill of creat-
ing non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill
of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we
might say. Humanity plays god in order to bestow god’s gifts on itself. Noble cites one Artificial Life
researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who
declares, “Technology will soon enable human beings to change into something else altogether [and
thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand
techno-eschatological vision, expressed here by Hans Moravec:

“Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly
improving and extending itself, spreading outward from the sun, converting non-life into mind
…. This process might convert the entire universe into an extended thinking entity … the think-
ing universe … an eternity of pure cerebration.”

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early
1980s, can say, “The enterprise is a god-like one. The invention–the finding within–of gods represents
our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less
eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the
world.” He hoped for a “global algorithm” that “would lead to peace and harmony.” I would suggest
that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of
human society, providing wisdom and guidance that would be otherwise inaccessible to ordinary hu-
man forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t
formed by someone dragging a stick. This is just a way of saying that causes must be commensurate to
the effects they produce. Grand technological projects such as space flight, the harnessing of atomic en-
ergy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous invest-
ments of time, labor, and resources. What kind of motives are sufficient to generate those sorts of ex-
penditures? You’ll need something more than whim, to put it mildly. You may need something akin to
religious devotion. Would we have attempted to put a man on the moon apart from the ideological
frame provided by the Cold War, which cast space exploration as a field of civilizational battle for survival?
Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investiga-
tion into the roots of divinized or theologized technology reminds us that the roots of the disorder run
much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion
of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams,
apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through
the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman Francis Bacon.
Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may
have been a decisive moment in the history of the religion of technology.

In the essay with which we began, Ian Bogost framed the emergence of techno-religious thinking as
a departure from the ideals of reason and science associated with the Enlightenment. This is not alto-
gether incidental to Bogost’s argument. When he talks about the “theological” thinking that plagues our
understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of
what constitutes the religious or the theological; there’s almost certainly no such definition available. It
wouldn’t be too far from the mark, I think, to say that Bogost is working with what we might classify
as an Enlightenment understanding of Religion, one that characterizes it as Reason’s Other, i.e. as a-ra-
tional if not altogether irrational, superstitious, authoritarian, and pernicious. For his part, Lanier ap-
pears to be working with similar assumptions.
Noble’s work complicates this picture, to say the least. The Enlightenment did not, as it turns out,
vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree
that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of
technology. To put this another way, the Enlightenment–and, yes, we are painting with broad strokes
here–did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment
re-named these Progress, Utopia, and Technology respectively. To borrow a phrase, the Enlighten-
ment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with
the aid of divine grace within the context of the providentially ordered unfolding of human history, it
became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology
within the context of Progress, an inexorable force driving history toward its Utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical
innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the
path of secularization as it is traditionally understood, the greater the emphasis on technology as the
principle agent of change. Marx observed that by the late nineteenth century, “the simple republican
formula for generating progress by directing improved technical means to societal ends was impercep-
tibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis
and the measure of — as all but constituting — the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning
the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has ar-
gued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared
hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and
differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics,
law, religion, ethics–these are all sharply distinguished and segregated from one another in the modern
world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of hu-
man experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization un-
folds alongside purification, and Noble’s work has demonstrated how technology, sometimes reckoned
the most coldly rational of human projects, is deeply contaminated by religion, often regarded by the
same people as the most irrational of human projects.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about al-
gorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it
is important to classify the religion of technology more precisely as a Christian heresy. It is in Western
Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-
Christian world that it has presently flourished.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the
conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referencing
the “Judeo-Christian tradition,” suggested that “religious beliefs, and particularly the belief in sur-
vival after death, are not incompatible with the idea that the mind emerges from physical phenomena.”
This is noted on the way to explaining that a machine-based material support could be found for the
mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Report-
ing on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich ob-
served, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’
discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories
from the Old and New Testaments (stories of creation and salvation).”

It is a heresy insofar as it departs from traditional Christian teaching regarding the givenness of hu-
man nature, the moral dimensions of humanity’s brokenness, the gracious agency of God in the salva-
tion of humanity, and the resurrection of the body, to name a few. Having said as much, it would seem
that one could perhaps conceive of the religion of technology as an imaginative account of how God
might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we
might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian
fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize
as a consequence of human ingenuity in the absence of divine action.

Near the end of The Religion of Technology, David Noble forcefully articulated the dangers posed
by a blind faith in technology. “Lost in their essentially religious reveries,” Noble warned, “the tech-
nologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful
ends toward which their work has been directed.” Citing another historian of technology, Noble added,
“The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the
context of transcendent belief in a religious God, hopes for a total salvation which technology cannot
fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’
Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that
neither Bogost nor Lanier would disagree with Noble on this score.
There is another significant point at which the religion of technology departs from its antecedent:
“The millenarian promise of restoring mankind to its original Godlike perfection–the underlying
premise of the religion of technology–was never meant to be universal.” Instead, the salvation it prom-
ises is limited finally to the very few who will be able to afford it; it is for neither the poor nor the weak.
Nor, would it seem, is it for those who have found a measure of joy or peace or beauty within the
bounds of the human condition as we now experience it, frail as it may be.

Lastly, it is worth noting that the religion of technology appears to have no doctrine of final judg-
ment. This is not altogether surprising given that, as Bogost warned, the divinizing of technology car-
ries the curious effect of absolving us of responsibility for the tools that we fashion and the uses to
which they are put.

I have no neat series of solutions to tie all of this up; rather I will give the last word to Wendell
Berry:

“To recover from our disease of limitlessness, we will have to give up the idea that we have a
right to be godlike animals, that we are potentially omniscient and omnipotent, ready to dis-
cover ‘the secret of the universe.’ We will have to start over, with a different and much older
premise: the naturalness and, for creatures of limited intelligence, the necessity, of limits. We
must learn again to ask how we can make the most of what we are, what we have, what we have
been given.”

February 6, 2015
73. Eight Theses Regarding the Society of the Disci-
plinary Spectacle

One of the better known aspects of Michel Foucault’s work is his genealogy of prisons in Discipline
and Punish. Foucault opens with a description of the grisly execution of a regicide in mid-eighteenth
century Paris. This public execution illustrated for Foucault a society ordered by the public spectacle of
torture. As Foucault tells the story, modern western societies gradually moved away from the practice
of the spectacle to the practice of disciplinary surveillance.

Disciplinary surveillance was best illustrated by the ideal prison proposed by Jeremy Bentham. It
was a panopticon. There was a station at the center of the prison where guards could see the prisoners
but the prisoners could not see the guards. The idea was simple: the prisoners would stay in line be-
cause they had to assume that they were always being watched. No violence was necessary; the inter-
nalized gaze of the surveillance apparatus disciplined the behavior of the prisoner.

Foucault’s point in all of this was not simply to tell a story about the evolution of prisons but to
comment on the nature of society. The prisons were a microcosm of a society that disciplined its mem-
bers by the operations of surveillance.

The emergence of digital technology has, of course, only heightened the social consequences of sur-
veillance. Never before has it been possible for a government (or corporation) to so precisely and per-
vasively surveil its citizens (or customers and/or employees). At every turn, we encounter increasingly
sophisticated instruments of surveillance that track, monitor, document, and record, with or without our
consent, a remarkable array of data about us.

So it would seem, then, that the trajectory outlined by Foucault continues apace, but I’m not sure
this is the whole story.

The machinery of the spectacle was not the machinery of disciplinary surveillance. The rack was not
the panopticon, nor the actual techniques of surveillance and discipline deployed in prisons, hospitals,
schools, etc. Presently, however, the instruments of surveillance and the instruments of the spectacle
are often identical.

Writing in 1954, well ahead of Foucault, Jacques Ellul warned about “the convergence on man of a
plurality, not of techniques, but of systems or complexes of techniques.” “The result,” he warned, “is
an operational totalitarianism; no longer is any part of man free and independent of these techniques.”

In his day, however, these complexes of techniques were still clunky: “the technical operations in-
volved do not appear to fit well together,” Ellul acknowledged, “and only by means of a new technique
of organization will it be possible to unite the different pieces into a whole.”

I’ve suggested recently that this new technique of organization has already appeared among us and,
simply put, it is digital technology, which has made it possible to interlock and synthesize the whole ar-
ray of existing techniques of surveillance and discipline while wildly improving their efficiency, scope,
and power.

Digital technology has also made possible the convergence of the spectacle with the techniques of
disciplinary surveillance. But we must acknowledge that the spectacle, too, has undergone a transfor-
mation. It is not merely a matter of public torture. Indeed, it has taken on an undeniably pleasurable
quality. What remains the same and warrants the continued use of the word spectacle is the captivating
and pervasive ocular extravagance of the phenomena in question.

In Society of the Spectacle, Guy Debord wrote, “The spectacle is not a collection of images, but a
social relation among people, mediated by images.” He also claims that in modern societies “all life
presents itself as an immense accumulation of spectacles. Everything that was directly lived has moved away
into a representation.” And, of course, Debord can be supplemented with Baudrillard’s hyperreality.
(I claim no deep expertise on the work of either theorist, but it seems to me that they need not be
read as mutually exclusive.)

The idea is that many features of contemporary society come into focus when filtered through the
convergence of the spectacle and disciplinary surveillance. I’m uncertain as to whether we should refer
to the product of this convergence as spectacular surveillance or the disciplinary spectacle. Perhaps
neither. Perhaps what emerges, while sharing certain properties with both and reverse imaging others,
is, on the whole, an entirely different reality. I’ll go with the society of the disciplinary spectacle for the
time being.

In any case, I want to make clear that the key to this conjecture is the material fact of technological
convergence made possible by digital technology. While this convergence manifests itself across a va-
riety of artifacts and practices, it is the smartphone that may be the most apt image. It is through this
device that the operations of the disciplinary spectacle most evidently come to bear on the human be-
ing. It is through this one device and the applications it supports that we experience the spectacle and
that we most readily yield our data.

Consider what follows to be a set of provisional, in no way exhaustive theses regarding the conse-
quences of this convergence of spectacle and surveillance.
1. In the society of the disciplinary spectacle, the spectacle smuggles in the surveillance, rendering
it all the more effective as it loses any obviously authoritarian quality.
2. In the society of the disciplinary spectacle, the spectacle, and hence the disciplinary surveil-
lance, is participatory. It is, consequently, more deeply and effectively internalized than the
panoptic gaze because we imagine ourselves not merely as consumers but as producers in our
own right. We participate in generating the spectacle, and we are conditioned by that same work.
3. Insofar as it is the self that we are producing so that we might more fully participate in the spec-
tacle, we experience a double alienation, from the world and from ourselves. Our efforts to heal this alienation within the context of the disciplinary spectacle only aggravate the condition and further feed the machinery of surveillance. The quest for authenticity is the quicksand of the dis-
ciplinary spectacle.
4. “The spectacle,” according to Debord, “is the existing order’s uninterrupted discourse about it-
self, its laudatory monologue.” In the society of the disciplinary spectacle, the discourse is no
longer a monologue (we are no longer in the age of mass media), it becomes a cacophony of
monologues sometimes bound by resurgent tribal associations. The discourse also becomes
keenly aware of itself. The laudatory tone is replaced by agonistic irony.
5. When the spectacle exists within the same infrastructure that sustains the disciplinary surveil-
lance, the discipline takes on an anti-disciplinary aspect. It appears as release rather than re-
straint. We might say it reverses the relationship between ordinary time and carnival. Restraint
becomes the safety valve. We temporarily go off the grid in order to come back to the spectacle
energized to participate more fully.
6. Debord again, emphasis mine: “The first phase of the domination of the economy over social
life brought into the definition of all human realization the obvious degradation of be-
ing into having. The present phase of total occupation of social life by the accumulated results
of the economy leads to a generalized sliding of having into appearing, from which all actual
‘having’ must draw its immediate prestige and its ultimate function.” If we understand Debord
to mean that we slide into the condition of appearing to have, in the society of the disciplinary
spectacle we slide into the condition of appearing to be. The self becomes the commodity.
7. Attention is the fuel of the machinery of the disciplinary spectacle. It is the fuel inasmuch as the
spectacle is powered by our desire for attention and the tools of disciplinary surveillance often
function best when capturing our attention to feed their data collection.
8. The spectacle devours reality. If there is no longer a distinction between the spectacle and reality, it is because reality can only appear as a function of the spectacle.

June 26, 2018


74. Presidential Debates and Social Media, or Neil
Postman Was Right
I’ve chosen to take my debates on Twitter. I’ve done so mostly in the interest of exploring what dif-
ference it might make to take in the debates on social media rather than on television.

Of course, the first thing to know is that the first televised debate, the famous 1960 Kennedy/Nixon
debate, is something of a canonical case study in media studies. Most of you, I suspect, have heard at
some point about how polls conducted after the debate found that those who listened on the radio were
inclined to think that Nixon had gotten the better of Kennedy while those who watched the debate on
television were inclined to think that Kennedy had won the day.

As it turns out, this is something like a political urban legend. At the very least, it is fair to say that
the facts of the case are somewhat more complicated. Media scholar W. Joseph Campbell of American University, leaning heavily on a 1987 article by David L. Vancil and Sue D. Pendell, has shown that
the evidence for viewer-listener disagreement is surprisingly scant and suspect. What little empirical
evidence did point to a disparity between viewers and listeners depended on less than rigorous method-
ology.

Campbell, who’s written a book on media myths, is mostly interested in debunking the idea that
viewer-listener disagreement was responsible for the outcome of the election. His point, well-taken, is
simply that the truth of the matter is more complicated. With this we can, of course, agree. It would be
a mistake, however, to write off the consequences over time of the shift in popular media. We may, for
instance, take the first Clinton/Trump debate and contrast it to the Kennedy/Nixon debate and also to
the famous Lincoln/Douglas debates. It would be hard to maintain that nothing has changed. But what
is the cause of that change?

Does the evolution of media technology alone account for it? Probably not, if only because in the
realm of human affairs we are unlikely to ever encounter singular causes. The emergence of new media
itself, for instance, requires explanation, which would lead us to consider economic, scientific, and po-
litical factors. However, it would be impossible to discount how new media shape, if nothing else, the
conditions under which political discourse evolves.

Not surprisingly, I turned to the late Neil Postman for some further insight. Indeed, I’ve taken of late
to suggesting that the hashtag for 2016, should we want one, ought to be #NeilPostmanWasRight. This
was a sentiment that I initially encountered in a fine post by Adam Elkus on the Internet culture wars.
During the course of his analysis, Elkus wrote, “And at this point you accept that Neil Postman was
right and that you were wrong.”

I confess that I rather agreed with Postman all along, and on another occasion I might take the time
to write about how well Postman’s writing about technology holds up. Here, I’ll only cite this state-
ment of his argument in Amusing Ourselves to Death:

“My argument is limited to saying that a major new medium changes the structure of discourse;
it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelli-
gence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new
forms of truth-telling.”

This is the argument Postman presents in a chapter aptly titled “Media as Epistemology.” Postman went on to add, admirably, that he is “no relativist in this matter” and that he believes “the epistemology created by television not only is inferior to a print-based epistemology but is dangerous and absurdist.”

Let us make a couple of supporting observations in passing, neither of which is original or particu-
larly profound. First, what is it that we remember about the televised debates prior to the age of social
media? Do any of us, old enough to remember, recall anything other than an adroitly delivered one-liner? And you know exactly which ones I have in mind already. Go ahead, before reading any further, call to mind your top three debate memories. Tell me if at least one of the following is not among them.

Reagan, when asked about his age, joking that he would not make an issue out of his opponent’s youth and inexperience.

Sen. Bentsen reminding Dan Quayle that he is no Jack Kennedy.

Admiral Stockdale, seemingly lost on stage, wondering, “Who am I? Why am I here?”


So how did we do? Did we have at least one of those in common? Here’s my point: what was memorable and what counted as “winning” or “losing” a debate in the age of television had precious little to do with the substance of an argument. It had everything to do with style and image. Again, I claim no great insight in saying as much. In fact, this is, I presume, conventional wisdom by now.

(By the way, Postman gets all the more credit if your favorite presidential debate memories involve an SNL cast member, Dana Carvey, for example.)

Consider as well an example fresh from the first Clinton/Trump debate.

.@chucktodd: #debatenight exposed Trump’s lack of preparation, but Clinton seemed over-pre-
pared at times.

— Meet the Press (@MeetThePress) September 27, 2016

You tell me what “over-prepared” could possibly mean. Moreover, you tell me if that was a charge
that you can even begin to imagine being leveled against Lincoln or Douglas or, for that matter, Nixon
or Kennedy.

Let’s let Marshall McLuhan take a shot at explaining what Mr. Todd might possibly have meant.
Appearing on The Today Show, McLuhan explained why the 1976 Carter/Ford debate was an “atro-
cious misuse of the TV medium” and “the most stupid arrangement of any debate in the history of de-
bating.” Chiefly, the content and the medium were mismatched. The style of debating both candidates
embodied was ill-suited for what television prized, something approaching casual ease, warmth, and in-
formality. Being unable to achieve that style means “losing” the debate regardless of how well you
knew your stuff. As McLuhan tells Tom Brokaw, “You’re assuming that what these people say is im-
portant. All that matters is that they hold that audience on their image.”

Incidentally, writing in Slate in 2011 about McLuhan’s appearance, David Haglund wrote, “What
seems most incredible to me about this cultural artifact is that there was ever a time when The Today
Show would spend ten uninterrupted minutes talking about the presidential debates with a media theo-
rist.” [#NeilPostmanWasRight]
So where does this leave us? Does social media, like television, present us with what Postman calls
a new epistemology? Perhaps. We keep hearing a lot of talk about post-factual politics. If that describes
our political climate, and I have little reason to doubt as much, then we did not suddenly land here after
the advent of social media or the Internet. Facts, or simply the truth, have been fighting a rear-guard action for some time now.

I will make one passing observation, though, about the dynamics of following a debate on Twitter.
While the entertainment on offer in the era of television was the thrill of hearing the perfect zinger, so-
cial media encourages each of us to become part of the action. Reading tweet after tweet of running
commentary on the debate, from left, right, and center, I was struck by the near unanimity of tone: ei-
ther snark or righteous indignation. Or, better, the near unanimity of apparent intent. No one, it seems
to me, was trying to persuade anybody of anything. Insofar as I could discern a motivating factor, I might on the one hand suggest something like catharsis, a satisfying expunging of emotions. On the other, the
desire to land the zinger ourselves. To compose that perfect tweet that would suddenly go viral and gar-
ner thousands of retweets. I saw more than a few cross my timeline–some from accounts with thou-
sands and thousands of followers and others from accounts with a meager few hundred–and I felt that it
was not unlike watching someone hit the jackpot in the slot machine next to me. Just enough incentive
to keep me playing.

A citizen may have attended a Lincoln/Douglas debate to be informed and also, in part, to be enter-
tained. The consumer of the television era tuned in to a debate ostensibly to be informed, but in reality
to be entertained. The prosumer of the digital age aspires to do the entertaining.

September 27, 2016


75. The World Will Be Our Skinner Box

Writing about two recent patents by Google, Sidney Fussell makes the following observations about the possibility of using “smart” environments to create elaborate architectures for social engineering:

For reward systems created by either users or companies to be possible, the devices would have
to know what you’re doing at all times. The language of these patents make it clear that Google
is acutely aware of the powers of inference it has already, even without cameras, by augmenting
speakers to recognize the noises you make as you move around the house. The auditory infer-
ences are startling: Google’s smart home system can infer “if a household member is working”
from “an audio signature of keyboard clicking, a desk chair moving, and/or papers shuffling.”
Google can make inferences on your mood based on whether it hears raised voices or crying,
when you’re in the kitchen based on the sound of the fridge door opening, your dental hygiene
based on “the sounds and/or images of teeth brushing.”

I read Fussell’s article right after I read this tweet from Kevin Bankston: “Ford’s CEO just said on
NPR that the future of profitability for the company is all the data from its 100 million vehicles (and
the people in them) they’ll be able to monetize. Capitalism & surveillance capitalism are becoming in-
creasingly indistinguishable (and frightening).”

I thought about that tweet for a while. It really is the case that data is to the digital economy not unlike what oil has been to the industrial economy. Jathan Sadowski notes as much in a comment on the
same tweet: “Good reminder that as the ‘operations of capital’ adapt to the digital age, they maintain
the same essential features of extraction, exploitation, and accumulation. Rather than a disruption, sur-
veillance capitalism is more like old wine poured into new ‘smart’ bottles.”

It also recalls Evgeny Morozov’s discussions of data mining or data extractivism and Nick Car-
r’s suggestion that we think rather along the lines of a factory metaphor: “The factory metaphor makes
clear what the mining metaphor obscures: ‘We work for the Facebooks and Googles of the world, and
the work we do is increasingly indistinguishable from the lives we lead.’”
Taking all of this together, I found myself asking what it might look like if we try to extrapolate into the future (a risky venture, I concede). There are two interrelated trends here as far as I can see: to-
ward surveillance and conditioning. The trends, in other words, lead toward a world turned into
a Skinner box.

I think it was from Carr that I first encountered the Skinner box analogy applied to the internet a few
years ago. Now that digital connectedness has extended beyond the information on our screens to the
material environment, the Skinner box, too, has grown.

As Rick Searle points out in his discussion of B. F. Skinner’s utopian novel, Walden Two,

We are the inheritors of a darker version of Skinner’s freedomless world—though by far not the
darkest. Yet even should we get the beneficent paternalism of contemporary Skinnerites—such as Richard Thaler and Cass R. Sunstein, who wish to “nudge” us this-way-and-that—it would
harm freedom not so much by proving that our decisions are indeed in large measure deter-
mined by our environment as from the fact that the shape of that environment would be in the
hands of someone other than ourselves, individually and collectively.

This, of course, is also the theme of Frischmann and Selinger’s Reengineering Humanity. It recalls,
as well, a line from Hannah Arendt, “The trouble with modern theories of behaviorism is not that they
are wrong but that they could become true.”

But there is something else Arendt wrote that I’ve kept coming back to whenever I’ve thought about
these trajectories toward ever more sophisticated mechanisms for the extraction of data to fuel what
Frischmann and Selinger have called the project of engineered determinism. “There is only one thing,”
Arendt claimed, “that seems discernible: we may say that radical evil has emerged in connection with a
system in which all men have become equally superfluous.” What we must remember about “our” data
for the purposes of social engineering is that the least important thing about it is that it is “ours.” It mat-
ters chiefly as an infinitesimally small drop in a vast ocean of data, which fuels the tools of prediction
and conditioning. You and I, in our particularity, are irrelevant, superfluous.
Arendt described the model citizen of a totalitarian state in this way: “Pavlov’s dog, the human
specimen reduced to the most elementary reactions, the bundle of reactions that can always be liqui-
dated and replaced by other bundles of reactions that behave in exactly the same way.”

Ominously, she warned, “Totalitarian solutions may well survive the fall of totalitarian regimes in
the form of strong temptations which will come up whenever it seems impossible to alleviate political,
social, or economic misery in a manner worthy of man.”

In Arendt’s time, totalitarianism emerged in what we might think of as Orwellian guise. The threat
we face, however, will come in Huxleyan guise: we will half-knowingly embrace the regime that prom-
ises us a reasonable measure of stability, control, and predictability, what we mistake for the conditions
of happiness. Paradoxically, the technologies we increasingly turn to in order to manage the chaotic
flux of life are also the technologies that have generated the flux we seek to tame.

November 19, 2018


76. Digital Media and the Revenge of Politics

“The arc of digital media is bending toward epistemic nihilism.” That’s a line to which I frequently
resort as a way of addressing a variety of developments in the sphere of digital media that have, as I see
it, eroded our confidence in the possibility of public knowledge. I’m using the phrase public knowledge
to get at what we believe together that is also of public consequence. This is an imperfect distinction
worth teasing out, but I’m just going to let it go at that right now.

When I use that line about the arc of digital media, I have in mind phenomena like the facility with
which digital media can be manipulated and, more recently, the facility with which realistic digital me-
dia can be fabricated. I’m thinking as well of the hyper-pluralism that is a function of the way digital
media connect us, bringing conflicting claims, beliefs, and narratives into close proximity. “The global
village,” McLuhan told us, “is a place of very arduous interfaces and very abrasive situations.”

It occurs to me, though, that it might be worth making a clarification: digital media does not create
the conditions out of which the problem arises.

I’ve thought now and again about how we are recapitulating certain aspects of the early modern his-
tory of Europe. At some point last year I shot off an off-the-cuff tweet to this effect: “Thesis: If the digi-
tal revolution is analogous to the print revolution, then we’re entering our Wars of Religion phase.”

Although the story is more complicated than this, there is something to be said for framing the emer-
gence of the modern world as a response to an epistemic crisis occasioned by the dissolution of what we might think of as the medieval world picture (see Stephen Toulmin’s Cosmopolis: The Hidden
Agenda of Modernity, for example).

The path that emerged as a way toward a solution to that crisis amounted to a quest for certainty that took objectivity, abstraction, and neutrality as methodological pre-conditions for both the progress of science and politics, that is, for the re-emergence of public knowledge. The right method, the proper degree of alienation from the particulars of our situation, the translation of observable phenomena into the realm of mathematical abstraction—these would lead us away from the uncertainty and often violent con-
tentiousness that characterized the dissolution of the premodern world picture. The idea was to recon-
stitute the conditions for the emergence of public truth and, hence, public order.

Technology (or, better, technologies) plays an essential role in this story, but the role that it plays
varies and shifts over time. Early on, for example, in the form of the printing press it accelerates the
crisis of public knowledge, generating the pluralism of truth claims that undermine the old consensus.
The same technology also comes to play a critical role in creating the conditions under which modern
forms of public knowledge can emerge by sustaining the plausibility of a realm of cool, detached rea-
son.

Consider as well how we impute to certain technologies the very characteristics we believe essential
to public knowledge in the modern world (objectivity, neutrality, etc.). Think of photography, for ex-
ample, and the degree to which we tend to believe that a photographic image is an objective and thus
trustworthy representation of the truth of things. More recently, algorithms have been burdened with
similar assumptions. Because they are cutting-edge technologies feeding off of “raw data,” some believe that they will necessarily yield unbiased and objectively true results. The problems with this view are, of course, well documented.

The general progression has been to increasingly turn to technologies in order to better achieve the
conditions under which we came to believe public knowledge could exist. Our crisis stems from the
growing realization that our technologies themselves are not neutral or objective arbiters of public
knowledge and, what’s more, that they may now actually be used to undermine the possibility of public
knowledge.

The point, then, is this: It’s not that digital media necessarily leads to epistemic nihilism, it’s that
digital media leads to epistemic nihilism given the conditions for public knowledge that have held sway
in the modern world. Seen in this light, digital media, like print before it, is helping dissolve an older
intellectual and political order. It is doing so because the trajectory we set out on 400 years ago or so
has more or less played itself out.

One last thought for now. According to Arendt, “The trouble is that factual truth, like all other truth,
peremptorily claims to be acknowledged and precludes debate, and debate constitutes the very essence
of political life. The modes of thought and communication that deal with truth, if seen from the politi-
cal perspective, are necessarily domineering; they don’t take into account other people’s opinions, and
taking these into account is the hallmark of all strictly political thinking.”

In other words, what if the technocratic strain within modern political culture, the drive to ground
politics in truth (or facts), is actually the drive to transcend the political altogether? What if the age of
electronic/mass media, the brief interregnum between the high water mark of the age of literacy and the
digital age, was in some ways merely a momentary deviation from the norm during which politics
could appear to be about consensus rather than struggle? In this light the political consequences of digi-
tal media might simply be characterized as the revenge of politics, although in a different and often dis-
concerting mode.

February 17, 2019


77. The Myth of Convenience

I once suggested that the four horsemen of the digital apocalypse will be called Convenience, Secu-
rity, Innovation, and Lulz. These were the values, so to speak, driving the production and enthusiastic
adoption of digital technologies regardless of their more dubious qualities.

I was reminded of the line while reading Colin Horgan’s recent piece, “The Tyranny of
Convenience.” Horgan rightly highlights the degree to which the value of convenience drives our
choices and informs our trade-offs.

“In the ongoing and growing opposition to the seemingly dystopian world technology companies are
building, convenience is often overlooked,” Horgan observes. “But it’s convenience, and the way con-
venience is currently created by tech companies and accepted by most of us, that is key to why we’ve
ended up living in a world we all chose, but that nobody seems to want.”

Not unlike Kara Swisher in a piece from a few weeks ago, Horgan does not absolve us of responsi-
bility for the emerging digital dystopia. Which is not to say, I hasten to add, that tech companies and
structural factors play no role. That goes without saying, but, you know, I’ll say it anyway.

I would certainly not claim that the playing field on which we make our choices about technology is
always level or fair; nonetheless, it seems to me that we have more agency than we are sometimes
given credit for, which, of course, entails a measure of responsibility. Indeed, the idea that we are basi-
cally helpless in the face of some vast and inscrutable techno-corporate machinery undermines the crit-
ical reflection and action that may be required of us.

There’s a line from Michel de Certeau’s The Practice of Everyday Life that has stuck with me ever since I first encountered it: “it is always good to remind ourselves that we mustn’t take people
for fools.”
But if we are not fools, by and large, and we are making choices, albeit sometimes against a stacked
deck, how is it that, in Horgan’s apt formulation, we find ourselves living “in a world we all chose, but
that nobody seems to want”? (Granted: “we all chose” is in need of qualification.)

“[C]onvenience is a value, and one we hold personally,” Horgan concludes. “Ultimately, this is why
it keeps winning, outweighing the more abstract ideas like privacy, democracy, or equality, all of which
remain merely issues for most of us.” “Convenience,” he adds, “doesn’t simply supersede privacy or
democracy or equality in many of our lives. It might also destroy them.” But this, too, requires a mea-
sure of explanation.

The Self-defeating Value of Convenience

Horgan’s piece recalled to mind Thomas Tierney’s The Value of Convenience: A Genealogy of
Technical Culture. Tierney’s book is over 25 years old now, but it remains a useful exploration of the
value of convenience and its role in shaping our technological milieu. His argument draws on an eclec-
tic set of sources and ranges over the history of technology, political theory, philosophy, and the history
of religion.

Tierney’s work supports Horgan’s claim that convenience is an often overlooked factor shaping our
technological culture, but he also tries to understand why this might be the case. What exactly is the na-
ture of the convenience we prize so highly, and why do we find it so valuable? Perhaps it seems unnec-
essary to ask such questions, as if the value of convenience were self-evident. But the questions most of
us don’t think to ask are often the most important ones we could ask. When we encounter an unasked
question we have also found an entry point into the network of assumptions and values that structure
our thinking but go largely unnoticed.

Tierney explains early on that there are two basic questions he is asking: “First, what is the value of
technology to modern individuals? And second, why do they hold this value in such high esteem that,
even when faced with technological dangers and dilemmas, they hope for solutions that will enable
them to maintain and develop technical culture?”

Nietzsche looms large in Tierney’s analysis, and he introduces the primary focus of the book with a
passage from Thus Spake Zarathustra:
“I go among this people and keep my eyes open: they have become smaller and are becoming
ever smaller: and their doctrine of happiness and virtue is the cause.

For they are modest even in virtue—for they want ease. But only a modest virtue is compatible
with ease.”

For “etymological reasons,” Tierney chooses to call this desire for ease convenience. “The value of
technology in modernity,” he will argue, “is centered on technology’s ability to provide convenience.”
He’s quick to add, though, that he is not interested in lamenting the smallness or mediocrity of modern
individuals and their virtues. Rather, he seeks “to throw some light on, and thereby loosen, the hold
which technology has on modernity. The desire for convenience seems to be an integral part of that
hold—that is, an integral part of the modern self.”

Tierney is also not interested in offering a singular and definitive account of technological culture.
Early on, he makes clear that the nature of technological culture is such that it requires multiple per-
spectives and lines of analysis, and even then it will likely elude any effort to identify its essence.

Regarding the nature of convenience, Tierney sees in the modern value a reimagining of the body’s
needs as limits to be overcome. “The distinction I would like to make between ancient and modern ne-
cessity,” Tierney writes, “is that ancient necessity was primarily concerned with satisfying the demands
of the body, while modern necessity is largely focused on overcoming the limits which are imposed by
the body …. And by the limits of the body, I mean certain features of embodiment which are perceived
as inconveniences, obstacles, or annoyances.”

Following a discussion of necessity in the context of the ancient Greek household, Tierney insists
that modern necessity, just as much as ancient necessity, “is based upon the body.” However, modern
attitudes towards the body differ from those of the ancient Greeks: “While the Greeks thought that the
satisfaction of bodily demands required careful attention and planning throughout the household,
modernity treats the body instead as the source of limits and barriers imposed upon persons. What these
limits require is not planning and attention, but the consumption of various technological devices that
allow people to avoid or overcome such limits.”
At points citing the work of Paul Virilio, Tierney adds a critical temporal dimension to this distinc-
tion. The demands of the body are seen “as inconveniences in that they limit or interfere with the use of
time.” Technology is valuable precisely as it appears to mitigate these inconveniences. “Time-saving,”
as is well known, has long been a selling point for modern household technologies.

“The need for speed,” Tierney continues, “both in conveyance and in people’s ability to satisfy the
demands of the body, is a hallmark of modern necessity.” But this is a paradoxical desire: “Unlike
purely spatial limits, as soon as a speed limit is overcome, another limit is simultaneously established.
The need to do things and get places as quickly as possible is a need that can never be satisfied. Every
advance imposes a new obstacle and creates the need for a more refined or a new form of technology.”

It brings to mind a line from Philip Rieff’s Triumph of the Therapeutic: “The ‘end’ or ‘goal’ is to
keep going. Americans, as F. Scott Fitzgerald concluded, believe in the green light.” The green light,
constant motion in whatever direction, acceleration—these are, of course, no ‘ends’ at all. They are
what you have left when you have lost sight of any true ends. It is fruitless to save time if you don’t know what exactly you are saving it for.

There’s something rather pernicious about this. It seems clear that despite the continual adoption of
technologies that promise to save time or make things more convenient, we do not, in fact, feel as if we
have more time at all. There are a number of factors that may explain this dynamic. As Neil Postman
noted around the same time that Tierney was writing his book, the “winners” in the technological soci-
ety are wont to tell the “losers” that “their lives will be conducted more efficiently,” which is to say
more conveniently. “But discreetly,” he quickly adds, “they neglect to say from whose point of view
the efficiency is warranted or what might be its costs.” Tierney himself admits that what he has to say
is likely to be met “with a degree of self-preserving … denial” because he will argue that “a certain
value is not freely chosen by individuals, but is demanded by various facets of the technological order
of modernity.” Which is why, as Horgan put it, “we’ve ended up living in a world we all chose, but that
nobody seems to want.”

Convenience, Asceticism, and Death

Tierney notes that others have focused on the domination of nature as the guiding value of modern
technology. However, he makes a useful distinction between the value that animates the producers of
technology and the value that animates the consumers of technology. The domination of nature, ac-
cording to Tierney, “has been the value which guides the cutting edge of technology; it is the value pur-
sued by the leaders of technological progress, the scientists and technicians.” Convenience, however,
“is the value of the masses, of those who consume the products of technical culture.”

Admittedly, there is something about “the domination of nature” that seems somewhat archaic or
passé. One doesn’t imagine Bill Gates or Jack Dorsey, say, waking in the morning, taking in a whiff of
the morning air, and declaring, “I love the smell of Francis Bacon in the morning!”

However, there are a couple of interesting paths to take from here. One is presented to us by the ev-
ergreen mid-twentieth-century observation by C. S. Lewis in The Abolition of Man that “what we call Man’s
power over Nature turns out to be a power exercised by some men over other men with Nature as its in-
strument.” If you seek to conquer nature, you will eventually run into the realization that humanity is
just another part of nature and, thus, the last realm to be conquered.

“Human nature will be the last part of Nature to surrender to Man,” Lewis writes. “The battle will
then be won. We shall have ‘taken the thread of life out of the hand of Clotho’ and be henceforth free
to make our species whatever we wish it to be.”

“The battle will indeed be won,” Lewis reiterates, “But who, precisely, will have won it?”

Well, again: “For the power of Man to make himself what he pleases means, as we have seen, the
power of some men to make other men what they please.”

This does seem rather more familiar now than the older language about the domination of nature.
For whatever else we may say of digital technology and its purveyors, it certainly appears as if a vast,
often unseen machinery is being built in order to realize dreams of what Evan Selinger and Brett
Frischmann have called “engineered determinism.” The world, as I’ve noted before, is becoming our
very own giant, personalized Skinner box, and we assent to it, in no small measure, because of the
promise of convenience.

So while Tierney’s claim that the technological elite are operating under the banner of the conquest
of nature may have initially seemed somewhat dated, we need only observe that, in certain cases, it has
simply morphed into its next phase. Which, to be clear, is not to say that this is the only motive at work
among those who produce the technology most of us consume. But there’s another angle that’s worth
considering, and with this we segue into something of the heart of Tierney’s claims.

In Tierney’s understanding, “the consumption of convenience in modernity reflects a certain contempt for the body and the limits it imposes.” This, in his view, lends to convenience a discernible as-
cetic quality. “[T]he fetishistic attitudes toward technology and the rampant consumption of ‘conve-
niences’ which characterize modernity are a form of asceticism,” Tierney explains.

This is an intriguing observation to revisit in light of the various accounts of the rather interesting
practices that occasionally emanate out of Silicon Valley. Examples that come readily to mind include
Jack Dorsey’s practice of intermittent fasting and his meditation retreats, Elon Musk’s sleep depriva-
tion, and Ray Kurzweil’s diet- and pill-driven effort to live long enough to witness the singular-
ity. Soylent obviously qualifies as a case in point, especially in light of its creator’s motivations for
concocting the meal-replacement drink. Ultimately, of course, the apotheosis of this strand of body-
denying asceticism lies in the aspirations of the posthumanists, so many of whom demonstrate a not
even thinly veiled contempt for our bodily limits and whose eschatological visions often entail a radical
re-configuration of our bodies or else a laying aside of them altogether. What this entails, of course, is a
radical reimagining of death itself as a limit to be overcome.

Tierney already anticipated as much in the early 90s. He hints early on at how the value of conve-
nience was becoming a leading factor on the production side of technology. His closing chapter is a re-
flection on this theorizing of death simply as a problem to be solved. In it, he cites the astronomer
Robert Jastrow’s 1981 work of futurology, The Enchanted Loom: Mind in the Universe. “At last the
human brain, ensconced in a computer, has been liberated from the weaknesses of mortal flesh,” Jas-
trow writes,

Connected to cameras, instruments, and engine controls, the brain sees, feels, and responds to
stimuli. It is in control of its own destiny. The machine is its body; it is the machine’s mind.
The union of mind and machine has created a new form of existence, as well designed for life in
the future as man is designed for life on the African savanna.
It seems to me that this must be the mature form of intelligent life in the Universe. Housed in
indestructible lattices of silicon, and no longer constrained in the span of its years by the life
and death cycle of a biological organism, such a kind of life could live forever.

Tierney includes an interesting footnote on this passage. He says that he first learned about it in an
article by David Lavery, who described being on a panel on “Computers, Robots, and You” alongside
what he called a “body-snatcher,” presumably someone who exhibited a disdain of the body and wel-
comed the day he would be rid of it. When Lavery expressed a reluctance to abandon his body, the
“body-snatcher” called him a “carbon chauvinist.” (Lavery’s article, paywalled, appeared in The Hud-
son Review in 1986.)

The point, of course, is not that these posthumanist fantasies—or (post-)Christian fan fiction, as I’ve put it elsewhere—will necessarily materialize; rather, it is that they are symptomatic of a set of values that do a lot of work in the conception and development of perfectly ordinary technology that many of us use every day.

It is worth asking ourselves to what degree we have ordered our use of technology around the value
of convenience. It is worth considering why exactly we value convenience or whether we have re-
ceived the benefits that we expected. It’s worth considering what assumptions about the body structure
our desire for convenience and whether or not we ought to reevaluate these assumptions. Would we not
do better to understand our limits as “inducements to formal elaboration and elegance, to fullness of re-
lationship and meaning,” to borrow a felicitous phrase from Wendell Berry, rather than as obstacles to
be overcome?

May 6, 2019
78. Nine Theses Regarding the Culture of Digital Me-
dia
1. The context of oral communication is one’s immediate audience characterized by precisely delin-
eated embodied presence. The context of print is a discursively constituted individual interiority. The
context of digital communication is disembodied immediacy characterized by distributed, algorithmi-
cally constituted presence.

2. Communication in oral societies is agonistically toned, pugilistic. Print fosters cool, detached ex-
pression. Digital media encourages performative, ironic combativeness.

3. Oral societies privilege honor, print privileges civility, electronic media spontaneity and insou-
ciance, digital media shamelessness.

4. In oral society, repetition is remembrance. In cultures of print and mass media, the repeatability of
content reigns. In digital culture, the repeatable form triumphs.

5. Oral media subsumes the self in the traditions of local communities. Print, later supercharged by
electronic media, lifts the self into the realm of romantic imagination and expressivist individualism.
Digital media ultimately collapses the experience of romantically inflected individuation, subsuming
the self into constantly generating and degenerating swarms of information.

6. In oral societies, freedom is conformity to communal standards. In the culture of print, to be free
is to choose for oneself. In digital culture, freedom is relief from the obligation to choose.

7. Pre-digital rhetoric aimed at persuasion and expression. Digital media ultimately undermines the
plausibility of persuasion and the desirability of expression.

8. Information scarcity encourages credulity. Information abundance encourages cynicism. Information superabundance encourages epistemic nihilism.

9. All information is now disinformation.


November 25, 2019
79. Orality and Literacy Revisited

“‘Tis all in pieces, all coherence gone,” lamented John Donne in 1611. The line is from An Anatomy
of the World, a poem written by Donne to mark the death of his patron’s daughter at the age of sixteen.
Immediately before and after this line, Donne alludes to ruptures in the social, political, philosophical,
religious, and scientific assumptions of his age. In short, Donne is registering the sense of bewilder-
ment and disorientation that marked the transition from the premodern to the modern world.

“And new philosophy calls all in doubt,
The element of fire is quite put out,
The sun is lost, and th’earth, and no man’s wit
Can well direct him where to look for it.
And freely men confess that this world’s spent,
When in the planets and the firmament
They seek so many new; they see that this
Is crumbled out again to his atomies.
‘Tis all in pieces, all coherence gone,
All just supply, and all relation;
Prince, subject, father, son, are things forgot,
For every man alone thinks he hath got
To be a phoenix, and that then can be
None of that kind, of which he is, but he.”

In the philosopher Stephen Toulmin’s compelling analysis of this transition, Cosmopolis: The
Hidden Agenda of Modernity, Donne is writing before the defining structures of modernity would fully
emerge to stabilize the social order. Donne’s was still a time of flux between the dissolution of an old
order and the emergence of a new one that would take its place–his time, we might say, was the time
of death throes and birth pangs.

As Alan Jacobs, among others, has noted, the story of the emergence of modernity is a story that
cannot be told without paying close attention to the technological background against which intellec-
tual, political, and religious developments unfolded. Those technologies that we tend to think of as me-
dia technologies–technologies of word, image, and sound or technologies of representation–have an es-
pecially important role to play in this story.

I mention all of this because we find ourselves in a position not unlike Donne’s: we too are caught
in a moment of instability. Traditional institutions and old assumptions that passed for common sense
are proving inadequate in the face of new challenges, but we are as of yet uncertain about what new in-
stitutions and guiding assumptions will take their place. Right now, Donne’s lament resonates with us:
“‘Tis all in pieces, all coherence gone.”

And for us no less than for those who witnessed the emergence of modernity, technological develop-
ments are inextricably bound up with the social and cultural turbulence we are experiencing, especially
new media technologies.

One useful way of thinking about these developments is provided by the work of the late Walter
Ong. Ong was a scholar of immense learning who is best known for Orality and Literacy: The
Technologizing of the Word, a study of the cultural consequences of writing. In Ong’s view, the advent
of the written word dramatically reconfigured our mental and social worlds. Primary oral cultures, cul-
tures that have no notion of writing at all, operated quite differently than literate cultures, cultures into
which writing has been introduced.

Likewise, Ong argued, the consciousness of individuals in literate cultures differs markedly from
those living in an oral culture. Writing in the late twentieth century, Ong also posited the emergence of
a state of secondary orality created by electronic media.

I’ve been pleasantly surprised to see Ong and his work invoked, directly and indirectly, in a handful
of pieces about media, politics, and the 2016 election.

In August 2015, Jeet Heer wrote a piece titled “Donald Trump, Epic Hero.” In it, he proposed the
following: “Trump’s rhetorical braggadocio and spite might seem crude, even juvenile. But his special
way with words has a noble ancestry: flyting, a recurring trope in epic poetry that eerily resembles the
real estate magnate’s habit of self-celebration and cruel mockery.” Heer, who wrote a 2004 essay on
Ong for Books and Culture, grounded his defense of this thesis on Ong’s work.

In a post for Nieman Reports, Danielle Allen does not cite Ong, but she does invoke the distinction
between oral and literate cultures. “Trump appears to have understood that the U.S. is transitioning
from a text-based to an oral culture,” Allen concluded after discussing her early frustration with the
lack of traditional policy position papers produced by the Trump campaign and its reliance on short
video clips.
In Technology Review, Hossein Derakhshan, relying on Neil Postman rather than Walter
Ong, argues that the image-based discourse that has, in his view, come to dominate the Internet has
contributed to the rise of post-truth politics and that we do well, for the sake of our democracy, to re-
turn to text-based discourse. “For one thing,” Derakhshan writes,

we need more text than videos in order to remain rational animals. Typography, as Postman de-
scribes, is in essence much more capable of communicating complex messages that provoke
thinking. This means we should write and read more, link more often, and watch less television
and fewer videos—and spend less time on Facebook, Instagram, and YouTube.

Writing for Bloomberg, Joe Weisenthal, like Heer, cites Ong’s Orality and Literacy to help explain
Donald Trump’s rhetorical style. Building on scholarship that looked to Homer’s epic poetry for the
residue of oral speech patterns, Ong identified various features of oral communication. Weisenthal
chose three to explain “why someone like Donald Trump would thrive in this new oral context”: oral
communication was “additive, not analytic,” relied on repetition, and was aggressively polemical.
Homer gives us swift-footed Achilles, man-killing Hector, and wise Odysseus; Trump gives us Little
Marco, Lyin’ Ted, and Crooked Hillary; his speeches were jammed with mind-numbing repetition; and
his style was clearly combative.

There’s something with which to quibble in each of these pieces, but raising these questions about
oral, print, and image-based discourse is helpful. As Ong and Postman recognized, innovations in media technology have far-reaching consequences: they enable new modes of social organization and new
modes of thought, they reconfigure the cognitive load of remembering, they alter the relation between
self and community, sometimes creating new communities and new understandings of the self, and
they generate new epistemic ecosystems.

As Postman puts it in Technopoly,

Surrounding every technology are institutions whose organization–not to mention their reason
for being–reflects the world-view promoted by the technology. Therefore, when an old technol-
ogy is assaulted by a new one, institutions are threatened. When institutions are threatened, a
culture finds itself in crisis. This is serious business, which is why we learn nothing when edu-
cators ask, Will students learn mathematics better by computers than by textbooks? Or when
businessmen ask, Through which medium can we sell more products? Or when preachers ask,
Can we reach more people through television than through radio? Or when politicians ask, How
effective are messages sent through different media? Such questions have an immediate, practi-
cal value to those who ask them, but they are diversionary. They direct our attention away from
the serious social, intellectual, and institutional crisis that new media foster.

Given the seriousness of what is at stake, then, I’ll turn to some of my quibbles as a way of moving
toward a better understanding of our situation. Most of my quibbles involve the need for some finer
distinctions. For example, in her piece, Allen suggests that we are moving back toward an oral culture.
But it is important to make Ong’s distinction: if this is the case, then it is to something like what Ong
called secondary orality. A primary oral culture has never known literacy, and that makes a world of
difference. However much we might revert to oral forms of communication, we cannot erase our
knowledge of or dependence upon text, and this realization must inform whatever it is we mean by
“oral culture.”

Moreover, I wonder whether it is best to characterize our move as one toward orality. What about
the visual, the image? Derakhshan, for example, frames his piece in this way. Contrasting the Internet
before and after a six-year stay in an Iranian prison, Derakhshan observed, “Facebook and Twitter had
replaced blogging and had made the Internet like TV: centralized and image-centered, with content em-
bedded in pictures, without links.” But Alan Jacobs took exception to this line of thinking. “Much of
the damage done to truth and charity done in this past election was done with text,” Jacobs notes,
adding that Donald Trump rarely tweets images. “[I]t’s not the predominance of image over text that’s
hurting us,” Jacobs then concludes, “It’s the use of platforms whose code architecture promotes nov-
elty, instantaneous response, and the quick dissemination of lies.”

My sense is that Jacobs is right, but Derakhshan is not wrong, which means more distinctions are in
order. After all, text is visual.

December 3, 2016
PART V

Time, Memory, and Nostalgia


80. Nostalgia: The Third Wave
“If the idea of progress has the curious effect of weakening the inclination to make intelligent provision
for the future, nostalgia, its ideological twin, undermines the ability to make intelligent use of the past.”
— Christopher Lasch, The True and Only Heaven

Memory and nostalgia – both of these words name ways of being with the past. Memory generally
names what we take to be a healthy ordering of our relationship to the past, while nostalgia names
whatever is generally taken to be a disordered form of relating to the past. I’ve long sensed, however,
that some of what is casually and dismissively labeled nostalgia may in fact belong under the category
of healthy remembering or, alternatively, that the nostalgic longings in question at the very least signal a deeper disorder of which nostalgia is but a symptom.

If we step back and look at some of our behaviors and certain trends that animate popular culture,
we might conclude that we are in the thrall of some sort of madness with regard to time and the past.
We obsessively document ourselves, visually and textually, creating massive archives stuffed with
memory, virtual theaters of memory for our own lives. Facebook’s evolving architecture reflects this
obsession as it now aims to make it easier to chronologically and (geo)spatially order and access our
stored memories. Simultaneously, vintage and retro remain the stylistic order of the day. Hyperrealistic
period dramas populate our entertainment options. T-shirt design embraces the logos and artifacts of
the pre-digital past. Social critics suggest that we are aesthetically stuck, like a vinyl record (vinyl being, incidentally, quite hip again) skipping incessantly.

What do we make of it? How do we understand all of these gestures, some of them feverish, toward
remembering and the past? How can we discern where memory ends and nostalgia begins? For that
matter, how do we even define nostalgia?

Christopher Lasch, who raised many of these same sorts of questions throughout his career, particu-
larly in The True and Only Heaven, provides some helpful insights and categories to help us along the
path to understanding. But before considering Lasch’s perspective, let me take just one more pass at
clarifying the main issues that interest me here.
Approaching nostalgia we need to distinguish between the semantic and the ontological dimensions
of the issue. The semantic questions revolve around the use of the word nostalgia; the ontological ques-
tions revolve around the status of the sensibilities to which the word is applied as well as their sources
and roots. It would seem that the semantic question has been more or less resolved so that the connota-
tions of the word are uniformly negative (more on this later). Nostalgia, in other words, is typically a
word of opprobrium. This being the case, then, the question becomes whether or not the word is justly
applied and such judgments require us to define what constitutes healthy and unhealthy, ordered and
disordered modes of relating to the past. Coming back to Lasch, we can see what help he offers in
thinking through these questions.

First, for Lasch, nostalgia carries entirely negative connotations. He employs the term to name dis-
ordered relationships to the past. So, in his view, nostalgia prevents us from making intelligent use of
the past because it is an ahistorical phenomenon.

“Strictly speaking, nostalgia does not entail the exercise of memory at all, since the past it idealizes stands outside time, frozen in unchanging perfection. Memory too may idealize the past, but not in or-
der to condemn the present. It draws hope and comfort from the past in order to enrich the present and
to face what comes with good cheer. It sees past, present, and future as continuous. It is less concerned
with loss than with our continuing indebtedness to a past the formative influence of which lives on in
our patterns of speech, our gestures, our standards of honor, our expectations, our basic disposition to-
ward the world around us.”

In this paragraph, by contrasting it with memory, Lasch lays out the contours of nostalgia as he un-
derstands it:

a. Nostalgia is primarily interested in condemning the present.

b. It fails to offer hope or otherwise enrich the present.

c. It sunders the continuity of past, present, and future.

d. It is focused on loss.

e. It fails to recognize the ongoing significance of the past in the present.


Lasch goes on to offer a genealogy of the various sources of contemporary nostalgia beginning with
the historicizing of the pastoral sensibility and proceeding through the Romantic idealization of child-
hood, America’s romanticization of the West and later the small town, and finally nostalgia’s coming
into self-awareness as such in the 1920s.

The recurring theme in these earlier iterations of the nostalgic sensibility is the manner in which,
with the exception of childhood, an initially spatial displacement — of the countryside for example —
becomes temporalized. So, for instance, the long-standing contrast between town and country that animated pastoral poetry since the classical age became, in the eighteenth and nineteenth centuries with the
advent of industrialization, a matter not of here and there, but then and now. The First World War had a
similar effect on the trope of childhood’s lost innocence by marking off the whole of history before the
war as a time of relative innocence compared with the generalized loss of innocence that characterized
the time after the war.

The First World War, in Lasch’s telling of nostalgia’s history, also gave a specific form to a ten-
dency that first appears in the early nineteenth century: ”In an ‘age of change,’ as John Stuart Mill
called it in his 1831 essay ‘The Spirit of the Age,’ the ‘idea of comparing one’s own age with former
ages’ had for the first time become an inescapable mental habit; Mill referred to it as the ‘dominant
idea’ of the nineteenth century.”

It would seem to me that this tendency is not entirely novel in Mill’s day; after all, Renaissance cul-
ture made much of its contrast with the so-called Dark Ages and its recovery of classical civilization.
But it seems safe to credit Mill’s estimation that in his day it becomes for the first time “an inescapable
mental habit.” This would seem to correspond roughly with the emergence of historical consciousness
and the discipline of history in its modern form — which is to say as a “science” of the past rather than
as a branch of moral philosophy.

Following the First World War, this comparative impulse took on a specific form focused on the
generation as the preferred unit of analysis. First, Lasch writes, “For those who lived through the cata-
clysm of the First World War, disillusionment was a collective experience — not just a function of the
passage from youth to adulthood but of historical events that made the prewar world appear innocent
and remote.” He then notes that it was no surprise that “the concept of the generation first began to in-
fluence historical and sociological consciousness in the same decade, the twenties, in which people be-
gan to speak so widely of nostalgia.”

It was in the 1920s, according to Lasch, that nostalgia became aware of itself. In other words, it was not until the 1920s that the semantic problem we noted earlier appears, since it was not until then that the term nostalgia was applied widely to the varieties of responses to loss that had long been expressed
in literature and the visual arts. Prior to the 1920s, nostalgia was mostly a medical term linked to the
psychosomatic symptoms associated with severe, literal homesickness.

According to Lasch, by the mid-twentieth century, “History had come to be seen as a succession of
decades and also as a succession of generations, each replacing the last at approximately ten year inter-
vals. This way of thinking about the past had the effect of reducing history to fluctuations in public
taste, to a progression of cultural fashions in which the daring advances achieved by one generation be-
come the accepted norms of the next, only to be discarded in their turn by a new set of styles.”

This seems just about right. You can test it on yourself. First, consider our habit of talking about generations: baby boomers, X, Y, millennials. Then, think back through the twentieth century. How is your memory of the period organized? I’m willing to bet that yours, like mine, is neatly divided up into decades even when the decades are little more than arbitrary with regard to historical development.
And, further reinforcing Lasch’s point, what is the first decade for which you have a ready label and set
of associations? I’m again willing to bet it is the 1920s, the “Roaring Twenties” of flappers, jazz, F.
Scott Fitzgerald, and the stock market crash. Thirties: depression. Forties: World War II. Fifties: Ike
and Beaver. Sixties: sex, drugs, and rock ‘n’ roll. Seventies: Nixon and disco. Eighties: big hair and
yuppies. And so on.

But this manner of thinking evidences the chief problem Lasch identifies with nostalgia. It has the effect of hermetically sealing off the past from the present. It represents the past as a series of discrete eras that, once superseded inevitably and on schedule by the next, cease to affect the present. Moreover, “Once nostalgia became conscious of itself, the term rapidly entered the vocabulary of political abuse.” For a society still officially allied to an ideology of progress, the charge of nostalgia “had attained the status of a political offense of the first order.” And here again is the semantic problem. When a word becomes a lazy term of abuse, it is in danger of swallowing up all sorts of realities that for whatever reason do not sit well with the person doing the labeling.

So as Lasch begins to draw his social history of nostalgia to a close with the 1960s, “denunciations of nostalgia had become a ritual, performed, like all rituals, with a minimum of critical reflection.” In his 1965 The Paranoid Style in American Politics, for example, Richard Hofstadter “referred repeatedly to the ‘nostalgia’ of the American right and of the populist tradition from which it supposedly derived.” And yet, the “nostalgia wave of the seventies” was still ahead:

“Time, Newsweek, US News and World Report, Saturday Review, Cosmopolitan, Good
Housekeeping, Ladies’ Home Journal, and the New Yorker all published reports on the ‘great
nostalgia kick.’ ‘How much nostalgia can America take?’ asked Time in 1971. The British jour-
nalist Michael Wood, citing the revival of the popular music of the fifties, the commercial ap-
peal of movies about World War II, and the saturation of the airwaves with historical dramas —
‘Upstairs, Downstairs,’ ‘The Pallisers,’ ‘The Forsyte Saga’ — declared, ‘The disease, if it is a
disease, has suddenly become universal.’”

Note for just a moment how easy it would be to update the British journalist’s comments to fit contemporary circumstances. Just add Hipster Revivalism and replace the television dramas with Mad Men, Boardwalk Empire, Downton Abbey, and Pan Am. More on this in just a moment, but first back
to Lasch.

Lasch concludes his analysis of memory and nostalgia by elaborating on the idea that “Nostalgia
evokes the past only to bury it alive.” From this perspective, nostalgia and the ideology of Progress
have a great deal in common. They both evince “an eagerness to proclaim the death of the past and to
deny history’s hold over the present … Both find it difficult to believe that history still haunts our en-
lightened, disillusioned maturity.”

He goes on to add that the “nostalgic attitude” and belief in progress also share “a tendency to represent the past as static and unchanging, in contrast to the dynamism of modern life … Notwithstanding
its insistence on unending change, the idea of progress makes rapid social change appear to be uniquely
a feature of modern life. (The resulting dislocations are then cited as an explanation of modern nostal-
gia.)”

Regarding that last parenthetical statement, I plead guilty as charged. I’m not sure, however, that
this is entirely off the mark, particularly when we distinguish between semantic (we might even say
rhetorical) matters and the underlying phenomenon. Lasch himself points to the connections between
industrialization, urbanization, and the First World War and the history of the nostalgic sensibility. The
psychic consequences of these phenomena were not illusory. What Lasch is concerned about, however,
is the manner in which these psychic consequences were ultimately interpreted and filtered through the
language of nostalgia.

The danger, in his view, is that we fail to reckon with the persistence of history. By way of contrast,
Lasch offers us Anthony Brandt’s comments on historical memory and nostalgia. Lasch summarizes
Brandt’s reflections on Henry Ford’s Greenfield Village, colonial Williamsburg, and Disney’s “Main
Street” this way: “the passion for ‘historical authenticity’ seeks to recapture everything except the one
thing that matters, the influence of the past on the present.” Real knowledge of the past, in Brandt’s
view, “requires something more than knowing how people used to make candles or what kind of bed
they slept in. It requires a sense of the persistence of the past: the manifold ways in which it penetrates
our lives.”

This is Lasch’s chief concern and he is certainly right about it. If we define nostalgia as a dehistori-
cizing impulse that undermines our ability to think about the past and its enduring consequences, then it
is certainly to be resisted. Lasch is advocating a way of being with the past that takes it seriously by re-
fusing to romanticize it and by recognizing its continuing influence on the present. We are, in other
words, not to live in the past, but we are to live with it. It is a position that is neatly summed up by
Faulkner’s line, “The past is never dead. It’s not even past.”

But how does Lasch’s analysis help us understand the contemporary burst of nostalgia? For one
thing, it reminds us that such bursts have a history. Ours is not necessarily novel. However, recognizing
that a variety of sensibilities and responses are grouped, sometimes indiscriminately, under the heading
of nostalgia, it is worth asking how our present fixations depart from their antecedents.

I’m going to venture this schema in response. There appear to have been two prior large-scale waves of sensibilities that have subsequently been identified as nostalgic, the first running throughout the middle part of the nineteenth century and another materializing subsequent to the First World War. These waves, at the expense of mixing metaphors, also yielded subsequent ripples of nostalgia that shared their essential orientation. The former wave and its ripples appear to have been generated by spatial or
physical displacements related to industrialization and urbanization. The latter appears to have been
generated by a temporal dislocation occasioned by the First World War that created a psychic chrono-
logical rupture.

Contemporary nostalgia is fixated on the materiality of the past. Take Mad Men, for example: it’s not the time period that we are nostalgic for; the series, after all, gives us a rather bleak view of the era.
No, it is for the stuff of the era that we are nostalgic — the fedoras, the martini glasses, the furniture,
the typewriters, the ordinary accouterments of daily life. Consider all of those lingering close-ups on
the objects of the past that are characteristic of the early seasons. Remember too how often such shows
are praised for their “attention to detail” which is to say for the way they capture the material condi-
tions of the era.

The same holds true for the hipster revivalism linked above. It is focused on the equipment of the
past, not its values or its places. This is why Pottery Barn offers a faux rotary phone. It’s why vinyl
records are now on sale again at big-box retailers like Best Buy and Target. Our nostalgia is neither spatial nor temporal; it is tactile. And it is a response — conscious or not, ill-advised as it may be — to
digital/virtual culture.

Taking one last cue from Lasch, nostalgia for the material risks missing how materiality persists in
its significance in much the same way that nostalgia for the past misses how the past persists into the
present. The danger is that we begin to think about life in terms of immaterial abstractions like “the
cloud” and “Information” or misleading dichotomies such as “online/offline” while ignoring underly-
ing, persistently material realities. It also threatens to distract us from the persistence (and significance)
of technologies that do not fit neatly in the digital category. This same material nostalgia, which is blind to the materiality of the present, is what leads us to myopically and misleadingly focus analysis of contemporary events on abstractions such as the “Twitter Revolution” or “social media campaigns.” It is
not that these are insignificant, it is rather that the rhetoric obscures the ongoing significance of mate-
rial realities. It fashions a false dichotomy between the virtual present and the material past, and our
thinking will be all the worse for it.

December 28, 2011


81. Keeping Time, Keeping Silent

What shape does a well ordered life take and how does one achieve such a thing? I certainly don’t
have a story of personal triumph on this score to share with you, but I’m fairly certain that if I did it
would focus on the re-ordering of a disordered relationship to time. Time, in fact, is the theme of a
commencement address delivered by Paul Ford to the Interaction Design MFA program at the School
of Visual Arts in New York City. The speech, titled “10 Timeframes,” addresses the changing frames by which we measure and understand our experience of time, from the farmer whose frames are the changing seasons to the computer scientist who works with nanoseconds.

Commencement addresses are difficult to do well or in any kind of original fashion, but Ford managed both, and my excerpts here will not convey either the feel or the insight of the whole. That said, here are the fragments of Ford’s speech I want to bring into conversation with the second piece, which I’ll get to in just a minute. After giving a few illustrative examples, Ford reminds his listeners of the
following:

“So it’s only a few hundred years ago that people started to care about centuries, and then more
recently, decades. And of course hours and minutes. And in the last 40 years we’ve got 86 tril-
lion nanoseconds a day, and a whole industry trying to make every one of them count.”

He introduced the nanosecond by referring back to a book published in the early 1980s on the history of the computer, The Soul of a New Machine. After quoting one engineer describing the significance of nanoseconds, Ford then tells his audience, “One of the engineers in the book burned out and quit and he left a note that read: ‘I am going to a commune in Vermont and will deal with no unit of time shorter than a season.’”

Ford, who is speaking to creative types who will design digital tools that other creative types will
use to do all sorts of work, concludes: “And I want you to ask yourself when you make things, when
you prototype interactions, am I thinking about my own clock, or the user’s? Am I going to help some-
one make order in his or her life, or am I going to send that person to a commune in Vermont?”

Perhaps it would not be so bad to be in a commune in Vermont, but Ford clearly understood the engineer’s decision to be the product of exhaustion — exhaustion that resulted from continuous work within a frame of time that led to a disordered experience of life.

Many of the discontents and disorders associated with modernity, discontents and disorders that are
exacerbated by the advent of digital culture, revolve around time. Ford reminds us that our experience
of time has a history, a history intimately tied to our machines for measuring time as Lewis Mumford
observed many years ago. Mumford’s observations about the mechanical clock, whose origins lie in
medieval monasticism, segue nicely (and somewhat paradoxically) to the second piece.

“How Silence Works”, a transcript of Jeremy Mesiano-Crookston’s email interviews with Trappist
monks in the Benedictine Order living in Quebec, also dwells on the shape of the well ordered life. As
the title suggests, the interviews focus on the place of silence in the monastic life. Contrary to popular
belief, the Trappists take no “vow of silence,” although silence is an integral part of their communal
life. As with Ford’s piece, I encourage you to read the whole; it is brimming with timely wisdom and insight.

Out of the many passages that are worth noting in the email exchanges, I’ll draw your attention to
two. The first ties in nicely with Ford’s concerns. Mesiano-Crookston asks, “Out of curiosity, do the
monks in the cloister watch the daily news? Are you interested in cultural changes in the world?” In re-
sponse one monk wrote,

“I wonder if a lot of the cultural complexity you refer to [in a previous question] seems interest-
ing to people because they have lost so much consciousness of [their] ancestors and the long
view afforded by a knowledge of history. If you don’t know history, everything today can seem
quite novel. But in the larger context of the story of human history, much of what fascinates, to-
day, is quite redundant.”

The practices of the monastery, including the practice of silence, a practice that has the collateral ef-
fect of slowing down time, yield a frame of time (to borrow Ford’s terminology) quite different from
the frame of time most of us work with in our day to day life. Saying as much is probably stating the
obvious. But without suggesting that we all take up the monastic life, it would seem that with smaller
gestures we might come closer to an ordering of time that was, simply put, better for us. Perhaps taking
a cue from the monastic life, we might learn to cultivate small rituals that establish a more humane
rhythm for our daily life. Such small gestures are certainly within the realm of the possible for most of
us. We might find that such small gestures — micro-practices to borrow sociologist Pierre Bourdieu’s
wording — may have a considerable impact on the shape of our experience.

Asked whether they believed the practice of silence was beneficial for all people, one of the monks replied,

“I would say the cultivation of silence is indispensable to being human. People sometimes talk
as if they were “looking for silence,” as if silence had gone away or they had misplaced it some-
where. But it is hardly something they could have misplaced. Silence is the infinite horizon
against which is set every word they have ever spoken, and they can’t find it? Not to worry—it
will find them.”

Perhaps. It is hard to quibble with a point so eloquently put; but while silence may indeed find us, I
think that we ought also to do a little searching for it ourselves. At the very least, we should be pre-
pared to receive it when it does find us. Perhaps then, in silence, we will find ourselves better able to
recalibrate our frame of time and achieve something more closely resembling a well-ordered life.

June 8, 2012


82. From Memory Scarcity to Memory Abundance

The most famous section in arguably the most famous book about photography, Roland Barthes’
Camera Lucida, dwells on a photograph of Barthes’ recently deceased mother taken in a winter garden
when she was a little girl. On this picture, Barthes hung his meditative reflections on death and photog-
raphy. The image evoked both the “that-has-been” reality of the subject, and the haunting “this-will-
die” realization. That one photograph of his mother is also the only image discussed by Barthes that
was not reproduced in Camera Lucida. It was too personal. It conveyed something true about his
mother, but only to him.

But what if Barthes had not a few, but hundreds or even thousands of images of his mother?

I’ve long thought that what was most consequential about social media was their status as prosthetic
memories. A site like Facebook, for example, is a massive archive of externalized memories preserved
as texts and images. For this reason, it seemed to me, it would be unbearably hard to abandon such
sites, particularly for those who had come of age with and through them. These archives bore too pre-
cious a record of the past to be simply deleted with a few clicks. I have made this argument myself.

But now I’ve realized that I had not fully appreciated the most important dynamic at play. I was op-
erating with assumptions that were formed during an age of relative memory scarcity, but digital pho-
tography and sites like Facebook have brought us to an age of memory abundance. The paradoxical
consequence of this development will be the progressive devaluing of such memories and severing of
the past’s hold on the present. Gigabytes and terabytes of digital memories will not make us care more
about those memories, they will make us care less.

We’ve seen the pattern before. Oral societies, which had few and relatively inefficient technologies of remembrance at their disposal, lived to remember. Their cultural lives were devoted to ritual and liturgical acts of communal remembering. The introduction of writing, a comparably wondrous technology of remembrance, gradually released the individual from the burdens of cultural remembrance. Memory that could be outsourced or offloaded could also be effectively forgotten by the individual, who was free to remember their own history. And it has been to this task that subsequent developments in the technology of remembrance have been put. The emergence of cheap paper coupled with
rising rates of literacy gave us the diary and boxes of letters. Photography and film were also put to the
task of documenting our lives. But until recently, these technologies were subject to significant constraints. The recording devices were bulky and cumbersome, and they were limited in capacity by the number of exposures on a roll of film and the length of ribbon in a tape. There were also important practical
constraints on storage and access. Digital technologies have burst through these constraints and they
have not yet reached their potential.

Now we carry relatively unobtrusive devices of practically unlimited recording capacity, and these
are easily linked to archives that are likewise virtually unlimited in their capacity to store and organize
these memories. If we cast our vision into the not-too-distant nor fantastical future, we can anticipate
individuals engaging with the world through devices (e.g., Google Glass) that will both augment the
physical world by layering it with information and generate a near continuous audio-visual record of
our experience.

Compared to these present and soon-to-be technologies, the 35mm camera that was at my disposal through the ’80s and ’90s seems primitive. On a spectrum indicating the capacity to document and archive memories, I was then closer to my pre-modern predecessors than to the generation that will succeed me.

Roland Barthes’ near-mystical veneration of his mother’s photograph, touching as it appears to those of us who lived in the age of memory scarcity, will seem quixotic and quaint to those who have known only memory abundance. Barthes will seem to them as those medievals who venerated the physical book do to us. They will be as indifferent to the photograph, and the past it encodes, as we are to the cheap paperback.

It may seem, as it did to me, that social media revived the significance of the past by reconnecting
us with friends we would have mostly forgotten and reconstituting habits of social remembering. I’d
even expressed concerns that social media might allow the past to overwhelm the present, rendering recollection rather than suppression traumatic. But this has only been an effect of novelty upon that transitional generation who had lived without the technology and upon whom it appeared in medias res.
For those who have known only the affordances of memory abundance, there will be no reconnection
with long forgotten classmates or nostalgic reminiscences around a rare photograph of their youth cap-
turing some trivial, unremembered moment. It will all be documented and archived, but it will mean
not a thing.

It will be Barthes’ contemporary, Andy Warhol, who will appear as one of us. In his biography of
Warhol, Victor Bockris writes,

“Indeed, Andy’s desire to record everything around him had become a mania. As John Perrault, the
art critic, wrote in a profile of Warhol in Vogue: ‘His portable tape recorder, housed in a black brief-
case, is his latest self-protection device. The microphone is pointed at anyone who approaches, turning
the situation into a theater work. He records hours of tape every day but just files the reels away and
never listens to them.’”

Andy Warhol’s performance art will be our ordinary experience, and it is those last few words that we should note: “and never listens to them.”

Reconsider Plato’s infamous critique of writing. Critics charge Plato with shortsightedness because
he failed to see just how much writing would in fact allow us to remember. But from a different per-
spective, Plato was right. The efficient and durable externalization of memory would make us personally indifferent to remembrance. As the external archive grows, our personal involvement with the memory it stores shrinks in proportion.

Give me a few precious photographs, a few minutes of grainy film and I will treasure them and hold
them dear. Give me one terabyte of images and films and I will care not at all.

In the future, we will float in the present untethered from the past and propelled listlessly onward by
the perpetual stream of documentary detritus we will emit.

January 25, 2013


83. If Nostalgia Is A Desire, What Does It Long For?

When we’re not using nostalgia as a term of derision, we use it to name a twinge in the gut that
somehow blends melancholic longing with happy recollection. When this experience becomes acute, it
may best be described, pardon the unseemly melodrama, as an ache in the soul, but one that is not with-
out its consolations.

But first, about that derisiveness. Nostalgia is a fighting word. For instance, one of the easiest ways
to dismiss a claim is to label it nostalgic. This is why, in fact, I’m thinking of adding “knee-jerk re-
course to nostalgia as a term of derision” to the list of Borg Complex symptoms. No doubt, much of
what gets labeled nostalgic should be dismissed, although perhaps not simply by attaching a label to it.
At best, then, we allow ourselves to indulge in the distractions of nostalgia–a little Mad Men, a lit-
tle Downton–so long as we maintain the appropriate degree of ironic detachment.

I remain curious, though, about the feeling itself and what evokes it. It is one thing to critically ana-
lyze the commodification of nostalgia and its deployment as a marketing tool, for example, and it is an-
other thing to contemplate that moment when we experience the feeling we label nostalgia, particularly
in its more acute manifestations, and seek its sources.

Part of what puzzles me about such experiences is how difficult it is, for me at least, to define what
exactly one is feeling. But before I go any further, it might be useful for you to recall for yourself a
time when you experienced nostalgia. I don’t assume that everyone has had such an experience, but, if
you have, try to hold that moment in mind and think with me about how we might understand it. Here
are a few lightly structured thoughts toward that end.

The thing that arouses nostalgia is a synecdoche for the past and a signal of our dislocation; it is a reminder of our finitude, for we cannot hoard time.

But is nostalgia for the past? We know that nostalgia was originally a word that described intense
homesickness, but today we think of it more as a longing for a time rather than a place. So which is it?
Is it really or always “pastness” that evokes the feeling we call nostalgia? I wonder, too, if these are not
always present together. I think of Wordsworth’s “Lines Written a Few Miles above Tintern Abbey,”
which begins with

“Five years have past; five summers, with the length
Of five long winters!”

and continues by describing the place around the abbey. It would be hard to abstract, in more than a
theoretical way, the aspects of time and place from Wordsworth’s nostalgic meditations.

Maybe it is really neither place nor time that is the essence of the experience of nostalgia. Maybe
place and time have been proxies for some other object of desire whose presence radiates from them
and through our memory to us.

Nostalgia, then, is just another word for desire, or, perhaps better, it is one of the shapes desire takes.
That’s what gives it its ache-in-the-soul quality. But what is it a longing for, if not for a time or for a
place?

Wordsworth gives us a hint when he notes that he was

“changed, no doubt, from what I was when first
I came among these hills.”

I am scattered through time, as are you. One way of thinking about the divided, de-centered, shat-
tered self is to see it as a temporal phenomenon, as a function of our time-stretched nature. I have been;
I am; I will be. The self may feel most coherent in the present moment, on the leading edge of time, but
it is scattered throughout the past.

We can imagine it as a boat and its wake. Seen from above, the wake is widest, and most diffuse, at
its farthest from the boat. We are that boat making its way inexorably onward on the sea of time, but
we are also the wake with the detritus of the self strewn upon it.

Nostalgia, from this perspective, is a desire for the self, for self-possession, for all of the selves we
have been. The desire is kindled when our present self tunes in to traces of our past self. That tuning
might be occasioned by an object, an image, a melody, a smell, and, yes, a place. But it is not any of
those things which we desire, it is that part of our self that existed with them. The objects and experi-
ences that evoke nostalgia are synecdoches not for a time but for the self.

Nostalgia, then, is ultimately a desire for wholeness. It is a desire to hoard, not time, but rather the
fullness of our self, scattered across time; a desire to lose nothing of our selves, to hold, at once, all that
we have been. That, of course, is an impossible object of desire, which is why nostalgia vocalized is a
sigh.

But Wordsworth once more:

“That time is past,
And all its aching joys are now no more,
And all its dizzy raptures. Not for this
Faint I, nor mourn nor murmur; other gifts
Have followed; for such loss, I would believe,
Abundant recompense.”

January 31, 2014


84. Google Photos and the Ideal of Automated Documentation

I’ve been thinking, recently, about the past and how we remember it. That this year marks the 20th
anniversary of my high school graduation accounts for some of my reflective reminiscing. Flipping
through my senior yearbook, I was surprised by what I didn’t remember. Seemingly memorable events
alluded to by friends in their notes and more than one of the items I myself listed as “Best Memories”
have altogether faded into oblivion. “I will never forget when …” is an apparently rash vow to make.

But my mind has not been entirely washed by Lethe’s waters. Memories, assorted and varied, do
persist. Many of these are sustained and summoned by stuff, much of it useless, that I’ve saved for
what we derisively call sentimental reasons. My wife and I are now in the business of unsentimentally
trashing as much of this stuff as possible to make room for our first child. But it can be hard parting
with the detritus of our lives because it is often the only tenuous link joining who we were to who we
now are. It feels as if you risk losing a part of yourself forever if you were to throw away that last deli-
cate link.

“Life without memory,” Luis Buñuel tells us, “is no life at all.” “Our memory,” he adds, “is our co-
herence, our reason, our feeling, even our action. Without it, we are nothing.” Perhaps this accounts for
why tech criticism was born in a debate about memory. In the Phaedrus, Plato’s Socrates tells a cau-
tionary tale about the invention of writing in which writing is framed as a technology that undermines
the mind’s power to remember. What we can write down, we will no longer know for ourselves–or so
Socrates worried. He was, of course, right. But, as we all know, this was an incomplete assessment of
writing. Writing did weaken memory in the way Plato feared, but it did much else besides. It would not
be the last time critics contemplated the effects of a new technology on memory.

I’ve not written nearly as much about memory as I once did, but it continues to be an area of deep
interest. That interest was recently renewed not only by personal circumstances but also by the rollout
of Google Photos, a new photo storage app with cutting-edge sorting and searching capabilities. Ac-
cording to Steven Levy, Google hopes that it will be received as a “visual equivalent to Gmail.” On the
surface, this is just another digital tool designed to store and manipulate data. But the data in question
is, in this case, intimately tied up with our experience and how we remember it. It is yet another tool
designed to store and manipulate memory.

When Levy asked Bradley Horowitz, the Google executive in charge of Photos, what problem Google Photos solves, Horowitz replied,

“We have a proliferation of devices and storage and bandwidth, to the point where every single
moment of our life can be saved and recorded. But you don’t get a second life with which to cu-
rate, review, and appreciate the first life. You almost need a second vacation to go through the
pictures of the safari on your first vacation. That’s the problem we’re trying to fix — to auto-
mate the process so that users can be in the moment. We also want to bring all of the power of
computer vision and machine learning to improve those photos, create derivative works, to
make suggestions…to really be your assistant.”

It shouldn’t be too surprising that the solution to the problem of pervasive documentation enabled
by technology is a new technology that allows you to continue documenting with even greater aban-
don. Like so many technological fixes to technological problems, it’s just a way of doubling down on
the problem. Nor is it surprising that he also suggested this would help users “be in the moment” without a hint of irony.

But here is the most important part of the whole interview, emphasis mine:

“[…] so part of Google photos is to create a safe space for your photos and remove any stigma
associated with saving everything. For instance, I use my phone to take pictures of receipts,
and pictures of signs that I want to remember and things like that. These can potentially pollute
my photo stream. We make it so that things like that recede into the background, so there’s no
cognitive burden to actually saving everything.”

Replace saving with remembering and the potential significance of a tool like Google Photos be-
comes easier to apprehend. Horowitz is here confirming that users will need to upload their photos to
Google’s Cloud if they want to take advantage of Google Photos’ most impressive features. He antici-
pates that there will be questions about privacy and security, hence the mention of safety. But the really
important issue here is this business about saving everything.

I’m not entirely sure what to make of the stigma Horowitz is talking about, but the cognitive burden
of “saving everything” is presumably the burden of sorting and searching. How do you find the one
picture you’re looking for when you’ve saved thousands of pictures across a variety of platforms and
drives? How do you begin to organize all of these pictures in any kind of meaningful way? Enter
Google Photos and its uncanny ability to identify faces and group pictures into three basic categories–
People, Places, and Things–as well as a variety of sub-categories such as “food,” “beach,” or “cars.”
Now you don’t need that second life to curate your photos. Google does it for you. Now we may docu-
ment our lives to our heart’s content without a second thought about whether or not we’ll ever go back
to curate our unwieldy hoard of images.

I’ve argued elsewhere that we’ve entered an age of memory abundance, and the abundance of mem-
ories makes us indifferent to them. When memory is scarce, we treasure it and care deeply about pre-
serving it. When we generate a surfeit of memory, our ability to care about it diminishes proportion-
ately. We can no longer relate to how Roland Barthes treasured his mother’s photograph; we are more
like Andy Warhol, obsessively recording all of his interactions and never once listening to the record-
ings. Plato was, after all, even closer to the mark than we realized. New technologies of memory recon-
figure the affections as well as the intellect. But is it possible that Google Photos will prove this judgment premature? Has Google figured out how we may have our memory cake and eat it too?

I think not, and there’s a historical precedent that will explain why.

Ivan Illich, in his brilliant study of medieval reading and the evolution of the book, In the Vineyard
of the Text, noted how emerging textual technologies reconfigured how readers related to what they
read. It is a complex, multifaceted argument and I won’t do justice to it here, but the heart of it is
summed up in the title of Illich’s closing chapter, “From Book to Text.” After explaining what Illich
meant by that formulation, I’m going to suggest that we consider an analogous development: from
photograph to image.

Like photography, writing is, as Plato understood, a mnemonic technology. The book or codex is
only one form the technology has taken, but it is arguably the most important form owing to its storage
capacity and portability. Contrast the book to, for instance, a carved stone tablet or a scroll and you’ll
immediately recognize the brilliance of the design. But the matter of sorting and searching remained a
significant problem until the twelfth century. It was then that new features appeared to improve the book’s accessibility and user-friendliness, among them chapter titles, pagination, and the alphabetized index. Now one could access particular passages without having to read the whole work or, more to the point, memorize the passages or their location in the book (illuminated manuscripts were designed to aid with the latter).

My word choice in describing the evolution of the book above was, of course, calculated to make us
see the book as a technology and also to make certain parallels to the case of digital photography more
obvious. But what was the end result of all of this innovation? What did Illich mean by saying that the
book became a text?

Borrowing a phrase Katherine Hayles deployed to describe a much later development, I’d say that
Illich is getting at one example of how information lost its body. In other words, prior to these develop-
ments it was harder to imagine the text of a book as a free-floating reality that could be easily lifted and
presented in a different format. The ideas, if you will, and the material that conveyed them–the mes-
sage and medium–were intimately bound together; one could hardly imagine the two existing indepen-
dently. This had everything to do with the embodied dimensions of the reading experience and the
scarcity of books. Because there was no easy way to dip in and out of a book to look for a particular
fragment and because one would likely encounter but one copy of a particular work, the work was ex-
perienced as a whole that lived within the particular pages of the book one held in hand.

The book had then been read reverentially as a window on the world; it yielded what Illich termed
monastic reading. The text was later, after the technical innovations of the twelfth century, read as a
window on the mind of the author; it yielded scholastic reading. We might also characterize these as
devotional reading and academic reading, respectively. Illich summed it up this way:

“The text could now be seen as something distinct from the book. It was an object that could be
visualized even with closed eyes [….] The page lost the quality of soil in which words are
rooted. The new text was a figment on the face of the book that lifted off into autonomous exis-
tence [….] Only its shadow appeared on the page of this or that concrete book. As a result, the
book was no longer the window onto nature or god; it was no longer the transparent optical de-
vice through which a reader gains access to creatures or the transcendent.”

Illich had, a few pages earlier, put the matter more evocatively: “Modern reading, especially of the
academic and professional type, is an activity performed by commuters or tourists; it is no longer that
of pedestrians and pilgrims.”

I recount Illich’s argument because it illuminates the changes we are witnessing with regard to photography. Illich demonstrated two relevant principles. First, that small technical developments can have
significant and lasting consequences for the experience and meaning of media. The move from analog
to digital photography should naturally be granted priority of place, but subsequent developments such
as those in face recognition software and automated categorization should not be underestimated. Second, that improvements in what we might today call retrieval and accessibility can generate an order
of abstraction and detachment from the concrete embodiment of media. And this matters because the
concrete embodiment, the book as opposed to the text, yields kinds and degrees of engagement that are
unique to it.

Let me try to put the matter more directly and simultaneously apply it to the case of photography.
Improving accessibility meant that readers could approach the physical book as the mere repository of
mental constructs, which could be poached and gleaned at whim. Consequently, the book was some-
thing to be used to gain access to the text, which now appeared for the first time as an abstract reality; it
ceased to be itself a unique and precious window on the world and its affective power was compro-
mised.

Now, just as the book yielded to the text, so the photograph yields to the image. Imagine a 19th-century woman gazing lovingly at a photograph of her son. The woman does not conceive of the photograph as one instantiation of the image of her son. Today, however, we who hardly ever hold photographs anymore can hardly help thinking in terms of images, which may be displayed on any of a number of different platforms, not to mention manipulated at whim. The image is an order of abstrac-
tion removed from the photograph and it would be hard to imagine someone treasuring it in the same
way that we might treasure an old photograph. Perhaps a thought experiment will drive this home. Try
to imagine the emotional distance between the act of tearing up a photograph and deleting an image.

Now let’s come back to the problem Google Photos is intended to solve. Will automated sorting and
categorization along with the ability to search succeed in making our documentation more meaningful?
Moreover, will it overcome the problems associated with memory abundance? Doubtful. Instead, the
tools will facilitate further abstraction and detachment. They are designed to encourage the production
of even more documentary data and to further diminish our involvement in their production and stor-
age. Consequently, we will continue to care less not more about particular images.

Of course, this hardly means the tools are useless or that images are meaningless. I’m certain that
face recognition software, for instance, can and will be put to all sorts of uses, benign and otherwise, and that the reams of data users will feed Google Photos will only help to improve and refine the soft-
ware. And it is also true that images can be made use of in ways that photographs never could. But per-
haps that is the point. A photograph we might cherish; we tend to make use of images. Unlike the use-
less stuff around which my memories accumulate and that I struggle to throw away, images are all use-
value and we don’t think twice about deleting them when they have no use.

Finally, Google’s answer to the problem of documentation, that it takes us out of the moment as it
were, is to encourage such pervasive and continual documentation that it is no longer experienced as a
stepping out of the moment at all. The goal appears to be a state of continual passive documentation in which the distinction between experience and documentation blurs until the two are indistinguishable. The problem is not so much solved as it is altogether transcended. To experience life will
be to document it. In so doing we are generating a second life, a phantom life that abides in the Cloud.

And perhaps we may, without stretching the bounds of plausibility too far, reconsider that rather
ethereal, heavenly metaphor–the Cloud. As we generate this phantom life, this double of ourselves con-
stituted by data, are we thereby hoping, half-consciously, to evade or at least cope with the unremitting
passage of time and, ultimately, our mortality?

June 8, 2015


85. Digital Media and Our Experience of Time

Early in the life of this site, which is to say about eight years ago, I commented briefly on a story
about five scientists who embarked on a rafting trip down the San Juan River in southern Utah in an ef-
fort to understand how “heavy use of digital devices” was affecting us. The trip was the brainchild of a
professor of psychology at the University of Utah, David Strayer, and Matt Richtel wrote about it for the Times.

I remember this story chiefly because it introduced me to what Strayer called “third day syndrome,”
the tendency, after about three days of being “unplugged,” to find oneself more at ease, more calm,
more focused, and more rested. Over the past few years, Strayer’s observation has been reinforced
by my own experience and by similar unsolicited accounts I’ve heard from several others.

As I noted back in 2010, the rafting trip led one of the scientists to wonder out loud: “If we can find
out that people are walking around fatigued and not realizing their cognitive potential … What can we
do to get us back to our full potential?”

I’m sure the idea that we are walking around fatigued will strike most as entirely plausible. That
we’re not realizing our full cognitive potential, well, yes, that resonates pretty well, too. But, that’s not
what mostly concerns me at the moment.

What mostly concerns me has more to do with what I’d call the internalized pace at which we expe-
rience the world. I’m not sure that’s the most elegant formulation for what I’m trying to get at. I have
in mind something like our inner experience of time, but that’s not exactly right either. It’s more like
the speed at which we feel ourselves moving across the temporal dimension.

Perhaps the best metaphor I can think of is that of walking on those long motorized walkways we
might find at an airport, for example. If you’re not in a hurry, maybe you stand while the walkway car-
ries you along. If you are in a hurry or don’t like standing, you walk, briskly perhaps. Then you step off
at the end of the walkway and find yourself awkwardly hurtling forward for a few steps before you re-
sume a more standard gait.

So that’s our experience of time, no? Within ordinary time, modernity has built the walkways, and we jump on and hurry ourselves along. Then, for whatever reason, we get dumped out into ordinary time, and our first experience is to find ourselves disoriented and somehow still feeling ourselves propelled forward by some internalized temporal inertia.

I feel this most pronouncedly when I take my two toddlers out to the park, usually in the late afternoons after a day of work. I delight in the time; it is a joy and not at all a chore. Yet I frequently find
something inside me rushing me along. I’ve entered into ordinary time along with two others who’ve
never known any other kind of time, but I’ve been dumped into it after running along the walkway all
day, day in and day out, for weeks and months and years. On the worst days, it takes a significant effort
of the will to overcome the inner temporal inertia and that only for a few moments at a time.

This state of affairs is not entirely novel. As Hartmut Rosa notes in Social Acceleration, Georg Simmel in 1897 had already remarked on how “one often hears about the ‘pace of life,’ that it varies in dif-
ferent historical epochs, in the regions of the contemporary world, even in the same country and among
individuals of the same social circle.”

Then I recall, too, Patrick Leigh Fermor’s experience boarding at a monastery in the late 1950s. Fer-
mor relates the experience in A Time to Keep Silence:

“The most remarkable preliminary symptoms were the variations of my need of sleep. After initial
spells of insomnia, nightmare and falling asleep by day, I found that my capacity for sleep was becom-
ing more and more remarkable: till the hours I spent in or on my bed vastly outnumbered the hours I
spent awake; and my sleep was so profound that I might have been under the influence of some hyp-
notic drug. For two days, meals and the offices in the church — Mass, Vespers and Compline — were
almost my only lucid moments. Then began an extraordinary transformation: this extreme lassitude
dwindled to nothing; night shrank to five hours of light, dreamless and perfect sleep, followed by
awakenings full of energy and limpid freshness. The explanation is simple enough: the desire for talk,
movements and nervous expression that I had transported from Paris found, in this silent place, no re-
sponse or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it lan-
guished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of
tiredness, which must be the common property of all our contemporaries, broke loose and swamped ev-
erything. No demands, once I had emerged from that flood of sleep, were made upon my nervous en-
ergy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the
hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had
slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but
appeared to have lost their dragonish validity.”

I read this, in part, as the description of someone who was re-entering ordinary time, someone
whose own internal pacing was being resynchronized with ordinary time.

So I don’t think digital media has created this phenomenon, but I do think it has been a powerful ac-
celerating agent. How one experiences time is a complex matter, and I’m not Henri Bergson, but one way
to account for the experience of time that I’m trying to describe here is to consider the frequency with
which we encounter certain kinds of stimuli, the sort of stimuli that assault our attention and redirect it,
the kinds of stimuli digital media is most adept at delivering. I suppose the normal English word for
what I’ve just awkwardly described is distraction. Having become accustomed to a certain high-frequency rate of distraction, our inner temporal drive runs at a commensurate speed. The temporal inertia
I’ve been trying to describe, then, may also be characterized as a withdrawal symptom once we’re de-
prived of the stimuli or their rate dramatically decreases. The total effect is both cognitive and affec-
tive: the product of distraction is distractedness but also agitation and anxiety.

Along these same lines, then, we might say that our experience of time is also structured by desire.
Of course, we knew this from the time we were children: the more eagerly you await something, the slower time seems to pass. Deprived of stimuli, we crave them and so grow impatient. We find ourselves in a “fre-
netic standstill,” to borrow a phrase from Paul Virilio. In this state, we are unable to attend to others or
to the world as we should, so the temporal disorder is revealed to have moral as well as cognitive and
affective dimensions.

It’s worth mentioning, too, how digital tools theoretically allow us to plan our days and months in fine-grained detail but how they have also allowed us to forgo planning. Constant accessibility means that
we don’t have to structure our days or weeks ahead of time. We can fiddle with plans right up to the
last second, and frequently do so. The fact that the speed of commerce and communication has dramat-
ically increased also means that I have less reason to plan ahead and I am more likely to make just-in-
time purchases and just-in-time arrangements. Consequently, our experience of time amounts to the ex-
perience of frequently repeated mini-dashes to beat a deadline, and there are so many deadlines.

“From the perspective of everyday life in modern societies,” Hartmut Rosa writes, “as everyone
knows from their own experience, time appears as fundamentally paradoxical insofar as it is saved in
ever greater quantities through the ever more refined deployment of modern technology and organiza-
tional planning in almost all everyday practices, while it does not lose its scarce character at all.” Rosa
cites two researchers who conclude, “American Society is starving,” not for food, of course, “but for
the ultimate scarcity of the postmodern world, time.” “Starving for time,” they add, “does not result in death, but rather, as ancient Athenian philosophers observed, in never beginning to live.”

December 9, 2018


86. Don’t Romanticize the Present

Steven Pinker and Jason Hickel have recently engaged in a back-and-forth about whether or not
global poverty is decreasing. The first salvo was an essay by Hickel in the Guardian targeting claims
made by Bill Gates. Pinker responded here, and Hickel posted his rejoinder at his site.

I’ll let you dive in to the debate if you’re so inclined. The exchange is of interest to me, in part, be-
cause evaluations of modern technology are often intertwined with this larger debate about the relative
merits of what, for brevity’s sake, we may simply call modernity (although, of course, it’s
complicated).

I’m especially interested in a rhetorical move that is often employed in these kinds of debates: it
amounts to the charge of romanticizing the past.

So, for example, Pinker claims, “Hickel’s picture of the past is a romantic fairy tale, devoid of cita-
tions or evidence.” I’ll note in passing Hickel’s response, summed up in this line: “All of this violence,
and much more, gets elided in your narrative and repackaged as a happy story of progress. And you say
I’m the one possessed of romantic fairy tales.” Hickel, in my view, gets the better of Pinker on this
point.

In any case, the trope is recurring and, as I see it, tiresome. I wrote about it quite early in the life of
this blog when I explained that I did not, in fact, wish to be a medieval peasant.

More recently, Matt Stoller tweeted, “When I criticize big tech monopolies the bad faith response is
often a variant of ‘so you want to go back to horses and buggies?!?’” Stoller encountered some variant
of this line so often that he was searching for a simple term by which to refer to it. It’s a Borg
Complex symptom, as far as I’m concerned.

At a forum about technology and human flourishing I recently attended, the moderator, a fine
scholar whose work I admire, explicitly cautioned us in his opening statements against romanticizing
the past.

It would take no time at all to find similar examples, especially if you expand “romanticizing the
past” to include the equally common charge of reactionary nostalgia. Both betray a palpable anxious-
ness about upholding the superiority of the present.

I understand the impulse, I really do. I think it was from Alan Jacobs that I first learned about the
poet W. H. Auden’s distinction between those whose tendency is to look longingly back at some better
age in the past and those who look hopefully toward some ideal future: Arcadians and Utopians re-
spectively, he called them. Auden took these to be matters of temperament. If so, then I would read-
ily admit to being temperamentally Arcadian. For that reason, I think I well understand the temptation
and try to be on guard against it.

That said, stern warnings against romanticizing the past sometimes reveal a susceptibility to another
temptation: romanticizing the present.

This is not altogether surprising. To be modern is to define oneself by one’s location in time, specif-
ically by being on the leading edge of time. Novelty becomes a raison d’être.

As the historian Michael Gillespie has put it,

… to think of oneself as modern is to define one’s being in terms of time. This is remarkable. In
previous ages and other places, people have defined themselves in terms of their land or place,
their race or ethnic group, their traditions or their gods, but not explicitly in terms of time …
To be modern means to be “new,” to be an unprecedented event in the flow of time, a first be-
ginning, something different than anything that has come before, a novel way of being in the
world, ultimately not even a form of being but a form of becoming.

Within this cultural logic, the possibility that something, anything, was better in the past is not only a matter of error; it may be experienced as a threat to one’s moral compass and identity. Over time, per-
haps principally through the nineteenth century, progress displaced providence and, consequently, opti-
mism displaced hope. The older theological categories were simply secularized. Capital-P Progress,
then, despite its many critics, still does a lot of work within our intellectual and moral frameworks.

Whatever its sources, the knee-jerk charge of romanticizing the past or of succumbing to reactionary
nostalgia often amounts to a refusal to think about technology or take responsibility for it.

As the late Paul Virilio once put it, “I believe that you must appreciate technology just like art. You
wouldn’t tell an art connoisseur that he can’t prefer abstractionism to expressionism. To love is to
choose. And today, we’re losing this. Love has become an obligation.”

We are not obligated to love technology. This is so not only because love, in this instance, ought not
to be an obligation but also because there is no such thing as technology. By this I mean simply
that technology is a category of dubious utility. If we allow it to stand as an umbrella term for every-
thing from modern dentistry to the apparatus of ubiquitous surveillance, then we are forced to either ac-
cept modern technology in toto or reject it in toto. We are thus discouraged from thoughtful discrimina-
tion and responsible judgment. It is within this frame that the charge of romanticizing the past as a rejoinder to any criticism of technology operates. And it is this frame that we must reject. Modern technology
is not good by virtue of its being modern. Past configurations of the techno-social milieu are not bad by
virtue of their being past.

We should romanticize neither the past nor the present, nor the future for that matter. We should
think critically about how we develop, adopt, and implement technology, so far as it is in our power to
do so. Such thinking stands only to benefit from an engagement with the past as, if nothing else, a point
of reference. The point, however, is not a retrieval of the past but a better ordering of the present and
future.

February 6, 2019


87. Time, Self, and Remembering Online

Very early on in the life of this blog, memory became a recurring theme. I write less frequently
about memory these days, but I’m no less convinced that among the most important consequences of
digital media we must count its relationship to memory. After all, as the filmmaker Luis Buñuel once put it, “Our memory is our coherence, our reason, our feeling, even our action. Without it, we are nothing.”

“What anthropologists distinguish as ‘cultures,’” Ivan Illich has written, “the historian of mental
spaces might distinguish as different ‘memories.’” This strikes me as being basically right, and, as Il-
lich knew, different memories arise from different mnemonic technologies.

It seems tricky to quantify this sort of thing or provide precise descriptions of causal mechanisms, etc., but I’d lay it out like this:
1. We are what we remember.
2. What we remember is a function of how we remember.
3. How we remember, in turn, is a function of our technological milieu.
4. So, a technological restructuring of how we remember is also a restructuring of consciousness, of the self.

So, that said, I recently stumbled upon this tweet from Aaron Lewis: “what if old tweets were dis-
played with the profile pic you had at the time of posting. a way to differentiate between past and
present selves.”

This tweet was provocative in the best sense: it called forth thinking.

I’ll start by noting that there seems to be an assumption here that doesn’t quite hold in practice: that
people are frequently changing profile pics in a way that straightforwardly mirrors how they are
changing over time, or even that their profile picture is an image of their face. But the practical feasibil-
ity is beside the point for my purposes. Two things interested me: the problem to which Lewis’s specu-
lative proposal purports to be a solution, and, consequently, what it tells us about older forms of re-
membering that were not digitally mediated.

So, what is the problem to which Lewis’s proposal is a solution? It seems to be a problem arising
from an overabundance of memory, on the one hand, and, on the other, from how that memory relates
to our experience of identity. In a follow-up tweet, Lewis added, “it’s disorienting when one of my old
tweets resurfaces, wearing the digital mask i’m using here in 2019.”

I’m going to set aside for now an obviously and integrally related matter: to what degree should our
present self be held responsible for the utterances of an older iteration of the self that resurface through
the operations of our new memory machines? This is a serious moral question that gets to the heart of
our emerging regimes of digital justice, and one that is hotly debated every time that an old tweet or
photograph is dug up and used against someone in the present. This is what I’ve taken to referring to as
the weaponization of memory (this means both that we can imagine a host of morally distinguishable uses and that environments are restructured whether or not the weapon is deployed). In short, I think
the matter is complicated, and I have no cookie-cutter solution. It seems to me that society will need to
develop something like a tradition of casuistry to adjudicate such matters equitably and that we still
have a long way to go.

Lewis’s observations suggest that our social media platforms, whatever else they may be, are
volatile archives of the self. They are archives, and I use the term loosely, because they store slices of
the self. Of course, we should acknowledge the fact that the platforms invite performances of the self,
which requires us to think more closely about what exactly they are storing: uncomplicated representa-
tions of the self as it is at that point? representations of the self as it wants to be perceived? tokens of
the self as it wants to be perceived which are thus implicitly reminders of the self we were via its aspi-
rations? Etc.

They are volatile in that they are active, social archives whose operations trouble the relationship be-
tween memory and the self by more widely distributing agency over the memories that constitute the
self. Our agency over our self-presentation is distributed among the algorithms which structure the plat-
forms and other users on the platform who have access to our memories and whose intentions toward
us will vary wildly.

What I’m reading into Lewis’s proposal, then, is an impulse, not at all unwarranted, to reassert a measure of agency over the operations of digitally mediated memory. The need to impose this order in turn tells us something about how digitally mediated memory differs from older forms of remembering.

For one thing, the scale and structure of pre-digital memory did not ordinarily generate the same ex-
perience of a loss of agency over memory and its relation to the self. We did not have access to the vol-
ume of externalized memories we now do, and, more importantly, neither did anyone else. With
Lewis’s specific proposal in mind, I’d say that the ratio of remembering and forgetting, and thus of
continuity and discontinuity of the self, was differently calibrated, too. To put it another way, what I’m
suggesting is that we remembered and forgot in a manner that accorded with a relatively stable experi-
ence of the evolving self. As Derrida once observed, “They tell, and here is the enigma, that those con-
sulting the oracle of Trophonios in Boeotia found there two springs and were supposed to drink from
each, from the spring of memory and from the spring of forgetting.”

And, even more specifically to Lewis’s point, I’d say that his proposal makes explicit the ordinary
and humane rhythms of change and continuity, remembering and forgetting implicit in the co-evolution
of self and body over time. “When I was a child,” the Apostle wrote, “I spoke like a child, I thought
like a child, I reasoned like a child.” And, we may add, I looked like a child. Thus the appropriateness
of my childishness was evident in my appearance. Yes, that was me as I was, but that is no longer me
as I now am, and this critical difference was implicit in the evolution of my physical appearance, which
signaled as much to all who saw me. No such signals are available to the self as it exists online.

Indeed, we might say that the self that exists online is in one important respect a very poor represen-
tation of the self precisely because of its tendency toward completeness of memory. Digital media, par-
ticularly social media platforms, condense the rich narrative of the self’s evolution over time into a
chaotic and perpetual moment. We might think of it as the self stripped of its story.* In any case, suf-
fice it to note that we find ourselves once more needing to compensate, with little success it would ap-
pear, for the absence of the body and the meaning it carries.

Lastly, thinking back to the obviously self-serving push in the last decade by social media compa-
nies like Facebook for users to maintain one online identity as a matter of integrity and authenticity, we
may now see that demand as paradoxical at best. The algorithmically constituted identity that the platforms build upon their archives of the self and impose upon us is a self we never have been nor ever will be. More likely, it is a self we will find ourselves often chasing and sometimes fleeing.

June 20, 2019


PART VI

Being Online
88. The Treadmill Always Wins

I have suggested that it’s good to occasionally ask ourselves, “Why do we read?” That question was
prompted in part by the unhealthy tendencies that I find myself struggling to resist in the context of on-
line reading. These tendencies are best summed up by the phrase “reading to have read,” a phrase I bor-
rowed from Alan Jacobs’ excellent The Pleasures of Reading in an Age of Distraction.

As it turns out, Jacobs revisited his comments on this score in a post discussing a relatively new speed-reading app called Spritz. The app sells itself as “reading reimagined,” rather brutally so if you ask me. It flashes words at rates of up to 600 words per minute using its “patent-pending Redicle” technology. In any case, you can follow the links to learn more about it if you’re so inclined. Near
the close of his post, after citing some trenchant observations by Julie Sedivy, Jacobs observed,

“It’s all too easy to imagine people who are taken with Spritz making decisions about what to
read based on what’s amenable to ‘spritzing.’ But that’s inevitable as long as [they are think-
ing] of reading as something to have done, something to get through.”

Sedivy and Jacobs both are pointing out one of the more insidious temptations technology poses, the
temptation to fit ourselves to the tool. In this case, the temptation is to prefer the kind of reading that
lends itself to the questionable efficiencies afforded by Spritz. As Sedivy puts it, these are “texts with
simpler sentences, sentences in which the complexity of information is evenly distributed, sentences
that avoid unexpected twists or turns, and sentences which explicitly lay out their intended meanings,
without requiring readers to mentally fill in the blanks through inference.”

In his post, Jacobs also linked to Ian Bogost’s insightful take on Spritz, which was titled, interestingly enough, “Reading to Have Read.” Bogost questions the supposedly scientific claims made
for Spritz by its creators. More importantly, though, he takes Spritz to be symptomatic of a larger prob-
lem:
“In today’s attention economy, reading materials (we call it ‘content’ now) have ceased to be
created and disseminated for understanding. Instead, they exist first (and primarily) for mere en-
counter. This condition doesn’t necessarily signal the degradation of reading; it also arises from
the surplus of content we are invited and even expected to read. But it’s a Sisyphean task. We
can no longer reasonably hope to read all our emails, let alone our friends’ Facebook updates or
tweets or blog posts, let alone the hundreds of daily articles and listicles and quizzes and the
like. Longreads may offer stories that are best enjoyed away from your desk, but what good are
such moments when the #longreads queue is so full? Like books bought to be shelved, articles
are saved for a later that never comes.”

Exactly. And a little further on, Bogost adds,

“Spritzing is reading to get it over with. It is perhaps no accident that Spritze means injection in
German. Like a medical procedure, reading has become an encumbrance that is as necessary as
it is undesirable. “Oh God,” we think. “Another office email thread. Another timely tumblr. An-
other Atlantic article.” We want to read them—really to read them, to incorporate them—but
the collective weight of so much content goes straight to the thighs and guts and asses of our
souls. It’s too much to bear. Who wouldn’t want it to course right through, to pass unencum-
bered through eyeballs and neurons just to make way for the deluge behind it?”

That paragraph eloquently articulates, better than I could, the concerns that motivated my ear-
lier post. I have nothing to add to what Sedivy, Jacobs, and Bogost have already said about Spritz ex-
cept to mention that I’m surprised no one, to my knowledge, has alluded to Bob Brown’s Readies. In
his 1930 manifesto, Brown declared, “The written word hasn’t kept up with the age,” and he developed
a mechanical reading device to meet that challenge. Brown’s reading machine, which you can read
about here, was envisioned as an escape from the page, not unlike Spritz. But as Abigail Thomas puts
it, “It is evident that through the materiality of the page acting as the imagined machine, that the reader
becomes the machine themselves.” Of course, I wouldn’t know of Brown were it not that one of my
grad school profs, Craig Saper, was deeply interested in Brown’s work.

That said, I do have one more thing to add. Spritz illustrates yet another temptation posed by modern
technologies. We might call it the challenge of the treadmill. When I was in my early twenties and still
in my more athletic phase, I took a stress test on a treadmill. The cardiologist told me to keep pace as
long as I could, but, he added, “the treadmill always wins.” Of course, being modestly competitive and
not a little prideful, I took that as a challenge. I ran hard on that machine, but, no surprise, the treadmill
won.

So much of our response to the quickening pace induced by modern technologies is to quicken our
own stride in response or to find other technologies that will help us do things more quickly, more effi-
ciently. But again, the treadmill always wins. Maybe the answer to the challenge of the treadmill is
simply to get off the thing.

But that decision doesn’t come easily for us. We have a hard time acknowledging our limitations. In
fact, so much of the rhetoric surrounding technology in the western tradition involves precisely the
promise of transcending our bodily limitations. Exhibit A, of course, is the transhumanist project.

In response, however, I submit the more humane vision of the agrarian and poet Wendell Berry.
In “Faustian Economics,” Berry, speaking of the “fantasy of human limitlessness” that animates so
much of our political and economic life, reminds us that we are “coming under pressure to understand
ourselves as limited creatures in a limited world.” But this, he adds, should not be cause for despair:

“[O]ur human and earthly limits, properly understood, are not confinements but rather induce-
ments to formal elaboration and elegance, to fullness of relationship and meaning. Perhaps our
most serious cultural loss in recent centuries is the knowledge that some things, though limited,
are inexhaustible. For example, an ecosystem, even that of a working forest or farm, so long as
it remains ecologically intact, is inexhaustible. A small place, as I know from my own experi-
ence, can provide opportunities of work and learning, and a fund of beauty, solace, and plea-
sure — in addition to its difficulties — that cannot be exhausted in a lifetime or in generations.”

I would suggest that Berry’s wisdom is just as applicable to the realm of reading and the intellectual
life as it is to our economic life.

May 8, 2014
89. Unplugged

I know that reflection pieces on technology sabbaths, digital detoxes, unplugging, and disconnecting
are a dime a dozen. Slightly less common are pieces critical of the disconnectionists, as Nathan
Jurgenson has called them, but these aren’t hard to come by either. Others, like Evgeny Morozov, have
contributed more nuanced evaluations. Not only has the topic been widely covered, but if you’re reading this, I would guess that you’re likely to be more or less sympathetic to these practices, even if you har-
bor some reservations about how they are sometimes presented and implemented. All of that to say,
I’ve hesitated to add yet another piece on the experience of disconnection, especially since I’d be
(mostly) preaching to the choir. But … I’m going to try your patience and offer just a few thoughts for
your consideration.

First, I think the week worked well because its purpose wasn’t to disconnect from the Internet or
digital devices; being disconnected was simply a consequence of where I happened to be. I suspect that
when one explicitly sets out to disconnect, the psychology of the experience works against you. You’re
disconnecting in order to be disconnected because you assume or hope it will yield some beneficial
consequences. The potential problem with this scenario is that “being connected” is still framing, and
to some degree defining, your experience. When you’re disconnected, you’re likely to be thinking
about your experience in terms of not being connected. Call it the disconnection paradox.

This might mean, for example, that you’re overly aware of what you’re missing out on, thus dis-
tracted from what you hoped to achieve by disconnecting. It might also lead to framing your experience
negatively in terms of what you didn’t do–which isn’t ultimately very helpful–rather than positively in
terms of what you accomplished. In the worst cases, it might also lead to little more than self-congratu-
latory or self-loathing status updates.

In my recent case, I didn’t set out to be disconnected. In fact, I was rather disappointed that I’d be
unable to continue writing about some of the themes I’d been recently addressing. So while I was car-
rying on with my disconnected week, I didn’t think at all about being connected or disconnected; it was
simply a matter of fact. And, upon reflection, I think this worked in my favor.

This observation does raise a practical problem, however. How can one disconnect, if so de-
sired, while avoiding the disconnection paradox? Two things come to mind. As Morozov pointed out in
his piece on the practice of disconnection, there’s little point in disconnecting if it amounts to coming
up for breath before plunging back into the digital flood. Ultimately, then, the idea is to so order our
digital practices that enforced periods of disconnection are unnecessary.

But what if, for whatever reason, this is not a realistic goal? At this point we run up against the lim-
its of individual actions and need to think about how to effect structural and institutional changes.
Alongside those long-term projects, I’d suggest that making the practice of disconnection regular and
habitual will eventually overcome the disconnection paradox.

Second consideration, obvious though it may be: it matters what you do with the time that you gain.
For my part, I was more physically active than I would be during the course of an ordinary week, much
more so. I walked, often; I swam; and I did a good bit of paddling too. Not all of this activity was plea-
surable as it transpired. Some of it was exhausting. I was often tired and sore. But I welcomed all of it
because it relieved the accumulated stress and tension that I tend to carry around on my back, shoul-
ders, neck, and jaw, much of it a product of sitting in front of a computer or with a book for ex-
tended periods of time. It was a good week because at the end of it, my body felt as good as it had in a
long time, even if it was a bit battered and ragged.

The feeling reminded me of what Patrick Leigh Fermor wrote about his stay in a monastery in the 1950s, a kind of modernity detox. Initially, he was agitated; then he was overwhelmed
for a few days by the desire to sleep. Finally, he emerged “full of energy and limpid freshness.” Here is
how he described the experience in A Time to Keep Silence:

“The explanation is simple enough: the desire for talk, movements and nervous expression that
I had transported from Paris found, in this silent place, no response or foil, evoked no single
echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for
lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which
must be the common property of all our contemporaries, broke loose and swamped everything.
No demands, once I had emerged from that flood of sleep, were made upon my nervous energy:
there were no automatic drains, such as conversation at meals, small talk, catching trains, or the
hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxi-
ety had slid away into some distant limbo and not only failed to emerge in the small hours as
tormentors but appeared to have lost their dragonish validity.”

“[T]he tremendous accumulation of tiredness, which must be the common property of all our con-
temporaries”–indeed, and to that we might add the tremendous accumulation of stress and anxiety. The
Internet, always-on connectivity, and digital devices have not of themselves caused the tiredness,
stress, and anxiety, but they haven’t helped either. In certain cases they’ve aggravated the problem.
And, I’d suggest, they have done so regardless of what, specifically, we have been doing. Rather, the
aggravation is in part a function of how our bodies engage with these tools. Whether we spend a day in
front of a computer perusing cat videos, playing Minecraft, writing a research paper, or preparing fi-
nancial reports makes little difference to our bodies. It is in each case a sedentary day, and these are, as
we all know, less than ideal for our bodies. And, because so much of our well-being depends on our
bodies, the consequences extend to the whole of our being.

I know that countless critics since the dawn of industrial society have lamented the loss of regular
physical activity, particularly activity that unfolded in “nature.” Long before the Internet, such com-
plaints were raised about the factory and the cubicle. It is also true that many of these calls for robust
physical activity have been laden with misguided assumptions about the nature of masculinity and
worse. But none of this changes the stubborn, intractable fact that we are embodied creatures and the
concrete physicality of our nature is subject to certain limits and thrives under certain conditions and
not others.

One further point about my experience: some of it was moderately risky. Not extreme sports-risky
or risky bordering on foolish, you understand. More like “watch where you step, there might be a rattlesnake” risky (I avoided one by two feet or so) or “take care not to slip off the narrow trail, that’s a 300-foot drop” risky (I took no such falls, happily). I’m not sure what I can claim for all of this, but I would
be tempted to make a Merleau-Ponty-esque argument about the sort of engagement with our surround-
ings that navigating risk requires of us. I’d modestly suggest, on a strictly anecdotal basis, that there is
something mentally and physically salubrious about safely navigating the experience of risk. While
we’re at it, plug in the “troubles” (read: sometimes risky, often demanding activities) that philosopher Albert Borgmann encourages us to accept in principle.

Of course, it must immediately be added that this is a first-world problem par excellence. Around
the globe there are people who have no choice but to constantly navigate all sorts of risks to their well-
being, and not of the moderate variety either. It must then seem perverse to suggest that some of us
might need to occasionally elect to encounter risk, but only carefully so. Indeed it must, but such might nonetheless be the case. Certainly, it is also true that all of us are at risk every day when walking a city
street, or driving a car, or flying in a plane, and so on. My only rejoinder is again to lean on my experi-
ence and suggest that the sort of physical activity I engaged in had the unexpected effect of calling
on and honing aspects of my body and mind that are not ordinarily called into service by my typi-
cal day-to-day experience, and this was a good thing. The accustomed risks we thoughtlessly take, crossing a street, say, do not call forth the same mental and bodily resources precisely because they are a routinized part of our experience.

A final thought. Advocating disconnection sometimes raises the charges of elitism—Sherry Turkle
strolling down Cape Cod beaches and what not. I more or less get where this is coming from, I think.
Disconnection is often construed as a luxury experience. Who gets to placidly stroll the beaches of
Cape Cod anyway? And, indeed, it is an unfortunate feature of modernity’s unfolding that what we
eliminate from our lives, often to make room for one technology or another, we then end up compen-
sating for with another technology because we suddenly realize that what we eliminated might have
been useful and health-giving.

It was Neil Postman, I believe, who observed that having eliminated walking by the adoption of the
automobile and the design of our public spaces, we then invented a machine on which we could simu-
late walking in order to maintain a minimal level of fitness. Postman’s chief focus, if I remember the
passage correctly, was to point out the prima facie absurdity of the case, but I would add an economic
consideration: in this pattern of technological displacement and replacement, the replacement is always
a commodity. No one previously paid to walk, but the treadmill and the gym membership are bought
and sold. So it is now with disconnection, it is often packaged as a commodified experience that must
be bought, and the costs of disconnection (monetary and otherwise) are for many too high.

But it seems to me that the answer is not to dismiss the practice of disconnecting as such or efforts
to engage more robustly with the wider world. If these practices are, even in small measure, steps to-
ward human flourishing, then our task is to figure out how we can make them as widely available as
possible.

July 28, 2014


90. Vows of Digital Poverty

Maybe deleting Facebook is something akin to taking monastic vows in medieval society.

Stay with me.

Here’s the background: In the aftermath of the latest spate of revelations confirming Facebook’s sta-
tus as a blight on our society and plague upon our personal lives, many have finally concluded that it is
time to #DeleteFacebook.

This seems like an obviously smart move, but some have pushed back against the impulse to aban-
don the platform.

[Self-disclosure: I have a Facebook account. I used to have a Facebook page for this blog. I’ve re-
cently deleted the page for the blog because it struck me as being inconsistent with my work here. I
have maintained my personal profile for some of the usual reasons: I thought I might do some good
there and for the sake of maintaining a few relationships that would likely dissolve altogether were it
not for the platform. The former now appears to be rather naive and the latter not quite worth the cost.
I’ve begun to gradually delete what I’ve posted over the years. I may leave a skeleton profile in place
for a while for the sake of those weak ties; we’ll see. Update: I’ve deleted the profile.]

Here is how April Glaser presents the case against #DeleteFacebook:

I understand this reaction, but it’s also an unfair one: Deleting Facebook is a privilege. The
company has become so good at the many things it does that for lots of people, leaving the ser-
vice would be a self-harming act. And they deserve better from it, too. Which is why the initial
answer to Facebook’s failings shouldn’t be to flee Facebook. We need to demand a better Face-
book.

Siva Vaidhyanathan, no friend of Facebook, makes a similarly compelling case in the New York
Times:

So go ahead and quit Facebook if it makes you feel calmer or more productive. Please realize,
though, that you might be offloading problems onto those who may have less opportunity to
protect privacy and dignity and are more vulnerable to threats to democracy. If the people who
care the most about privacy, accountability and civil discourse evacuate Facebook in disgust,
the entire platform becomes even less informed and diverse. Deactivation is the opposite of ac-
tivism.

From a slightly different but overlapping perspective, a post at Librarian Shipwreck likewise com-
plicates the impulse to delete Facebook. The post draws on Lewis Mumford to frame our use of Face-
book as the acceptance of a technological bribe: “Deleting Facebook is an excellent way of refusing a
bribe. Yet it must be remembered that the bribe has been successful because it has offered people
things which seemed enticing, and the bribe sustains itself because people have now become reliant on
it.” The post also cites Neil Postman, a vociferous critic of television, to the effect that suggesting
Americans do away with television—as Jerry Mander, for example, argued—amounts to making “no
suggestion at all.”

I confess that much of the foregoing analysis seems more or less right to me, yet something does not
quite sit well.

None of these writers argues that there is a moral duty to remain on the platform (Vaidhyanathan comes closest to this position), but they all imply that the best path may be to remain and fight for a bet-
ter Facebook, if not for yourself then for the sake of those, in the United States and abroad, who do not,
for a variety of reasons, have the luxury of abandoning Facebook.

But what exactly is the relevant temporal horizon of moral action? If deleting Facebook has some
unfortunate short term consequences, is it not still the better option in the long run? Can’t I find other
ways to support the class of people who might be hurt in the short run by a mass exodus of people from
Facebook? If I don’t think that any good or even better version of Facebook is possible, isn’t it best to
abandon the platform and encourage others to do so as well?

For my part, I find it increasingly useful, if disturbingly so, to refer to Ursula Le Guin’s “The Ones
Who Walk Away From Omelas” as a way of framing our situation and the choices that confront us.
Sometimes the only right response to the moral compromises that are foisted on us is to walk away, re-
gardless of what it might cost us. However … it is also true that when we consider walking away, by
deleting Facebook in this case, we should also consider who we leave behind.

So here is a suggestion. What if we imagine the decision to delete Facebook, or to abandon social
media altogether, as something like a vocation, a calling not unlike the calling to the monastic life.

The monastic life was not for everyone. For one thing, executed faithfully it required a great deal of
sacrifice. For another, society could not function if everyone decided to take vows and join a religious
order. Rather, those who took vows lived a life of self-denial for their own sake and for the sake of the
social order. Because the ordinary man and woman, the ruler, the soldier, the artisan, etc. could not take
vows and so devote themselves to the religious ideal, those who could take vows prayed on their be-
half. They also, for a time, nurtured the intellectual life and preserved the materials upon which it de-
pended. And they embodied an ideal in their communities knowing that this ideal could not be realized
or pursued by most people. But their embodiment of the ideal benefited the whole. They withdrew
from society in order to do their part for society.

We can usefully frame the choice to delete Facebook or abstain from social media or any other act
of tech refusal by (admittedly loose) analogy to the monastic life. It is not for everyone. The choice can
be costly. It will require self-denial and discipline. Not everyone is in a position to make such a choice
even if they desired it. And maybe, under present circumstances, it would not even be altogether desir-
able for most people to make that choice. But it is good for all of us that some people do make that
choice.

In this way we can create a legitimate space for refusal, while acknowledging that such a choice is
only one way of fighting the good fight.

Those who choose to walk away will, if nothing else, be a sign to us; they will embody an ideal that
many may desire but few will be able to pursue. They will preserve an alternative way of being in the
world with its attendant memories and practices. And by doing so they will play their part in working
for the good of society.

Of course, as it was with medieval monasticism, not all who pursue such a choice will do so in good
faith, but those who do will be marked chiefly by humility.

March 27, 2018
91. Audience Overload

Information overload is a concept that has long been used to describe the experience of digital me-
dia, although the term and the problem itself predate the digital age.

In a 2011 blog post, Nicholas Carr distinguished between two kinds of information overload: situa-
tional overload and ambient overload.

“Situational overload is the needle-in-the-haystack problem: You need a particular piece of in-
formation – in order to answer a question of one sort or another – and that piece of information
is buried in a bunch of other pieces of information. The challenge is to pinpoint the required in-
formation, to extract the needle from the haystack, and to do it as quickly as possible. Filters
have always been pretty effective at solving the problem of situational overload …

“Situational overload is not the problem. When we complain about information overload, what
we’re usually complaining about is ambient overload. This is an altogether different beast. Am-
bient overload doesn’t involve needles in haystacks. It involves haystack-sized piles of needles.
We experience ambient overload when we’re surrounded by so much information that is of im-
mediate interest to us that we feel overwhelmed by the never ending pressure of trying to keep
up with it all.”

Relatedly, Eli Pariser coined the term “filter bubble” around 2010 to describe a situation generated
by platforms that deploy sophisticated algorithms to serve users information they are likely to care
about. These algorithms are responsive to a user’s choices and interactions with information on the
platform. The fear is that we will be increasingly isolated in bubbles that feed us only what we already
are inclined to believe.

Last month, Zeynep Tufekci published a sharp essay in MIT’s Technology Review titled, “How
social media took us from Tahrir Square to Donald Trump.” If you didn’t catch it when it came out, I
encourage you to give it a read. In it she briefly discussed the filter bubble problem and offered an ex-
cellent analysis:

“The fourth lesson has to do with the much-touted issue of filter bubbles or echo chambers—the
claim that online, we encounter only views similar to our own. This isn’t completely true.
While algorithms will often feed people some of what they already want to hear, research
shows that we probably encounter a wider variety of opinions online than we do offline, or than
we did before the advent of digital tools.

“Rather, the problem is that when we encounter opposing views in the age and context of social
media, it’s not like reading them in a newspaper while sitting alone. It’s like hearing them from
the opposing team while sitting with our fellow fans in a football stadium. Online, we’re con-
nected with our communities, and we seek approval from our like-minded peers. We bond with
our team by yelling at the fans of the other one. In sociology terms, we strengthen our feeling of
‘in-group’ belonging by increasing our distance from and tension with the ‘out-group’—us ver-
sus them. Our cognitive universe isn’t an echo chamber, but our social one is. This is why the
various projects for fact-checking claims in the news, while valuable, don’t convince people.
Belonging is stronger than facts.”

While the problems associated with information overload are worth our consideration, if we want to
critically examine the consequences of social media, particularly on our political culture, we need to
look elsewhere.

For one thing, the problem of information overload is not a distinctly digital problem, although it is
true that it has been greatly augmented by the emergence of digital technology.

Moreover, the focus on information overload also trades on an incomplete, and thus inadequate, understanding of the human person. We are not, after all, merely information-processing machines. As I’ve suggested before, affect overload is a more serious problem than information overload. We are not quite the rational actors we imagine ourselves to be. Affect, not information, is the coin of the realm in the world of social media.

This is implicit in Tufekci’s analysis of the real problem related to the encounter with opposing
views online. It is also why the preoccupation with fact-checking is itself a symptom of the problem
rather than a solution. People do not necessarily share “fake news” because they believe it. Emotion
and social gamesmanship play an important role as well.

Finally, and this is the point I set out to make, we’ve been focusing on our information inputs when
we ought to have also paid attention to the audience effect. That’s what’s different about the digital media environment. Print gave us information overload; digital media gave all of us who, in the pre-digital world, would never have had a way to reach an audience beyond our small social circle the means to do so—it gave us audience overload. The audience is always with us; it is on demand whenever we want it. And the audience can talk back to us instantaneously. We will become who we think we need to be
to get what we want from this audience.

And while it is impossible to fine-tune that audience in the same way we might work to fine-tune
our information flows, we nonetheless can customize it to a significant degree, and, more importantly,
we have some ideal image of that audience in our mind. It is important that this audience is not a defi-
nite, tangible audience before us. The indefinite shape of the audience allows us to give it shape in our
minds, which is to say that it can all the more effectively mess with us for its being, in part, an implicit
projection of our psyche.

It is this virtual audience which we desire, this audience we want to please, this audience from
whom we seek a reaction to satisfy our emotional cravings—cravings already manipulated by the struc-
ture of the platforms that connect us with our audience—it is this audience and the influence it exerts
over us that has played an important role in disordering our public discourse.

It is not just that our attention is fractured by the constant barrage of information; it is also that our
desire for attention has deformed our intellectual and emotional lives. The ever-present audience has
proven too powerful a temptation, too heavy a burden.

September 9, 2017
92. Digital Asceticism and Pascalian Angst

Writing in the Times, Kevin Roose describes how he recently arrived at a better working relation-
ship with his smartphone. It is not unlike similar narratives that you might have read at some point in
the last few years. Roose realizes that he is spending way too much time checking his smartphone, and
he recognizes that it’s taking a toll on him. A series of unambitious measures—using grayscale, in-
stalling app-blockers, etc.—can’t quite get the job done, so he turns to Catherine Price, the author of
“How to Break Up With Your Phone,” a 30-day guide to getting a better grip on your phone use.

You can read the piece to find out how things go for him. I’m just going to make two or three obser-
vations.

First, it’s always interesting to me to note the preemptive framings such pieces feel they must de-
ploy. They reveal a lot about their rhetorical context. For example, “I confess that entering phone rehab
feels clichéd, like getting really into healing crystals or Peloton.” Or, more pointedly, this: “Sadly,
there is no way to talk about the benefits of digital disconnection without sounding like a Goop sub-
scriber or a neo-Luddite. Performative wellness is obnoxious, as is reflexive technophobia.”

The implicit fear that commending tech temperance might earn one the label of neo-Luddite is espe-
cially telling. Of course, the fear itself already cedes too much ground to the Luddite bashers and to
the Borgs, who use the term as an ahistorical slur.

Preemptive framings notwithstanding, there’s still some fodder for another, socio-economic strand
of criticism: recourse to a life coach of sorts, pottery classes, an unplugged
weekend in the Catskills framed as a Waldenesque experience.

Who has the time and resources for such measures? And, of course, this is the point of the socio-e-
conomic critique of testimonials of this sort: it’s a luxury experience available to very few. Mind you,
Roose himself notes the even more luxurious variants: multi-thousand-dollar getaways at luxury detox destinations, etc.

This line of criticism is understandable. The point-scoring, out-of-hand, performative dismissals are
less so, of course. The unfortunate result of all of this is that in certain quarters it is difficult to speak
about the problems associated with digital devices without getting some serious blowback. We must
all, to some degree, necessarily speak out of our own experience, and this requires a measure of humil-
ity not only in the speaking but also in the hearing. Neither is particularly encouraged by the structures
of social media.

Secondly, I was reminded about a post I wrote back in 2011 about the possible virtues of what I then
called digital asceticism. I used Google to find the post with “digital asceticism” as my search term. I
expected to have to wade through a few pages to find my post, but I was surprised to discover that it
was, at least for me, the third result. The first result was a blog titled “Digital Ascetics,” which appears
to have been launched and abandoned in 2009 (practicing what it preached, I suppose). I was surprised by
this because I was sure that in the eight or so years since I wrote the post, especially in light of the
rolling tech backlash over the last couple of years, there would be plenty of references to digital asceti-
cism. Apparently not.

It seems like an obvious framing for the wide variety of efforts that we deploy to get at least a sem-
blance of control over our use of digital devices. It’s even suggested by some of the language we use to
talk about such measures: fasts and Sabbaths, for example. Frankly, I find that it may be a better option
than the more popular alternative: the clinical language of addiction.

Part of what the language of asceticism captures is the aspect of self-denial that is necessarily in-
volved, at least for many who would practice it. We won’t really be able to grapple with the personal
consequences of digital devices unless we recognize that doing without them now entails what is expe-
rienced as real deprivation. Indeed, the connection between the word privation and privacy is sugges-
tive on this score. Privacy in the world of digital media is privation; the measures we might take to
achieve privacy require privations we find barely tolerable. The self-denial entailed by digital asceti-
cism also takes on a slightly different hue: it is not simply a matter of denying certain experiences to
the self, it is denying the self the very experiences by which it is constituted in the present psycho-so-
cial technical milieu.

It is also useful to consider that asceticism is never properly for its own sake. It is for the sake of
some greater good than that which we deny ourselves. But this valuation is itself dependent on the psy-
cho-social milieu. What higher goods do we find plausible or compelling? The answer to this question
is already informed by the structures within which the self takes shape. Such goods are always a prod-
uct of what Charles Taylor has called a social imaginary, and we might think of media as the material
scaffolding of the social imaginary. We might expect, then, that the pursuit of alternative goods to
those sustained by the dominant media structures will always appear renegade, deviant, or, at best,
quixotic. Yet another reason why we might feel pressured to preemptively hedge our decision to pursue
them.

Lastly, I can’t quite keep from echoing Pascalian notes whenever I encounter them. “Mostly,”
Roose observed, “I became aware of how profoundly uncomfortable I am with stillness.” “It’s an un-
nerving sensation,” he added, “being alone with your thoughts in the year 2019. Catherine had warned
me that I might feel existential malaise when I wasn’t distracting myself with my phone.”

“All of man’s misfortune comes from one thing,” Pascal noted in the 17th century, “which is not
knowing how to sit quietly in a room.” “Nothing,” he wrote, “could be more wretched than to be intol-
erably depressed as soon as one is reduced to introspection with no means of diversion.” Or, more to
the point still,

“Being unable to cure death, wretchedness, and ignorance, men have decided, in order to be
happy, not to think about such things …. What people want is not the easy peaceful life that al-
lows us to think of our unhappy condition, nor the dangers of war, nor the burdens of office, but
the agitation that takes our mind off it and diverts us.”

Has there ever been a more perfect instrument of what Pascal called diversion than the smartphone?
I’m hard-pressed to think of a counterexample.

It occurs to me that these three observations are not unrelated. The difficulty in speaking earnestly
about digital asceticism involves the erosion of a social imaginary that would render the goods for the
sake of which such asceticism might be undertaken plausible or desirable. Instead we are left with the
less satisfying and less compelling language of wellness with which to speak about such things. But
this itself reminds us that whatever the dominant social imaginary may tell us about the virtues and
wonders of the preternaturally connected life, there is a world against which such visions rub up, and in
this world the recalcitrance of our bodies signals to us the inadequacy of the vision. Then, it may be
that, heeding the challenge of the body, we undertake the practices of digital asceticism and as a conse-
quence stumble upon these signs of an alternative, possibly disconcerting understanding of our situa-
tion, as Pascal suggested. And we might even find that we really are better for it and that wellness, as
we imagined it, had, finally, very little to do with it.

February 25, 2019


PART VII

Miscellaneous
93. Shared Sensibilities

Rochelle Gurstein captures in lovely prose a handful of thoughts I have myself attempted, with less eloquent results, to express. “The Perils of Progress”, a brief essay appearing in The New Republic, opens with a story about “a lecture by an exquisitely sensitive, painfully alert poet friend of ours about how we live today”, which elicits tired labels contemptuously applied. As Gurstein puts it:

These days, even a few well-considered, measured reservations about digital gadgetry appar-
ently cannot be tolerated, and our poet friend was informed by forward-looking members of the
audience that she was fearful of change, nostalgic, in short, reactionary with all its nasty politi-
cal connotations.

And this presumably from a learned and sophisticated audience.

Gurstein goes on to challenge the same NY Times editorial by Steven Pinker which drew some of
my own comments some time ago. She observes that in …

… disputes about the consequences of innovation, those on the side of progress habitually see
only gains. They have no awareness that there are also losses—equally as real as the gains
(even if the gain is as paltry as “keeping us smart”)—and that no form of bookkeeping can ever
reconcile the two.

Gurstein concludes with some poignant reflections on the materiality of the book and the difference
it makes to the experience of reading and the reader’s relationship to the author. The essay truly is
worth a few minutes of your time to read. Also, reading the few comments posted in response to
Gurstein’s essay tends to reinforce her concerns.

At one point in the essay Gurstein spoke of Pinker’s “stacking the deck against” her sensibility.
That word, sensibility, struck me. This is, I think, near to the heart of the matter. What Gurstein and others
like her attempt to defend and preserve is not merely a point of view or a particular truth. It is more
subjective than that, but is not merely preference. It is not at all like a preference, which, I suspect, is
precisely what those who do not understand it will try to label it. It is, well, a sensibility — a certain
disposition or way of being in the world. It is an openness and a sensitivity to certain kinds of experi-
ence and to certain dimensions of reality. Because of this it resists description and facile reduction to
the terms of a cost/benefit analysis. Consequently, it can be difficult to convincingly defend a sensibil-
ity to those who know nothing of it. Maybe it is best described as a “seeing the world as” or, perhaps
better still, a “feeling the world as.” A sensibility is a posture toward life, a way of inhabiting the
world.

What all of this groping for words may have at its center is the experiential quality of a sensibility,
and experience is, after all, incommunicable. Unless, that is, two people share the sensibility and then
words may even seem superfluous. In this sense, those who share a sensibility share the world. Those who lack or fail to appreciate the sensibility Gurstein articulates know only to shake their heads in condescending bemusement. What those who, like Gurstein and her poet friend, grieve the passing of a culture that nurtured their sensibility may fear is the onset of a long loneliness.

November 4, 2011
94. After Stories

In the Old Testament, or the Hebrew Bible if you prefer, there is a story about a king and his ex-
cesses and a prophet who, as we would say today, spoke truth to power. The story is found in the book
attributed to Samuel and the king was David, the most famous and revered of ancient Israel’s rulers. As
is almost to be expected of men in power, David was infected with the notion that he might, with im-
punity, take all that his eyes desired, including the wife of another man — a good and loyal man who
served David honorably. The king sleeps with Bathsheba, the man’s wife, and she becomes pregnant.
Hoping to cover up his rapaciousness, David recalls the husband, Uriah, from the battlefield and allows
him the night with his wife expecting that he will do what all soldiers home from war would do given a
night with the woman they love. Uriah would suppose the baby his, and all would be hidden from sight.
Unfortunately for David, Uriah cannot bear the unfair advantage he has been granted over his comrades
at the front and refuses to sleep with his wife. Getting Uriah drunk made little difference; he was a
rock. So David had him killed.

Again, following an all too familiar pattern, David refused to acknowledge his guilt and his power
shielded him from consequences and shame. That is until he meets with a prophet named Nathan.
Nathan claims to bring news of a great injustice that had been perpetrated in the land. He tells David of
a poor shepherd whose lone sheep was seized by a wealthy man in order to feed his guests, and this de-
spite owning a great number of his own sheep. David is outraged; he demands to know who this man is
that he may be brought to justice. Nathan, having artfully laid the trap, replies, “You are that man.”
With that simple story Nathan bypassed David’s arrogant blindness and brought him to a startled
recognition of the vileness of his actions.

I recount this well-known story because I have, in recent conversations, found myself expressing the
need to gracefully articulate the virtue and necessity of making what would be very hard and unpopular
choices for the sake of our own personal well-being and the health of our society. Much of what I write,
whether on matters relating to technology or in my occasional ramblings on other diverse topics, is
premised on the assumption that human flourishing demands the recognition and acceptance of certain
limits. I assume that the highest form of freedom is not the ability to pursue whatever whim or fancy
may strike us at any given moment, but rather the freedom to make choices which will promote our
well-being and the well-being of our communities. And such choices often involve sacrifice and the
curtailment of our own autonomy. To put this another way, happiness, that elusive state which accord-
ing to Aristotle is the highest good we all pursue, lies not at the end of a journey every turn of which we
have chosen for ourselves, but along the path marked by choices for others and in accord with a moral
order that may at times require the reordering rather than immediate satisfaction of our desires.

Put more practically, perhaps, the health of our society may now rest on our learning to live within
constraints — economic, political, natural — that we have spent the last few decades ignoring or other-
wise refusing. But no sooner do those words cross my lips or appear before my eyes as I type them,
than I realize that they are likely to be unwelcome and unappealing words. And concurrently I realize
that the language of limits may be misconstrued to mean that we must not pursue legitimate forms of
material and social progress. On this point I endorse once more the distinction made by Albert
Borgmann between troubles (read limits) that we accept in practice but oppose in principle, and those
troubles (limits) we accept both in practice and in principle because we are ultimately better for accept-
ing them. But this is all a hard sell.

On more than one occasion I have referred to an essay by Wendell Berry that appeared
in Harper’s three years ago. The essay was titled “Faustian Economics: Hell Hath No Limits.” I refer
to it often because I believe there are few writers who articulate the case for limits so well as he. Berry
succeeds because he is able not only to criticize the ideology of limitlessness and point to its often dis-
astrous consequences, but also to make a positive case for the possibilities of beauty and flourishing
that arise from a life that embraces rather than refuses certain kinds of limits. Berry frames our limits as
“inducements to formal elaboration and elegance, to fullness of relationship and meaning.” And it is
this framing that is essential to the public case for any reorientation of our thinking and living in and
with this world.

With her recent essay in The Nation, “Night Thoughts of a Baffled Humanist,” Marilynne Robinson
matches Berry’s gift for speaking hard words with a grace that allows them to be heard, even if they are
finally rejected. As I read Robinson’s words I marveled at what was unfolding line by line. Here she
was dismantling our idols and stripping our altars, speaking to us with a seriousness and gravity that is
wholly absent from our political and cultural discourse, and yet it was all done with mesmerizing art-
fulness. It was pungent medicine going down with sweet delight.

We need more writers, thinkers, and leaders in the mold of Berry and Robinson. It is a testament to
their winsomeness and wisdom that both articulated essentially conservative (although not Republican)
and religiously intoned visions which were published in decidedly left-of-center publications. It is, of
course, also a testimony to the poverty of our categories.

It occurred to me, then, that it was little wonder they were able to make their case so well since both
were novelists and one a poet as well. Little wonder because it seems to me that the case for limits is
best shown rather than told. In other words, it is best conveyed by a story rather than a lecture. Like
David, we need our prophets to weave their critique of our deeply entrenched disorders into a narrative
that would bypass our self-righteous defenses. Moreover, these narratives need also to capture, in the
manner that only a story can capture, the beauty and love that attend to lives lived by the counterintu-
itive logic of restraint, moderation, self-sacrifice, and regard for neighbor and place.

That it is the novelist and the poet who are best positioned to make such a case is also not surprising
since their work is a constant affirmation of the inexhaustible beauty that arises from the formal elabo-
ration of endless possibilities within a field of real and imposed limitations. Consider language itself as
the primordial model of a limited and bounded but inexhaustible resource. The use of language is
bounded by the grammar that allows for intelligibility, and poets have since time immemorial bound
themselves to structures that have called forth rather than foreclosed boundless creativity. Little wonder, then, that novelists and poets, daily finding and making beauty within the limits of language, are best positioned to articulate the fulfillment and joy that may arise from the refusal to prioritize personal auton-
omy and the unencumbered life. After all, just as the frictionless life is also a life without traction, the
life that refuses all burdens and attachments is, to borrow a phrase, unbearably light.

My hope is that we have not altogether lost our taste for stories and poems, that the sun has not yet
set on literary sensibility. It would be tragic if for clarity and simplicity’s sake we sought our answers
from technocrats with bullet-points and found that we could not hear or be moved to action by what
they had to say. Although, perhaps that would be for the better since the technocratic logic that refuses
complexity is more a part of the problem than of any credible solution. Worse still would be to find that
our habits of attention, as some of our more pessimistic critics have warned, had become so attenuated
that we could not follow an artful plot nor give a poem the loving, patient care that it demands before it
will yield its wisdom.

Reviewing Robert Bellah’s “Religion in Human Evolution,” sociologist David Martin summarizes
the book’s central message as follows: “‘We’ are inveterate story tellers as well as theoreticians … As
ever in Bellah, his rigorous commitment to objectivity emits a normative aura: it is not a matter of
putting stories behind us as childish but of telling the best stories to frame our collective existence.”

Indeed, and we might even put the matter more urgently. “It is difficult to get the news from
poems,” William Carlos Williams admitted in a line from “Asphodel,” “yet men die miserably every
day for lack of what is found there.”

Alasdair MacIntyre famously concluded his groundbreaking After Virtue by leaving us waiting “not for a Godot, but for another — doubtless very different — St. Benedict.” It would seem, however, that we would do better to wait for another, doubtless very different, Nathan to penetrate our blindness and awaken us to the possibilities offered by St. Benedict.

November 18, 2011


95. Suffering, Joy, and Presence

Yet, John did pen his letter. There were things the medium would not convey well, but he said all
that could be said with pen and ink. He recognized the limits of the medium and used it accordingly,
but he did not disparage the medium for its limits. Pen and ink were no less authentic, no less real, nor
were they deemed unnatural. They were simply inadequate given whatever it was that John wanted to
communicate. For that, the fullness of embodied presence was deemed necessary. It was, I think, a
practical application of a theological conviction which John had elsewhere memorably articulated.

In the first chapter of his Gospel, John wrote, “The Word became flesh and made his dwelling
among us.” It is a succinct statement of the doctrine of the incarnation, what Christians around the
world celebrate at Christmas time. The work of God required the embodiment of divine presence.
Words were not enough, and so the Word became flesh. He wept with those who mourned, he took the hand of those whom no one else would touch, he broke bread and ate with outcasts, and he suffered. All of this
required the fullness of embodied presence. John understood this, and it became a salient feature of his
theology.

For my part, these thoughts have been passing in and out of mind inchoately and inarticulately since
the Newtown shooting, and specifically as I thought about the responses to the shooting throughout our
media environment. I was troubled by the urge to post some reaction to the shooting, but, initially, I
don’t think I fully understood what troubled me. At first, it was the sense that I should say something, but I’ve come to believe that it was rather the sense that I should be the one to say something.

Thinking about it as a matter of my being the one to say something struck me as unjustifiably self-indulgent. I still believe this to be part of the larger picture, but there was more. Thinking about it as a matter of saying something pointed to the limitations of the media through which we have been accustomed to
interacting with the world. As large as images loom on digital media, the word is still prominent. For
the most part, if we are to interact with the world through digital media, we must use our words.
We know, however, that our words often fail us and prove inadequate in the face of the most pro-
found human experiences, whether tragic, ecstatic, or sublime. And yet it is in those moments, perhaps
especially in those moments, that we feel the need to exist (for lack of a better word), either to comfort
or to share or to participate. But the medium best suited for doing so is the body, and it is the body that
is, of necessity, abstracted from so much of our digital interaction with the world. With our bodies we
may communicate without speaking. It is a communication by being and perhaps also doing, rather
than by speaking.

Of course, embodied presence may seem, by comparison to its more disembodied counterparts, both
less effectual and more fraught with risk. Embodied presence enjoys none of the amplification that
technologies of communication afford. It cannot, after all, reach beyond the immediate place and time.
And it is vulnerable presence. Embodied presence involves us with others, often in unmanageable,
messy ways that are uncomfortable and awkward. But that awkwardness is also a measure of the power
latent in embodied presence.

Embodied presence also liberates us from the need to prematurely reach for rational explanation and
solutions — for an answer. If I can only speak, then the use of words will require me to search for
sense. Silence can contemplate the mysterious, the absurd, and the act of grace, but words must search
for reasons and fixes. This is, in its proper time, not an entirely futile endeavor; but its time is usually
not in the aftermath. In the aftermath of the tragic, when silence and “being with” and touch may be the
only appropriate responses, then only embodied presence will do. Its consolations are irreducible. This,
I think, is part of the meaning of the Incarnation: the embrace of the fullness of our humanity.

Words and the media that convey them, of course, have their place, and they are necessary and
sometimes good and beautiful besides. But words are often incomplete, insufficient. We cannot content
ourselves with being the “disincarnate users” of electronic media that McLuhan worried about, nor can
we allow the assumptions and priorities of disincarnate media to constrain our understanding of what it
means to be human in this world.

At the close of the second epistle that bears his name, John also wrote, “I have much to write to you,
but I do not want to use paper and ink.” But in this case, he added one further clause. “Instead,” he con-
tinued, “I hope to visit you and talk with you face to face, so that our joy may be complete.” Joy com-
pleted. Whatever it might mean for our joy to be completed, it is a function of embodied presence with
all of its attendant risks and limitations.

December 23, 2012


96. The Tourist and the Pilgrim

What does it mean to be a tourist?

I thought about this often while spending two weeks out of the country being just that, a tourist. Sit-
ting in front of one more cathedral, I think it was, having assaulted the structure with my camera and
waiting to move on to the next target, I wondered whether someone had written something like a phi-
losophy of tourism. I was certain someone must have; Alain de Botton surely (as it turns out, he had).
Every book I’ve ever imagined turned out already to have been written, often numerous times over. I
was sure this one was no exception. In any case, I didn’t plan to write that book. It was only a question
occasioned by the nagging feeling that there was something fundamentally disordered about the experi-
ence we call tourism.

We know, of course, that there are many kinds of tourists ranging from the obnoxiously oblivious
who seem to gleefully embrace the worst elements of the stereotype, to the ironically self-conscious
who take pains to avoid appearing as a tourist at all. Most of us, as tourists that is, probably fall some-
where in between. For my part, I had little interest in pretending to be other than I was. And I certainly
was not going to stop taking pictures in order to avoid appearing as a tourist.

In fact, I suffer from a severe case of camera eye. I would not claim to be an amateur photographer,
as that might imply too high a level of photographic savvy, but I do enjoy taking pictures — many,
many pictures. My wife tells me that she knows before I pull out the camera that I am about to do so
because I get a certain look on my face that says, “That’s a good shot I’ve got to have.” I have no doubt
that this is the case. I’ve frequently used the experience of walking around with a digital camera as an
illustration of the way a technology can alter our experience merely by having it in hand. I use this ex-
ample principally because I know its existential force all too well.

This photographic compulsion led me to think of being a tourist as a spectrum of activity defined by
the degree to which the eye dominates the experience. On one end, seeing is all; on the other, the experience is multi-sensory. Perhaps it is toward the visual end that most tourists naturally gravitate. You go to see sights. You
are told that you must see this, that, and the other thing. You haven’t really been to X if you haven’t
seen Y. And so on it goes. It is, more often than not, sight that first mediates our experience of any
place. Further, if what there is to see is new or strange or majestic or stunning, we will continue to
equate being there with seeing. And, wanting to render the ephemeral visual experience durable, we will seek to capture it with photographs, burdened all the while by the realization that our pictures will always disappoint.

Clearly, then, I tend toward this end of the spectrum. But even I recognize that seeing is not the only
way to experience a place, or even the best way. And so I try to listen and to smell. From time to time I
will touch a building to feel the place. And, of course, there is the tasting. The camera captures none of
this, and so there is nothing to do but to put it away and sit and observe, with all the senses, this place
and these people and the dynamic reality we call culture that emerges from their interaction. To “take it all in,” as it is sometimes put.

But even at this multi-sensory end of the spectrum, there is something that did not quite sit right
with me. I kept thinking that in the end it is all still driven by the impulse to consume, to take in and
take away. It was as those tribesmen in the old tales feared: with the camera I was hunting for the soul of the place,
somehow to disassociate it from the material space and absorb it into myself. And even when I set the
camera aside and sought to capture the full sensory experience, the impulse was still the same. How
can it be otherwise? The essence of tourism is not merely spatial; it is also temporal. A tourist is not
simply someone who goes to a different place, but someone whose experience of that place will be
temporary. The experience of tourism is always defined by the nearness of its end. And so, always conscious that I can be in this place only so much longer, I try so hard to take it in, which is to say, to consume it.

In this mode, there is little thought for what one might give to the place, or how one might spend oneself in the place or for the place, or for the people of the place. There is little thought for how the
place might transform the traveler either. The place is assimilated to the self and it becomes another ve-
hicle of self-expression and self-fulfillment.

While thinking about what a book on the philosophy of tourism might encompass and what histori-
cal antecedents it might survey, it seemed obvious that it would have to reckon with pilgrimage. Pil-
grimage had already been on my mind. In fact, pilgrimage is never far from my mind as a resonant
metaphor for the religious life. But the idea of pilgrimage was nearer than usual after having serendipi-
tously watched The Way just two days before embarking on my own less symbolically fraught journey.

The Way is a 2010 film written and directed by Emilio Estevez and starring Martin Sheen. The fa-
ther and son pair play a father and son. Early in the film, Sheen’s character travels to France to recover
the body of his son, played by Estevez. Upon arriving in France, he discovers that his son died having just begun the famed pilgrimage to Santiago de Compostela in northwestern Spain, where the
body of the Apostle James is reportedly buried. This discovery propels Sheen’s character to undertake
the same journey with the ashes of his son in tow. Along the way he is joined by an unlikely set of
three fellow pilgrims and the film tells the story of their transformation on the way to Santiago de
Compostela.

It is not a profound film, but that is the worst one can say about it, I think. It is an earnest film that
manages to blend lovely scenery and charming characters in its gesture toward the profound. Along
their premodern pilgrim way, the characters flash their postmodern sensibilities by engaging in a run-
ning debate about what constitutes authentic pilgrimage. Does making the trek on a bicycle negate the
authenticity of the journey? Does recourse to credit cards? The modern hiking gear?

As the film makes plain, many now undertake the pilgrimage with very different motives than their medieval predecessors, thus raising the question of authenticity. We can imagine that those most eager to define the authentic pilgrim experience might formulate their concern as an effort to protect the purity of the pilgrimage against the tourist ethos that animates so many who are now on the way. The zeal
for purity stems from a desire to shield the experience from the encroachment of commodification and
the dynamic of consumerism.

In the end, the film seems to suggest that whatever one’s motives, the road will have its own way.
None of the pilgrims whose paths the film follows receive what they expected or desired, but each is
transformed. We might say that it is they who have been consumed by the journey. Thinking about the
film, it occurred to me that the better, more interesting spectrum placed tourism on one end and pil-
grimage on the other.

The way of the tourist is to consume; the way of the pilgrim is to be consumed. To the tourist the
journey is a means. The pilgrim understands that it is both a means and an end in itself. The tourist and
the pilgrim experience time differently. For the former, time is the foe that gives consumption its ur-
gency. For the latter, time is a gift in which the possibility of the journey is actualized. Or better, for the
pilgrim time is already surrendered to the journey that, sooner or later, will come to its end. The tourist
bends the place to the shape of the self. The pilgrim is bent to the shape of the journey.

Finally, it seemed to me that this was all about more than the literal trips we take, for we are all, in a different sense, on the way. In our time of abandonment, home for most must now be a mythic place
touched only by hope. We are untethered, unencumbered, uprooted. Under these conditions we have
only to decide whether we are on the way as tourists or as pilgrims.

June 6, 2012

97. The Tech-Savvy Amish

It would be impossible to find human cultures that were not also tool-using cultures. For as long as
there have been human beings there has also been technology. This is undeniable – it is also a rather
banal observation too often deployed to dismiss any criticism of technology.

Since humans have always been technological creatures, cyborgs as it is fashionable to say, we shouldn’t bother with criticism — or so the implicit argument goes. Excepting the Unabomber, how-
ever, most critics of technology are not interested in abolishing “technology”; nor are they so unsophis-
ticated that they do not recognize the necessary entanglement of the human and the technological. They
are, however, searching for an elusive harmony or equilibrium between the technological and their vi-
sion of the life well lived. This vision will vary from person to person and from community to commu-
nity, and so the elusive harmony is elusive in part because there is no default configuration that is time-
lessly and universally applicable.

In earlier times when technological change proceeded at a less frenetic pace, equilibrium might have
been achieved and sustained by a particular society for the span of several generations. In such cases, it
would not have been incumbent on each individual to work out their relationship to technology for
themselves. In fact, they could hardly have conceived of the need to do so. It would have been for them as taken-for-granted a facet of life as the rising and setting of the sun. There would have been no experi-
ence of future shock.

But when the pace of technological change precludes the possibility of arriving at a settled social-technical configuration that may be passed down from one generation to the next, then technology becomes a thing to be thought of and fretted over. It presents itself as a problem to be addressed. We wonder about its consequences and we worry about its effects. And this is as one would expect. It is a
symptom of the liquidity of modern life.

Even the Amish, often and mistakenly taken to be Luddites par excellence, are not exempt from this
state of perpetual negotiation. In fact, the Amish are paradigmatically modern in that they have made
the need to think about technology a defining feature of their culture. That they do so with extreme de-
liberateness and with so strong a preference for the conservation of their way of life only superficially
distinguishes them from the rest of American society. In their consciousness of technology and its con-
sequences, the Amish have more in common with the rest of us than any of us do with members of pre-
modern society.

What does distinguish the Amish from the rest of American society is their refusal to disclaim responsibility for the deployment of technology and their willingness to pay the costs required to realize their vision of human flourishing.

The Amish live in the same world that we do and are as aware of new technologies as the rest of us.
But while technological momentum has taken root within most of American culture, rendering the notion of technological determinism plausible, the Amish have succeeded in creating philosophical momentum. That is, they have institutionalized technological criticism, which has substituted for the absence of change as a stabilizing factor. And it seems to me that this makes the Amish just about the
most tech-savvy group of people around.

August 4, 2012

98. Freedom From Authenticity

My thinking about authenticity is sporadic and owes more to serendipity than to any conscientious
scholarly endeavor. For example, most recently, from no particular quarter, the following question for-
mulated itself in my head: “What is the problem to which authenticity is the answer?” There is nothing
particularly insightful about this question, but it did get me thinking about authenticity from a different
angle. The meandering mental path that subsequently unfolded led me to identify this problem as some
sort of psychic rupture or dissonance. We don’t think of authenticity at all unless we think of it as a
problem, and it presents itself as a problem at the very time it enters our conscious awareness. It is a
problem tied to our awareness of ourselves as selves.

As William Deresiewicz put it, “the search for authenticity is futile. If you have to look for it, you’re
not going to find it.” Authenticity, like happiness and love and probably everything else that is truly
significant in life, partakes of this dynamic whereby the sought-after thing can be attained only by not
consciously seeking it. Think of it, and now it is a problem; seek it, and you will not find it; focus on it,
and it becomes elusive.

So authenticity is the sort of thing that vanishes the moment you become aware of it. It’s what you
have only when you’re not thinking of it. And what you’re not thinking of when you have it is yourself.
Authenticity is a crisis of self invoked by a hypertrophied self-awareness that makes it impossible not to think of oneself. I don’t think this is a matter of being a horribly selfish or arrogant person. No, in fact,
I think this kind of self-awareness is more often than not burdened with insecurity and fear and anxiety.
It’s a voice most people want to silence and, hence, the Sisyphean quest for authenticity.

Few people have written as incisively about the discontents of the modern self as Walker Percy.
Percy, a medical doctor turned philosophical novelist, went from being a diagnostician of physical mal-
adies to one of existential maladies. With his acute Pascalian eye, Percy made a literary career of diag-
nosing the modern self’s inability to understand itself. This was the theme of his send-up of the self-help genre, Lost in the Cosmos: The Last Self-Help Book.

Percy chose the following passage from Nietzsche as an epigraph for Lost in the Cosmos:

“We are unknown, we knowers, to ourselves … Of necessity we remain strangers to ourselves, we understand ourselves not, in our selves we are bound to be mistaken, for each of us holds good to all eternity the motto, ‘Each is the farthest away from himself’—as far as ourselves are concerned we are not knowers.”

A little further on, in his inventory of possible “selfs” (or should that be “selves”?), Percy offered this
description of the lost self:

“With the passing of the cosmological myths and the fading of Christianity as a guarantor of the
identity of the self, the self becomes dislocated, … is both cut loose and imprisoned by its own
freedom, yet imprisoned by a curious and paradoxical bondage like a Chinese handcuff, so that
the very attempts to free itself, e.g., by ever more refined techniques for the pursuit of happi-
ness, only tighten the bondage and distance the self ever farther from the very world it wishes to
inhabit as its homeland …. Every advance in an objective understanding of the Cosmos and in
its technological control further distances the self from the Cosmos precisely in the degree of
the advance—so that in the end the self becomes a space-bound ghost which roams the very
Cosmos it understands perfectly.”

Percy was writing in 1983. Centuries earlier, St. Augustine wrote, “I have been made a question to
myself.” The problem of authenticity is much older than we sometimes realize. Perhaps we might say
that it is a perpetually possible problem that is more or less actualized given certain historical or psy-
chological conditions. There may be a few reasons, then, why this perpetually possible problem has
been actualized more frequently in recent history.

Modernity is one long identity crisis. In traditional societies, identity was given. It was grounded in
the relative solidity of pre-modern life. Individuals inhabited an identity that was given by time, place,
the structures and institutions of daily life. In modernity, all that is solid melts … and choice is the sol-
vent. A multiplicity of choices arises where once there were few or none – choices regarding vocation, home, spouse, religion, and more. Consumer society is simply the apotheosis of a very long trajectory, and Luther was her prophet.

Crisis of identity used to be the province of exiles and their children. Modernity generalizes the con-
dition of exile. Where there is choice there is freedom. There is also uncertainty, anxiety, regret, and
self-consciousness. Choice foregrounds the choosing self. Choices, because they could have been oth-
erwise, become signals to be read. They disclose and they reveal. When this dynamic is embraced, hap-
pily or despondently, identity becomes performance and a performed identity – relative to an inhabited,
given identity – feels inauthentic. Performance is knowing, cool, detached, ironic. Performance feels
inauthentic because it is rehearsed action.

Media is also implicated in this story. Until recently the power to produce media – whether textual,
photographic, or audio-visual – has been in the hands of a relative few. Under these circumstances indi-
viduals were the medium and culture was the message. When the power to produce media is democratized, the relationship is reversed. Culture becomes the medium and the self is the message.

In the early twentieth century, Walter Benjamin wrote, “Any person today can lay claim to being
filmed.” Any person today can lay claim to being a filmmaker. In the same essay, “The Work of Art in
the Age of Mechanical Reproduction,” Benjamin added, “The distinction between author and public is
about to lose its axiomatic character … At any moment the reader is ready to become a writer.” Digital
media has democratized the production of media even further. We are not only actors, but also direc-
tors of our own lives as we perform them for an audience, imagined or real. Authenticity, if it is taken
to mean either non-performative action or immediate action that is not self-reflexive, is no longer an
option. In the world created by the expansion of choices, we have no choice in the matter.

The poet W. H. Auden, although not addressing the question of authenticity directly, arrived at a
corollary conclusion:

“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it
desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues
which were once nursed unconsciously by the forces of nature must now be recovered and fos-
tered by a deliberate effort of the will and the intelligence. In the future, societies will not grow
of themselves. They will be either made consciously or decay.”

We might likewise say that the given self for whom authenticity was not a problem cannot now be
brought back. Whatever virtues we attributed to it, if they can be had still, must be had by “a deliberate
effort of the will and the intelligence.”

In his 2005 Kenyon College Commencement address, David Foster Wallace offers a few helpful
considerations toward this end. First, there’s this: “Here is just one example of the total wrongness of
something I tend to be automatically sure of: everything in my own immediate experience supports my
deep belief that I am the absolute centre of the universe; the realest, most vivid and important person in
existence. We rarely think about this sort of natural, basic self-centeredness because it’s so socially re-
pulsive.”

This is what he calls our default setting. Our default setting is to think about the world as if we were
its center, to process every situation through the grid of our own experience, to assume “that my imme-
diate needs and feelings are what should determine the world’s priorities.” This is our default setting in
part because from the perspective of our own experience, the only perspective to which we have imme-
diate access, we are literally the center of the universe.

Wallace also issued this warning: “Worship power, you will end up feeling weak and afraid, and you
will need ever more power over others to numb you to your own fear. Worship your intellect, being
seen as smart, you will end up feeling stupid, a fraud, always on the verge of being found out.”

So then, worship authenticity and …

But, Wallace also tells us, it doesn’t have to be this way. The point of a liberal arts education — this
is a commencement address after all — is to teach us how to exercise choice over what we think and
what we pay attention to. And Wallace urges us to pay attention to something other than the mono-
logue inside our head. Getting out of our own heads, what Wallace called our “skull-sized kingdoms”
— this is the only solution to the problem of authenticity.

Serendipitously, around the same time I read Wallace’s speech, I stumbled on a video about glass-
blowing in which a glass-blower is talking about his craft when he says this: “When you’re blowing
glass, there really isn’t time to have your mind elsewhere – you have to be 100% engaged.” This suggests that certain kinds of practices can so focus our attention on themselves that we stop, for a time, paying attention to ourselves.

Now, I know, we can’t all run off and take up glass-blowing. That would be silly and potentially
dangerous. The point is that this practice (and many others) has the magical side effect of taking a person out of their own head by acutely focusing their attention. The leap I want to make now is to suggest
that this skill is transferable. Learn the mental discipline of focusing your attention in one particular
context and you will be better able to deploy it in other circumstances.

It’s like the ascetic practice of fasting. The point is not that food is bad or that denying yourself food
is somehow itself virtuous or meritorious. It’s about training the will and learning how to temper desire
so as to direct and deploy it toward more noble ends. You train your will with food so that you can ex-
ercise it meaningfully in other, more serious contexts.

In any case, Wallace is right. It’s hard work not yielding to our default self-centeredness. “The re-
ally important kind of freedom,” Wallace explained, “involves attention and awareness and discipline,
and being able truly to care about other people and to sacrifice for them over and over in myriad petty,
unsexy ways every day.” That is a crucial, challenging truth.

Freedom is not about being able to do whatever we want, when we want. It has nothing to do with
listening to our heart or following our dreams or whatever else we put on greeting cards and bumper
stickers. Real freedom comes from learning to get out of our “skull-sized kingdoms” long enough to
pay attention to the human being next to us so that we might treat them with decency and kindness and re-
spect. Then perhaps we’ll have our authenticity, but we’ll have it because we’ve stopped caring about
it.

October 10, 2012


99. What Do We Want, Really?

I was in Amish country last week. Several times a day I heard the clip-clop of horse hooves and the
whirring of buggy wheels coming down the street and then receding into the distance – a rather sooth-
ing Doppler effect. While there, I was reminded of an anecdote about the Amish relayed by a reader in
the comments to a recent post:

I once heard David Kline tell of Protestant tourists sight-seeing in an Amish area. An Amish-
man is brought on the bus and asked how Amish differ from other Christians. First, he ex-
plained similarities: all had DNA, wear clothes (even if in different styles), and like to eat good
food.

Then the Amishman asked: “How many of you have a TV?”
Most, if not all, the passengers raised their hands.
“How many of you believe your children would be better off without TV?”
Most, if not all, the passengers raised their hands.
“How many of you, knowing this, will get rid of your TV when you go home?”
No hands were raised.
“That’s the difference between the Amish and others,” the man concluded.

I like the Amish. As I’ve said before, the Amish are remarkably tech-savvy. They understand that
technologies have consequences, and they are determined to think very hard about how different tech-
nologies will affect the life of their communities. Moreover, they are committed to sacrificing the bene-
fits a new technology might bring if they deem the costs too great to bear. This takes courage and re-
solve. We may not agree with all of the choices made by Amish communities, but it seems to me that
we must admire both their resolution to think about what they are doing and their willingness to make
the sacrifices necessary to live according to their principles.

The Amish are a kind of sign to us, especially as we come upon the start of a new year and consider,
again, how we might better live our lives. Let me clarify what I mean by calling the Amish a sign. It is
not that their distinctive way of life points the way to the precise path we must all follow. Rather, it is
that they remind us of the costs we must be prepared to incur and the resoluteness we must be prepared
to demonstrate if we are to live a principled life.

It is perhaps a symptom of our disorder that we seem to believe that all can be made well merely by
our making a few better choices along the way. Rarely do we imagine that what might be involved in
the realization of our ideals is something more radical and more costly. It is easier for us to pretend that
all that is necessary are a few simple tweaks and minor adjustments to how we already conduct our
lives, nothing that will make us too uncomfortable. If and when it becomes impossible to sustain that
fiction, we take comfort in fatalism: nothing can ever change, really, and so it is not worth trying to
change anything at all.

What is often the case, however, is that we have not been honest with ourselves about what it is that
we truly value. Perhaps an example will help. My wife and I frequently discuss what, for lack of a bet-
ter way of putting it, I’ll call the ethics of eating. I will not claim to have thought very deeply, yet,
about all of the related issues, but I can say that we care about what has been involved in getting food
to our table. We care about the labor involved, the treatment of animals, and the use of natural re-
sources. We care, as well, about the quality of the food and about the cultural practices of cooking and
eating. I realize, of course, that it is rather fashionable to care about such things, and I can only hope
that our caring is not merely a matter of fashion. I do not think it is.

But it is another thing altogether for us to consider how much we really care about these things. Act-
ing on principle in this arena is not without its costs. Do we care enough to bear those costs? Do we
care enough to invest the time necessary to understand all the relevant complex considerations? Are we
prepared to spend more money? Are we willing to sacrifice convenience? And then it hits me that what
we are talking about is not simply making a different consumer choice here and there. If we really care
about the things we say we care about, then we are talking about changing the way we live our lives.

In cases like this, and they are many, I’m reminded of a paragraph in sociologist James Hunter’s
book about varying approaches to moral education in American schools. “We say we want the renewal
of character in our day,” Hunter writes,
“but we do not really know what to ask for. To have a renewal of character is to have a renewal
of a creedal order that constrains, limits, binds, obligates, and compels. This price is too high
for us to pay. We want character without conviction; we want strong morality but without the
emotional burden of guilt or shame; we want virtue but without particular moral justifications
that invariably offend; we want good without having to name evil; we want decency without the
authority to insist upon it; we want moral community without any limitations to personal free-
dom. In short, we want what we cannot possibly have on the terms that we want it.”

You may not agree with Hunter about the matter of moral education, but it is his conclusion that I
want you to note: we want what we cannot possibly have on the terms that we want it.

This strikes me as being a widely applicable diagnosis of our situation. Across so many different do-
mains of our lives, private and public, this dynamic seems to hold. We say we want something, often
something very noble and admirable, but in reality we are not prepared to pay the costs required to ob-
tain the thing we say we want. We are not prepared to be inconvenienced. We are not prepared to re-
order our lives. We may genuinely desire that noble, admirable thing, whatever it may be; but we want
some other, less noble thing more.

At this point, I should probably acknowledge that many of the problems we face as individuals and
as a society are not the sort that would be solved by our own individual thoughtfulness and resolve, no
matter how heroic. But very few problems, private or public, will be solved without an honest reckon-
ing of the price to be paid and the work to be done.

So what then? I’m presently resisting the temptation to now turn this short post toward some happy
resolution, or at least toward some more positive considerations. Doing so would be disingenuous.
Mostly, I simply wanted to draw our attention, mine no less than yours, toward the possibly unpleasant
work of counting the costs. As we thought about the new year looming before us and contemplated
how we might live it better than the last, I wanted us to entertain the possibility that what will be re-
quired of us to do so might be nothing less than a fundamental reordering of our lives. At the very least,
I wanted to impress upon myself the importance of finding the space to think at length and the courage
to act.

December 31, 2014


100. The Wonder of What We Are

I recently caught a link to a brief video showing a robotic hand manipulating a cube. Here is a
longer video from which the short clip was taken, and here is the article describing the technology that
imbued this robotic hand with its “remarkable new dexterity.” MIT’s Technology Review tweeted a link
with this short comment: “This robot spent the equivalent of a hundred years learning how to manipu-
late a cube in its hand.”

Watching the robotic hand turn the cube this way and that, I was reminded of those first few months
of a child’s life when they, too, learn how to use their hands. I remembered how absurdly proud I felt
as a new father watching my baby achieve her fine motor skill milestones. I’m not sure who was more
delighted when, after several failed attempts, she finally picked up her first puff and successfully
brought it to her mouth.

This, in turn, elicited a string of loosely related reflections.

I imagined the unlikely possibility that one unintended consequence of these emerging technologies
might be renewed wonder at the marvel that is the human being.

After all, the most sophisticated tools we are currently capable of fashioning are only haltingly de-
veloping the basic motor skills that come naturally to a six-month-old child. And, of course, we have
not even touched on the acquisition of language, the capacity for abstract thought, the mystery of con-
sciousness, etc. We’re just talking about turning a small cube about.

It seemed, then, that somewhere along the way our wonder at what we can make had displaced our wonder at what we are.

Ultimately, I don’t think I want to oppose these two realities. Part of the wonder of what we are is,
indeed, that we are the sort of creatures who create technological marvels.

Perhaps there’s some sort of Aristotelian mean at which we ought to aim. It seems, at least, that if
we marvel only at what we can make and not also at what we are, we set off on a path that leads ulti-
mately toward misanthropic post-humanist fantasies.

Or, as Arendt warned, we would become “the helpless slaves, not so much of our machines as of our
know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter
how murderous it is.”

It is odd that there is an impulse of sorts to create some of these marvels in our own image, as it were, or that we seek to replicate not only our own capacities but even our physiology.

Yet, it is precisely this that also makes us anxious, fearful that we will be displaced or uncertain
about our status in the great chain of being, to borrow an old formulation.

But our anxieties tend to be misplaced. More often than not, the real danger is not that our machines
will eclipse us but that we will conform ourselves to the pattern of our machines.

In this way we are entranced by the work of our hands. It is an odd spin on the myth of Narcissus.
We are captivated not by our physical appearance but by our ingenuity, by how we are reflected in our
tools.

But this reflection is unfaithful, or, better, it is incomplete. It veils the fullness of the human person.
It reduces our complexity. And perhaps in this way it reinforces the tendency to marvel only at what we
can make by obscuring the full reality of what we are.

This full reality ultimately escapes our own (self-)understanding, which may explain why it is so
tempting to traffic in truncated visions of the self. This creative self that has come to know so much of
the world, principally through the tools it has fashioned, remains a mystery to itself.

We could do worse, then, than to wonder again at what we are: the strangest phenomenon in the
cosmos, as Walker Percy was fond of saying.

June 6, 2019

Appendix: Writing Elsewhere

Over the last ten years, I’ve also written occasionally for other publications. Here are a few selec-
tions.

Essays

“The Easy Way Out” – On the myth of convenience and modern technology, for Real Life.

“The Inescapable Town Square” – Part of The New Atlantis’s symposium on “the crisis of digital
discourse.”

“Always On” – Technologies of the self and heightened self-consciousness at Real Life Magazine.

“Personal Panopticons” – On the subjective experience of pervasive surveillance at Real Life Maga-
zine.

“The Tech Backlash We Really Need” – Analysis of the so-called tech backlash. In the summer
2018 issue of The New Atlantis.

“Technology in America” – A Tocquevillian analysis of technology in America written for The American. This piece was featured on Arts & Letters Daily and The Browser’s Best of the Web.

“Dead and Going to Die” – Thinking about consciousness and identity in an age of pervasive documentation with the help of a Civil War-era photograph. At The New Inquiry.

“Circle of Presence” – At The New Inquiry, this essay examines the role our devices play in our in-
teractions with others in light of Merleau-Ponty’s philosophy of embodiment.

“What Do I ‘Like’ When I Like On Facebook” – An Augustinian reflection on social media and identity at Cyborgology.

Reviews

“The Power of Silence by Robert Cardinal Sarah” – Review at Mere Orthodoxy.

“How Facebook Deforms Us” – Review of Siva Vaidhyanathan’s Antisocial Media for The New Atlantis.
