
Salve Regina University

The Digital Incunabula:


The Future of Storytelling in the Digital Age

A dissertation submitted to
the faculty of the Humanities program
in candidacy for the degree of
Doctor of Philosophy

By
Michael Scully

Newport, Rhode Island


March 2, 2018
For Alison

Michael Scully © 2018

Salve Regina University

A dissertation submitted in partial fulfillment of


the requirements for the degree of
Doctor of Philosophy

The Digital Incunabula: The Future of Storytelling in the Digital Age

Michael Scully

Approved:

________________________
Daniel Cowdin, Ph.D.

________________________
Matthew Ramsey, Ph.D.

________________________
Trischa Goodnow, Ph.D.

________________________
Michael Budd, Ph.D.

TABLE OF CONTENTS

INTRODUCTION

Part I: Literal Landscape

Chapter 1: Publishing Revolution


Moveable Type
The 42-line Bible
Incunabula
Mechanical Reproduction
Burned in Anger
The Commodity of Thought
The English Author
The Pamphlet
‘Hack’ Writers of Grub Street
Newspapers
The Birth of Narrative
Nonfiction Narrative Structure
Literary Journalism
Literacy
The Sacred Space
Writing and Daydreaming
Five Centuries of Silence
Summary of Literary Media

Part II: Return to Orality

Chapter 2: The Return of ‘Hot’ Media


Photochemical
The Moving Image
Capturing Sound
Radio Fills the Air
Pocket Radio
Television Takes Over
Farnsworth’s Gift
Narrative in Oral Media
Summary of Oral Media

Chapter 3: The Digital Revolution


The Computer + the Internet
Telephony
The Telephone

The Internet
Web Browsers and Search Engines
Social media
The Rise of Amateur Video
Digital Content
Are We Ready?
The Digital Orality
Narrative in the Digital Age
Summary of the Digital Revolution

Chapter 4: The Perfect Thing


Sony TR-63
Birth of Apple
Beyond the Paradigm
iPod Culture
Future of Digital Commerce
The iPhone Revolution
The iPad Arrives
The Third Screen
Multimedia Experiments for the Third Screen
iTunes App Store
iBook Author
Augmented Reality/Virtual Reality
Summary of Multimedia

Part III: Multimedia

Chapter 5: ‘Total Work of Art’


Gesamtkunstwerk
Experiments with Content
‘The Crossing’
‘Snow Fall’
‘The Jockey’
‘Firestorm’
Review of Experimental Multimedia
The Internet is ‘Disappearing’
The List

Chapter 6: Media Observations


Digital Economies
Public Media
Amateurism
The Digital Disruption
Summary of Digital Economies
The Digital Age

Chapter 7: Dystopia
Digital News Anchors
The Treacherous Turn
Crumbling Capitalism

Part IV: Conclusions

Chapter 8: The Digital Incunabula


Content Creation
Packaging: the Next ‘Black Box’
The Future of Storytelling
The Digital ‘Master Printer’
The Recipe for Digital Storytelling

Bibliography

Abstract

The term “incunabula” refers to the 50-year transition period that followed Johannes Gutenberg’s introduction of the printing press to the publishing world (1455 to 1505). In this thesis, I compare that first incunabula to the current “Digital Incunabula,” which I believe is the 50-year transition now underway as we adapt traditional storytelling practices to digital platforms (1996 to 2046). To illustrate this, I review the histories of the literal age, the secondary orality and the digital orality before making some observations about our transition into multimedia storytelling. The paper reviews several key experiments in multimedia storytelling produced by The New York Times, the Rocky Mountain News and The Guardian, among others. It also reviews the influence of consumer electronic devices, including tablet computers, augmented reality, virtual reality and artificial intelligence, and it reflects upon nonfiction narrative forms for written and oral communication media. It considers aspects of copyright policy, public media policy and the influences of the “digital disruption,” and it concludes with some observations about the future of long-form, nonfiction storytelling as we move through the 21st century.

Introduction

The Great Hall

In Washington, D.C., the Library of Congress remains a popular spot for

tourists. The library is actually a series of three buildings linked by underground pathways, but the most popular of these—the Thomas Jefferson Building—is the museum and the destination most vacationers frequent. Like many of the other

buildings on Capitol Hill, this building has a stone façade and the Beaux-Arts style

structure itself covers an entire city block; atop the building, there is a copper cupola

dome, which is the signature flourish defining the exterior of this structure. Inside, the

Jefferson Building has to be one of the most ornate buildings in America: the hallways

and gathering spaces are tiled and the walls and ceilings host murals, mosaics and

paintings; statues abound.1 After all, this is the world’s largest library.

There are many entrances into the library complex, but the main one faces west

towards the Capitol Building; visitors must walk up a sweeping series of stone

staircases to approach the main doors. Once through security, tourists enter The Great

Hall, which is a gorgeous two-story expanse and the showplace of the building. The

entrance to the Main Reading Room is located at the back of the space and to get

there, visitors will often walk between two seemingly identical and innocuous wooden

showcases. The cases are each roughly two meters long, a meter deep and a half-meter tall at their highest point, and their interiors are meticulously climate controlled. The Library of Congress hosts an impressive collection: there are nearly 32 million books in the LOC system, but the books housed inside each of these showcases must be two of the most valuable.2

1. Cole, John Young, Henry Hope Reed, and Herbert Small. The Library of Congress: The Art and Architecture of the Thomas Jefferson Building. New York: Norton, 1997.

On the south side of the hall is the Giant Bible of Mainz; opposite, and just 20

steps away, is the Gutenberg Bible. To the layman, these books appear similar; in fact,

they look nearly identical. They are made of roughly the same materials, the texts are

similar, the page designs mirror each other and both are “Illuminated,” or ornamented

with gold leaf lettering and painted illustrations. Also, as it happens, both Bibles were

produced in the same region of Germany, possibly in the same city, at roughly the

same time: each dates to about 1455, give or take 18 months.3

Here is the remarkable difference: The Giant Bible of Mainz is a manuscript—or a book written by hand—and it took one scribe 18 months to produce, while the Gutenberg Bible was manufactured on a printing press and is one of the original 180 copies Johannes Gutenberg ran off during his year-and-a-half-long fledgling experiment with his new invention.4

So, on the south side of the hall is the man-made instrument; on the north side, the machine-made. This transition marks the wholesale introduction of industrialization, setting in motion a 600-year struggle between man and machine; the struggle continues, and the machine is winning. Of late, there is a new paradigm shift underway in the world of publishing: with the commercialization of the Internet, printed materials have been moving (slowly) from the industrialized printing-press system into the realm of the electronic, the digital.

2. Cole, John Young, Henry Hope Reed, and Herbert Small. The Library of Congress: The Art and Architecture of the Thomas Jefferson Building. New York: Norton, 1997.
3. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983.
4. Ibid.

Today, we are living at the dawn of the age of digital storytelling. Books and

associated media are now being digitized, published online and delivered to our

computers, cellphones and tablets via the Internet.

Since the commercialization of the Internet in 1996, we have seen major changes in the way we do things. The Internet has transformed the way we communicate, bank, shop, travel and consume media. It is this last relationship that interests me most. We are certainly consuming media much differently, but I suspect there are many more changes ahead, and I’d like to explore what may lie ahead of us.

In this treatise, my purpose is to write about the two major transitions in the history of publishing: the first began in the middle of the 15th century—from manuscript to print—and the second—the movement from print to digital publishing—is underway now. In an effort to better understand what lies ahead for storytelling in the digital age, I would like to look at the first period of transition—known as the “Incunabula”—from handwritten works to the printed word, and then attempt to project the future of storytelling as we move forward, exploring its application in a digital world.

About the Author

As you read through this, please understand that I am a seasoned associate professor of communication teaching digital journalism at a small, liberal arts college in New England. I am also a graduate student working on a PhD in Humanities at Salve Regina University. My path to this moment in time has been long and winding. I started out on an obituary desk in upstate New York, moved to Washington, DC, to write about technology issues on Capitol Hill and earned a master’s degree at the Columbia University Graduate School of Journalism. In 2004, after two decades practicing journalism, an “informational interview” earned me a teaching position at a state school in New York and my life changed substantially. The craft of journalism is an internal (or literal) one; it’s an exercise in information gathering and writing that is done intuitively. The art of teaching journalism is the opposite: it is an external (or oral) exercise. During my opening days in the classroom, I realized that the things I took for granted as a journalist—how to speak to people, how to shape a story, when to start writing—suddenly had to be explained to young aspiring journalists, and that set me down a path of reflection. As this was going on, elements of the Digital Age began making themselves apparent: it was my students who introduced me to Facebook and later YouTube and Wikipedia. When I was at Columbia University in the late 1990s, we were aware of the Internet, but the latest innovation at that time was email. So much more was yet to come.

In 2007, after three years in the classroom, I accepted a new position teaching “New Media” at a small, private university in Rhode Island. When I interviewed for the job, I told the committee that if anyone claimed to understand the future of journalism in the pending Digital Age, chances were they were lying. And then I told them that, while I didn’t have any firm solutions, I did have some advantages.

In 1993, I became a technology reporter and was assigned to Capitol Hill just as the Congress was taking up the idea of privatizing the Internet. During the next two years, I met all the legislative players and lobbyists and struggled, as everyone did, to grasp the uncertain future of the emerging Digital Age. At the time, the chatter was about “convergence,” one of the first of many modern terms for multimedia, but it was clear, after months of hearing people use the word, that no one actually knew what “convergence” meant. Finally, during a conversation with a lobbyist, I asked the following desperate question: “What does it all mean?” And, after a pause, she looked at me and said something very lucid: “Right now, everyone is fighting like mad to build the plumbing but no one seems to be concerned about what will actually be in the system.” For the next 20 years, no one knew, and during that time, I explored the various news enterprises. I wrote for newspapers and trade journals; I spent four years writing for cable news; and I spent three more years working for a division of Fortune magazine. On paper, my resume was an agglomeration of media enterprises but, in actual fact, I had a sampling of most of the media: newspapers, television, magazines… and for a time, I even worked at a digital startup called New Century Networks. These were my advantages.

By the time I’d reached the classroom, I was ready to reflect and contemplate. I realized the following things: First, if journalism was going to survive, it needed to respect the traditions that had come before; next, there needed to be an appreciation for all the individual forms of media, because each has strengths and weaknesses; and finally, there needed to be a new way to share information in the Digital Age. This became the foundation for the academic program I was building, and I began teaching with these ideas in mind. As students moved through my classes, I taught them how to report, write and edit; I also taught them how to shoot and edit video; further, I explained that there are rhythms and patterns to storytelling and that feature stories offer the best avenues for the aesthetic—or artfulness—of the craft. Armed with these ideas, my students and I began experimenting with digital tools and story models: we launched blogs, we created Twitter accounts, we “friended” each other on Facebook and other social media sites and we produced video. After 14 years of this, it was clear to me that all the pieces were there, but a cohesive theory unifying these various media forms was lacking, and I began searching for production models. After years of perfecting the digital tools, I retreated into the theory, and that path led me to graduate school.

When I arrived at Salve Regina University in 2014, I thought I knew

journalism and communication theory. I was absolutely wrong to think that. During

my first semester, I took a research theory class that included reading an extensive

amount of Martin Heidegger’s philosophy. At first I thought I’d signed up for a

foreign language course as I struggled to understand some of the circuitous reasoning I

found in the translations of Heidegger’s work. But I soldiered on and, when the class

moved into some of the more modern philosophy, my understanding grew. Then the

program began introducing us to communication theorists including Walter Benjamin,

Terry Eagleton, Marshall McLuhan, and Walter Ong. Suddenly, I realized that I was

not alone in my confusion over media theory; in fact, I realized that I was catching up

with decades of fertile research and theory.

I also identified the building blocks of modern communication theory, which are

these: oral/aural communication is natural; literal communication is learned. After the

invention of printing and the global trend towards literacy, our approach to oral/aural

communication changed. Just as we were beginning to understand this new age, this

“secondary orality,” a new digital form of communication was introduced, called the “Digital Orality.” It didn’t take long for the “Digital Orality” to become a point of interest for this dissertation, but then I realized that it was just the first step in a development towards something more. That is what this book is about.

Writing the Dissertation

To produce this work, I came at it from three directions: I applied my knowledge as a practicing journalist, I considered the technologies I learned as a professor, and I shaped my understanding with the theory I learned as a graduate student. My purpose here began as an exploration of journalism in the Digital Age, but after much research I realized that all forms of story are being affected by digital innovations.

I also realized that this isn’t the first time humankind has shifted the way it communicates: in the 15th century, we moved from the oral to the literal during a 50-year period stretching from 1455 to 1505, in an age we now call the “Incunabula”; today, we’re moving from the literal to the digital in a period I expect will stretch from 1996 to 2046, and I’d like to call it the “Digital Incunabula.” Our journey has been (and will be): Oral → Literal → Digital. Given that the first Incunabula took five decades, I expect the current Digital Incunabula will likely take as long, and I’ve decided to reflect upon the first transition (Oral to Literal), searching for clues hinting at what’s to come during the current change (Literal to Digital). That idea sent me rummaging through old archives to look at rare book collections and to consider the changes that transpired after the invention of the printing press in 1455. I was surprised to find how sweeping those changes were and, oddly, how similar the former and current transitions have become.

What you’re going to find as you move through this treatise is a sense of that exploration. As you can see from the beginning of this introduction, I begin with Johannes Gutenberg and move forward through the Protestant Reformation and the Age of Literacy. From there, I review the return of oral communication and look at innovations in photography, film, sound and so forth. Then I look at the influences of the Digital Age.

Because it was impressed upon me during my thesis proposal to consider augmented reality and virtual reality, I looked at these technologies and then considered how these devices may be influencing storytelling. Along the way, I looked at the economic and social factors preying upon the development of digital storytelling, and I discussed the history of multimedia storytelling before arriving at a final resting place.

I found the journey to be fascinating and I hope that the writing in this thesis

reflects this understanding. Finally, please understand that this work is a culmination

of three long journeys, which included 20 years of practice in the newsroom, 14 years

teaching in the classroom, and three years of theoretical study in graduate school. My

hope is to find a middle ground, a resting place, which summarizes a final core theory.

As you move through the essay, you will notice that my writing style dwells heavily in

a magazine form. I’ve written academic papers, and I’ve found some success writing

in that medium, but I find academic writing to be cumbersome and dull and, frankly,

I didn’t want me or anyone else to have to suffer through 400 pages of academic

drudgery. Instead, I applied my knowledge as a magazine writer, believing that the

narrative would be clearer, fresher and more engaging. Again, I hope that is the case.

The Form of the Dissertation

The thesis is broken up into four sections, which move from “Literacy” to “Oral” and into “Multimedia” before arriving at the “Conclusion.” The method here was to start by defining the age of literacy, moving from the 15th century to the present; after that, I reviewed the age of (secondary and tertiary) orality; from there, I moved into multimedia to review its history and theory before making some conclusions about the future. Along the way, I wrote about some important moments in the development of story narrative, technologies that advanced the communication media, and techniques that have influenced the direction of future narrative.

In the first section, entitled “Literal Landscape,” I open with Johannes

Gutenberg and the printing press, which will move us forward into the reign of King

Henry VIII and the Protestant Reformation. Armed with the history, I move into the

theory to explain the birth of English literature and the influences literacy had over the

future development of mankind. I review the birth of journalism and move through its

evolution to the modern practice of “literary journalism,” and so forth. I also address

the issues of narrative form and the influence “emplotment” has over story design and

the aesthetic of nonfiction literary storytelling.

The second section, entitled “Return to Orality,” follows the historical and

technological developments of photography, motion picture, audio recordings and so

forth. Walter Benjamin’s ideas about “mechanical reproduction” become instrumental

in my argument as I move through the theory. I look at Walter Ong’s ideas about

“secondary orality,” which are definitive in our relationship with oral-aural media. I

also address the issues of narrative form with regard to oral media including film,

music, photography and sound and how oral storytelling and narrative form shape the

foundation for the aesthetic of these media forms.

In the third section entitled “Multimedia,” I return to the 19th century to

investigate the theories of Richard Wagner and others, who helped build the

foundation for multimedia storytelling in the 20th century. There have been many

experiments designed to commingle literal media and oral media and we’ll review a

few of them before we move into digital development. The age of “digital orality”

appears to be in two phases: one that looks at forms of broadcast literal materials;

and a second that looks to advance the fusion of the storyteller with the audience. I

look at augmented reality and virtual reality before moving into more complex ideas

about digital storytelling.

In the “Conclusion,” I attempt to pull it all together and explain what’s going on now and what could happen in the near and distant future. That investigation had me asking questions about human cognitive abilities and the potential for organic and computer data integration. Is it possible for the human mind to connect to artificial intelligence? Many theorists think so, and that innovation will certainly transform the method for storytelling.

Methodology

Which leads me to a few paragraphs about methodology. Most of the advice

about writing a dissertation included the idea that one must write as they are reading,

which I did aggressively. I found that in the development of this thesis, I started with a

sweeping 180-page sketch of ideas and found myself backtracking and sweetening

theories as I found more definitive research. The core of my thinking was this: Walter

Ong defined oral communication; Terry Eagleton explained literal theory; and

Marshall McLuhan wrote in a flowing free form about the intersections of both the

oral and the literal. A decade ago, researchers began writing about “Digital Orality,” or the influences of digital media over the human condition, but the theory here doesn’t appear fully developed; nevertheless, I read through it, distilled the important points and added them to the thesis.

As I moved through this research, I found my own ideas shifting, and the title and themes of my thesis began shifting too. I wanted to focus on journalism, but I discovered that fiction and non-fiction story forms have been influencing each other equally, and I realized that the aesthetic of storytelling was more apropos. From there, I asked: if print is the medium for text, and light and sound are the media for film and music, what is the defining medium of the Digital Age? I concluded that binary code is the alphabet for all the current media forms—text, photo, video and sound can all be reduced to a digital format called binary code—and so we can finally find ways to fully deliver integrated multimodal stories. This reductive reasoning had me arrive at the following research question: What is the future of storytelling in the Digital Age?

Initially, I believed that repackaged multimedia stories presented on an

accommodating piece of consumer electronics would be the solution, but then I

discovered there was something more going on.

During the first Incunabula (1455 to 1505), the idea of story altered

significantly. In fact, the definitions of story and the purpose of storytelling shifted in

new and exciting ways. During the “Secondary Orality” (1820 to 1990), the same was

true. And it’s clear to me, during this pending Digital Age (1990 to the present), the

way we tell stories is shifting again and the dynamics of that change haven’t been fully

realized. One theorist suggested that the way humans and computers communicate

will change substantially in 2045 and this change will alter the entire structure of

human development. Armed with that idea, I began realizing that the initial premise of

storytelling was escapism, and that fiction forms were created to transport the

audience to imaginary realms developed by authors/producers. (For nonfiction stories,

the purpose of the producer is to act as the eyewitness for the audience, and this work

has the same ability to transport readers/viewers through both time and space.) For the

next 500 years, this relationship of the storyteller and the audience remained fairly

defined: the producer creates the story, which is preserved in time until the audience

discovers it. This is true in poetry, novels, audio recordings, film and video. But with

the advent of augmented reality and virtual reality, the relationship between the

producer and the audience becomes closer. With regard to VR specifically, the idea of this technology is to trick the human senses into experiencing what the producer experiences: this is a form of first-person storytelling that attempts to place the audience where they share the same sentient experiences as the storyteller. This is the first step towards true “empathetic” storytelling. Developments in computer

software programming are only amplifying this experience, and that idea had me

looking at Artificial Intelligence and the future of human-computer data exchanges.

All that aside, my research question remains the same: What is the future of storytelling in the Digital Age? Specifically, I am curious what our stories will become by the middle of the century. They will certainly be transmitted digitally, but what form will they take? How will they be packaged and delivered? Will this new future change the narrative design of the story? And what will define the aesthetic—the artfulness—of these works? My initial idea was to address the evolution of multimedia storytelling: specifically, what media forms will influence fiction and nonfiction stories; but as my research evolved, I realized that we may be departing entirely from written stories and moving towards multimodal story forms that will transform the author/audience relationship. In the end, I address both ideas, and arrive at a place very distant from where I thought I was headed.

Back on Capitol Hill

In 1993, as President Bill Clinton was being sworn in, I was exploring the hallways of the U.S. Capitol Building with a friend. What surprised me was that the complex actually has an underground maze of corridors leading outward from the Capitol Building; on the south side, hallways lead to the three House Office Buildings, and on the north side, hallways lead to the three Senate Office Buildings. During my three-year tenure as a journalist covering Congress, I found myself weaving through this system, pausing occasionally to investigate and appreciate elements that weren’t necessarily open to the visiting public. (On one occasion, I stumbled into Vice President Al Gore’s empty Capitol Hill office and got as far as his desk before I realized my transgression.) I also spent a fair amount of time in the Rayburn House Office Building, which is where most Congressional testimony is heard. I became a permanent fixture in the House Science subcommittee hearings, sitting at the press table, listening to Congressional members’ conversations unfold as they spoke about the development of the Internet. Then as now, I felt as though I had advanced into the Digital Age years before my peers in the press corps. Hearing the Congress muse over its visions of the future was fascinating. Examples from Congressional testimony included the idea that, via the Internet, “future English literature students would be able to read Shakespeare in the original manuscript…”. This idea wasn’t remotely possible, but that wasn’t really the point of the argument; the point was the growing ubiquity of communication. Overall, these Congressional members believed the Internet would transform everything… and to their credit, it did.

That was half a lifetime ago. I was 26 and single… I had a shock of wavy blond

hair… and a mountain of student debt. I was also alive with the curiosity of the future

and, again, that curious nature had me all over Capitol Hill.

At one point, I found my way down a corridor that ran beneath the three House Office Buildings and, walking east, found a narrow pathway with a sign that read “To the Library of Congress,” and I followed it. The hallways are old and sloping, dimly lit and painted in industrial beige; fire doors break up the pathway, confusing things… but a series of signs continued to lead the way. When I finally arrived at a solid steel door at the end of the hallway, I pushed it open and stepped into the sub-basement leading into the Library of Congress Thomas Jefferson Building.

Moving upward out of the basement, I skimmed through the hallways of the concourse and the main foyer for just a few minutes before doubling back. What struck me was the grandiosity of the space, and I was overwhelmed with the idea that this museum, while beautiful, wasn’t going to serve my needs as a journalist. That was 25 years ago.

As part of this dissertation, I found that the best and most accessible library collection near my apartment in Washington, DC, was a short Metro ride away, inside the Library of Congress. In the final months of writing this thesis, I established the routine of going to the Main Reading Room at least once a week to pull texts and other materials from the archives. The institution is a wonderful resource, and I suddenly realized that my judgment as a younger man had been flawed. Of all the institutions in Washington, DC, this one—Thomas Jefferson’s gift to the Republic—must be the most valuable. It is for this understanding (and many others) that I am glad for this experience; as a graduate student I was afforded the license to return to Capitol Hill and study quietly in the great Main Reading Room with spirit and purpose.

It was also during these many visits to the library that I rediscovered the Bible exhibition and realized how, in many ways, those two books definitively illustrate the theme of this doctoral project.

Part I: Literal Landscape

Chapter 1

Publishing Revolution

The idea of the book has been with us for centuries. Initially, a book was called a codex, which was a bound collection of pages and, in the evolution of the book, elements were added to improve the reading experience, including bindings, covers, grammar, page numbers, paragraph breaks, punctuation, spelling standards and so forth. For the first 1,400 years, most books were handwritten reproductions that had religious, scientific or legal content. In this early age, the first real age, books were called “manuscripts”—a word whose Latin roots mean “written by hand”—simply because they were handwritten and illustrated by scribes and artists. As these books circulated around Europe and western Asia, standards were established and a “craft” of manuscript production was formed. By the 15th century, the handwritten book was considered an expensive commodity available only to the wealthy;1 these books were also treated as artistic works, and the purveyors of these books—the scribes and the artists who illustrated the books—became cultural celebrities.2

Wealthy families sought out celebrated scribes hoping to hire them to craft a

book, which would later become a family keepsake and heirloom. Of all the

publications that were produced at this time in history, the “Book of Hours” seems to

be one of the more popular, as thousands of them were produced over the years. The Book of Hours was a Christian prayer book, which chronicled a series of psalms and prayers and listed the saints and holy days.

1. Burke, James, and Robert E. Ornstein. The Axemaker’s Gift: A Double-edged History of Human Culture. New York: Putnam, 1995. 124.
2. Steinberg, S. H., and John Trevitt. Five Hundred Years of Printing. London: British Library, 1996. 26.

For 300 years, these books were primarily

handwritten and illuminated with gold leaf and silver and hand painted; in many cases,

the artists would personalize the book by including images of the patron family—

husband, wife and children—and, in some cases, images of famous or celebrated

family members. Over time, each unique edition was given a name reflecting the

patron: so, a book presented to Catherine, Duchess of Guelders, is now referred to as

the Hours of Catherine of Cleves.3 Another example would be the Hours of Gian

Galeazzo Visconti, which was commissioned by the ruling family of Milan in the 14th

century.4

Typically, a book would take anywhere from 18 months to several years to produce; and these books were expensive, costing an annual salary or more depending on the

quality of the paper or vellum and the amount of gold leaf and silver added to the final

work.5 Over their long history, tens of thousands of these books were produced.

The artistry of these books—these illuminated manuscripts—is quite amazing. The

books could run 100 or more pages; were written in a complex hand-drafted script

often specific to the scribe; and the hand paintings were often luxurious and

captivating. Initially, most Books of Hours were written in Latin but, over time,

vernacular editions were created in Dutch, English, Flemish, French, German, Italian,

Russian and Spanish. Ideally, the writing would be uniform, filling the pages evenly,

and the artistry was in the visual representation of the text.6

3. Catherine, and John Plummer. The Hours of Catherine of Cleves. New York: G. Braziller, 1966. 9.
4. Hamel, Christopher de. A History of Illuminated Manuscripts. Oxford: Phaidon, 1986. 183.
5. Ibid., 101.
In his book Monuments of Medieval Art, author Robert Calkins writes about

the craft:

The scribe then wrote the text, but left blank the spaces for capital letters of
varying height and, if required in the plan of the book, spaces for painted
miniatures. Small letters were written lightly in the space of the initials or in
the adjacent margin as instructions for the illuminator; these were usually
obscured by the subsequent decoration. Spaces or lines were left blank for
headings written in red ink (rubrics), sometimes supplied by the scribe himself,
sometimes by another, a rubricator.7

So, as the scribe wrote the words, the real artwork came with the added flourishes called “illumination,” which were sometimes added by the scribe but often were created by other artists. One example of the illuminator’s work is the special care given to the first letters of new chapters or paragraphs; these letters were enlarged—they are called “initials”—and they were painted ornately, often featuring images of Christ or saints, who would “dwell” inside the “bowl” of a letter; this is called an “inhabited initial.”8 In an effort to better shape your understanding, imagine the opening of a traditional children’s book: “Once upon a time…”. The letter “O” is often much larger—sometimes by several centimeters—and more decorative; this is an initial. It is a holdover, or an anachronism, from the age of the Illuminated Manuscript, which, at its height, would include inhabited initials decorated in gold leaf, which was carefully hammered into the page and then lightly painted with the image of Jesus Christ or Saint George or knights from the Crusades… and so forth.9

6. Ibid., 160.
7. Calkins, Robert G. Monuments of Medieval Art. Ithaca, NY: Cornell University Press. 211.
8. Brown, Michelle. Understanding Illuminated Manuscripts: A Guide to Technical Terms. Malibu, CA: J. Paul Getty Museum in Association with the British Library, 1994. 24.

Given that literacy levels in Europe in the early 15th century lingered well below 20 percent,10 the value of Illuminated Manuscripts could be defined by the illustrations included among the text. Books were looked at more than read. Also, as communication theorist Marshall McLuhan points out, when these books were read, they were read aloud.11 So, the book and its relationship with the audience (before the printing press) were oddly different from the modern book and its contemporary audience. McLuhan suggests that the manuscript book was more performance than literature, which makes it more of an “oral” form of communication. Communication theorist Walter Ong agreed with McLuhan, offering these observations:

Manuscript cultures remained largely oral-aural even in retrieval of material


preserved in texts. Manuscripts were not easy to read, by later typographic
standards, and what readers found in manuscripts they tended to commit at
least somewhat to memory. Relocating material in a manuscript was not
always easy. Memorization was encouraged and facilitated also by the fact that
in highly oral manuscript cultures, the verbalization one encountered even in
written texts often continued the oral mnemonic patterning that made for ready
recall. Moreover, readers commonly vocalized, read slowly aloud or sotto
voce, even when reading alone, and this also helped fix matter in the
memory.12

Ong goes on to explain that oral anachronisms remained prevalent well into the 16th and 17th centuries, and the hyphen is a symptom of that trend. The hyphen, he says, was used even in title pages to break words into syllables, reflecting how they were pronounced.

9. Ibid., 24.
10. “Literacy.” Our World in Data. Accessed February 8, 2017. https://ourworldindata.org/literacy/.
11. McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press, 1962. 94.
12. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982. 117.

But let’s reflect for a moment on the orality of the manuscript: Given their

visual nature, the fact that the written words were hard to read, that the reader often

read the text aloud and that the illustrated images often dominated the reader’s

attention, one can begin to imagine that the book, in its manuscript form, was not only

an oral instrument, it was actually a tool for recitation and, ultimately, one for

performance.13

Ong and McLuhan both argued that the manuscript was an oral instrument, a device for the ear, and that Western culture didn’t shift towards a text-based or literal instrument until after the invention of the printing press. Ong also argued that “orality” was man’s natural state, and he defines the period before writing—this pre-literate culture—as the “primary orality.”

Human society first formed itself with the aid of oral speech, becoming literate
very late in its history, and at first only in certain groups. Homo sapiens has
been in existence for between 30,000 and 50,000 years. The earliest script dates
from only 6000 years ago. Diachronic study of orality and literacy and of the
various stages in evolution from one to the other sets up a frame of reference in
which it is possible to understand better not only pristine oral culture and
subsequent writing culture, but also the print culture that brings writing to a
new peak and the electronic culture which builds on both writing and print.14

Ong suggests that we must understand orality to understand literacy and the things that came afterward. Towards that purpose, Ong writes that “primary orality” was a group activity that thrived in an exchange of spoken ideas. Speaking is the act of bringing a language to life, and the act of speech is an instant but ephemeral media form. The spoken word lives and dies in the same breath, he explains.15

13. Ibid., 63.
14. Ibid., 3.

Marshall McLuhan’s ideas are very similar, although he uses different labels to make his argument. Instead of talking about primary and secondary orality, McLuhan describes media as being either ‘hot’ or ‘cool,’ and he uses these terms to support his mantra that “the medium is the message”; by this he means the way the message is conveyed can be just as important as the content of the message itself.

“Marry me?” on a cocktail napkin can be perceived much differently than having the

same message “Marry me?” tattooed on your back. One might suggest that the

message on the cocktail napkin is more ephemeral than the message scored into the

flesh; one might also argue that the first is more spontaneous than the second, which

incorporates elements of planning, pain and self-mutilation. So, which marriage

proposal do you favor? The messages are certainly different.

On the issue of ‘hot’ and ‘cool,’ McLuhan suggests that the ease with which the audience receives these messages can also define their value. He describes radio as a

‘hot’ medium because it tends to wash over the audience, while the written word,

which he calls a ‘cool’ medium, requires intense concentration. Photographs, too,

because of the ease of perception, can also be considered ‘hot’ media, while telephone

conversations, which require a certain amount of attention, would be ‘cool.’16

Turning this idea to the manuscript, McLuhan might argue that because the manuscript is an oral instrument, something that is looked at instead of read, it could be perceived as a ‘hot’ medium. This hot medium was the dominant form of communication in Europe well into the 15th century, but an invention in the 1450s would change all of that.

15. Ibid., 8.
16. McLuhan, Marshall. Understanding Media: The Extensions of Man. 48.

Moveable Type

An obscure German goldsmith named Johannes Gutenberg is credited with

creating one of the most influential inventions of modern times: the printing press. Few argue over the impact of the printing press; it changed humankind so completely that philosophers and communication theorists are still trying to work it all out.

In 2009, Mark Dimunation, the Chief of the Rare Books collection at the

Library of Congress, was quick to point out that Gutenberg did not invent printing.

Successful print operations existed in China, Japan and Korea centuries before

Gutenberg created his press. Instead, Dimunation says that Gutenberg is the first

printer to create “a printing press with moveable metal type in Europe.”17 That aside, Gutenberg converted an old wine press into a device that would transfer black ink onto the leaves of a book. His innovations include the printing apparatus, a successful ink formula, and a metallurgical process that allowed him to create hundreds of letters and punctuation markings that could be placed in series to spell words and form sentences. All told, his printing process became the standard, and his inventions lived on well into the 20th century.18

So how did this happen?

17. The Giant Bible of Mainz. Performed by Daniel DeSimone, Mark Dimunation. Washington, DC: Library of Congress, 2006. Accessed February 7, 2017. https://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=4249.
18. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 13-17.
Johannes Gutenberg was born in Mainz, Germany, sometime between 1394 and 1406, into a wealthy family. There are no images of him and scarce few official documents about him. What we do know about Gutenberg, we learned from the lawsuit his business partner Johann Fust brought against him, suing him for the moneys Fust had lent him to build his now-famous printing press. Gutenberg lost the lawsuit, and Fust, with the aid of Gutenberg’s shop steward, Peter Schöffer, assumed control of the printing materials—including the materials being assembled for the 42-line Bible Gutenberg was manufacturing—and they ultimately printed 180 copies of the religious text in 1455. Gutenberg’s name does not appear anywhere in the book.19

Gutenberg drifted into obscurity after that, working intermittently for the

church and possibly with other print shops until, as an old man, he was issued a small

pension by the church. He died in 1468 and was buried in Mainz; the church and

surrounding cemetery were later destroyed and Gutenberg’s grave is now lost. It’s

unclear what the final years of his life were like. There is speculation that he was blind

and broke, although there is no substantial proof to that effect. When he died, few in Mainz, or elsewhere for that matter, understood his contribution to the literate world.

Today, he is known as the man who perfected moveable type, revolutionizing the

printing press and, in doing so, changed the world.20

As for Johann Fust, he took the Bibles to Paris—which, at the time, hosted the largest academic community in Europe—and proceeded to sell the books as manuscripts (or handwritten works). Given the quality and the price of the books, Fust found an easy market for his Bibles. But when the scribes of the city saw Fust’s product, they thought it impossible, given the labor of handwritten manuscripts, for one man to possess so many books, and he was accused of witchcraft. It didn’t help that some of the text in the books was published in red ink, which some mistook for blood, and the rumors of an unnatural origin only festered. Fust ultimately fled the city.21

19. Freeman, Janet Ing. Johann Gutenberg and His Bible: A Historical Study. New York: Typophiles, 1988. 68.
20. Ibid., 32.

That aside, the Fust-Schöffer partnership actually continued to thrive and

together, they built a publishing dynasty that moved from Mainz to Cologne. To

further the arrangement, Schöffer ultimately married Fust’s only daughter, and the

family went on to run one of Europe’s most successful print operations well into the

16th century.22

The 42-line Bible

One of the problems plaguing Gutenberg’s relationship with Fust was the

investor’s impatience with the artist. Gutenberg, who was a goldsmith by trade, was

meticulous by nature and experts suspect that he spent countless hours working to

master and perfect the pages of his Bible.

Many Europeans did not believe that a mechanically printed manuscript would compare with the visual standards of handwritten manuscripts. Perhaps they don’t, but Gutenberg’s first book, a Bible printed between 1452 and 1456, was stunningly beautiful and technically perfect, laid out in twelve hundred two-column pages, each column forty-two lines in length, in two volumes. It is commonly known as the 42-line Bible. In fact, it is so perfect that historians are convinced that Gutenberg must have experimented on a considerable number of pages before producing the Bible, though no such pages have ever been found. There is a decade-long gap between the time he moved to Strasbourg and the printing of his first book in which he may have been working on his idea.23

21. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 21-23.
22. Kapr, Albert, and Douglas Martin. Johann Gutenberg: The Man and His Invention. Aldershot, England: Scolar, 1996. 123-137.

By most accounts, the Gutenberg B42 is considered a masterpiece. The columns are

narrow and straight, the ink pressings are clean and sharp and the pages are tightly

aligned. The typography of the work, created nearly from scratch, is pristine, with each margin flush left and perfectly aligned; each page looks as though a human drafted every line with meticulous precision.

Again, the project took 21 months to complete; 180 Bibles were manufactured and each book contained 1,286 pages. To do all of this, Gutenberg and an estimated 15 pressmen operated two presses and pulled the press roughly 232,000 times.24 Today, there are roughly 49 copies of the Gutenberg Bible accounted for; of these, only 21 are complete texts, and six of those are here in the United States.25
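(That pressing estimate squares with simple arithmetic, assuming each pull of the press printed a single page: 180 copies × 1,286 pages comes to 231,480 pulls, or roughly 232,000.)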

Princeton University has published a PDF version of its Gutenberg Bible online.26 This specimen is still in its original leather cover, and the work is printed in Latin, primarily in black, rounded Gothic type, on paper. Each page features a pair of columns, which run 42 lines each. A clear, half-inch break divides the columns. Throughout the book, most paragraphs open with an “initial” letter—a letter larger than the other text—which is ornate and added by hand; occasionally, the initial is a much more detailed illustration known as an “inhabited initial,” which means the letter’s “counter” region is filled with bright colors and images of people, animals or things.27 According to the Princeton University website, the artist who added the flourishes and illuminated images for this book was known simply as the “Master of the Playing Cards.”28

23. Kurlansky, Mark. Paper: Paging Through History. New York: W. W. Norton, 2016. 98-117.
24. Idea Wars and the Birth of Printing. Directed by Marc Jampolsky. XiveTV, 2016. Amazon Prime.
25. Wagner, Bettina, and Marcia Reed. Early Printed Books as Material Objects: Proceedings of the Conference Organized by the IFLA Rare Books and Manuscripts Section, Munich, 19-21 August 2009. Berlin: De Gruyter Saur, 2010. 48.
26. “Princeton University Digital Library.” Princeton University. N.p., n.d. Web. 3 Sept. 2016.

To create the completed text, Gutenberg must have done several things simultaneously. First, he had to invent the moveable type, which he did by casting small molds and experimenting with metal alloys that could sustain their shapes after multiple pressings; next, he worked with ink formulas, settling on a linseed-oil-based ink; finally, he had to build a typography, or visual language, for the printed word.29 At the time, there were other printers doing similar experiments, but it was Gutenberg who found his way to a formula for mass-produced published work that met the standards of handwritten manuscripts.

So how did he do?

Let’s return to the main foyer of the U.S. Library of Congress and its exhibition of the two Bibles. On the north side of the hallway, a glass case holds Gutenberg’s 42-line Bible; on the south side is a second glass case hosting the Giant Bible of Mainz, a handwritten copy of the Bible completed in 1453. To the uneducated eye, the books could be mistaken for one another. Gutenberg’s meticulous nature, his attention to detail, his need to duplicate the handwriting of the scribe, launched a movement in bookmaking that changed the world. Stand over either Bible and peer down at it: the scribe’s handwriting is so clean it looks as though it could have been the work of a printmaker, and the text of the published Bible includes a font of such flourish that it appears as though each letter was handcrafted. This attention to detail is the genius of Gutenberg and the creation that altered the Western world.

27. Brown, Michelle. Understanding Illuminated Manuscripts: A Guide to Technical Terms. Malibu, CA: J. Paul Getty Museum in Association with the British Library, 1994. 24-25.
28. “Princeton University Digital Library -- Item Overview.” Princeton University. N.p., n.d. Web. 3 Sept. 2016.
29. Steinberg, S. H., and John Trevitt. Five Hundred Years of Printing. London: British Library, 1996. 70.

Incunabula

In the years after Gutenberg’s invention, printing operations sprang up all over central Europe, first along the Rhine River Valley and outward into England, France, Holland, Italy and Spain. The first 50 years, from 1455 to 1505, were a period of transition, one that attempted to retain the manuscript traditions as printers moved forward, innovating. This period is called the “Incunabula,” which is Latin for “of the cradle,” and it was during this time that the standards for typography and printing were formed. Early on, printers attempted to automate the text, and artists were brought in later to add illustrations and other “Illumination.” (Illumination is the process of painting illustrations and adding gold-leaf lettering.) By the end of the period, even most of the “Illumination” was done mechanically. All that aside, the Incunabula period was very productive.

Library of Congress librarian Mark Dimunation estimates that 8 to 20 million books were created during the Incunabula period; of that volume, only 100,000 exist today.30 Dimunation describes these early days of publishing as naïve and innocent, a time when the publishing community was defining itself.

Of course, many changes abounded and many aspects of the old manuscript age

faded into obscurity. Monastic scribes disappeared, the culture of handwriting began

fading and the age of mechanical reproduction ultimately did away with the artwork

endemic to the Illuminated Manuscript. But these things took time.

Historian Elizabeth Eisenstein put it this way: “By 1500, one may say with some assurance that the age of scribes had ended and the age of printers had begun.”31

One of the first things to happen was that printers created the first title pages. During the

manuscript period, scribes crafted something called a colophon, which is an

inscription explaining how the book came into existence, who paid for it and when it

was crafted; sometimes, the scribe would also add a threat or a curse hoping to

preserve the integrity of the work.32 And while those things are interesting, the most

important aspect of the colophon was its location: the scribal tradition was to place

this inscription on the last page of the work, sometimes in different colored inks.

When the printers took over, they invented the title page, which includes all the same

information found in a colophon, but the printers moved this page to the front of the

book.33

30. The Giant Bible of Mainz. Performed by Daniel DeSimone, Mark Dimunation. Washington, DC: Library of Congress, 2006. Accessed February 7, 2017. https://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=4249.
31. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 127.
32. Brown, Michelle. Understanding Illuminated Manuscripts: A Guide to Technical Terms. Malibu, CA: J. Paul Getty Museum in Association with the British Library, 1994.
As self-serving publicists, early printers issued book lists, circulars and
broadsides. They put their firm’s name, emblem, and shop address on the front
page of their books. Indeed, their use of title pages entailed a significant
reversal of scribal procedures; they put themselves first. Scribal colophons had
come last.34

Clearly, the pride in reproduction had transferred from the scribe to the printer but, reading Eisenstein, her interpretation is one of vanity. Printers certainly set aside the humility of the colophon, transforming the opening of every book into a venue for recognition.

This wasn’t the only major change in book culture. Early printers attempted to

retain the rubrication of the Illuminated Manuscript. Rubrication is the process of

adding painted images, illustrations and precious metals to the pages. During the

manuscript age, rubrication was integrated as the book was being crafted; often, the

scribe was also the one painting the margins and adding the illuminated initials. In the

early print operations, in fact even in Gutenberg’s first attempts to publish his Bibles,

printers left gaps in the text so artists could add illuminated elements later on. Doing

so actually created a hybrid book experience: one where a book was both mass

produced and personalized, as the rubrication would add flourishes that made each

volume distinct.35 Looking at existing Incunabula, one can actually see tiny letters strategically placed by the printer so the artist would know which letter to paint into the space. In many cases, these books never got illuminated and the gaps remain unpainted.

33. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 149.
34. Whitlock, Keith. The Renaissance in Europe: A Reader. New Haven: Yale University Press, 2000. 67.
35. The Giant Bible of Mainz. Performed by Daniel DeSimone, Mark Dimunation. Washington, DC: Library of Congress, 2006. Accessed February 7, 2017. https://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=4249.

Concurrently, as a byproduct of the printing press, printers sought ways to

mechanically reproduce images too. They ultimately arrived at a process called

xylography, which is the process of carving images into blocks of wood and using

these “woodcuts” to illuminate the books mechanically. The first woodcuts appeared in print around 1461 and, by 1500, most rubrication had been replaced with woodblock printing and a new artist culture had been created.36

Like the letterforms used by printers, woodblocks were carved so high areas

would catch the ink, and lower areas would create white space on the page. Given the

sophistication of Gutenberg’s ink, a well-carved woodcut could offer precise images

on the page. The movement, again, transformed the printing operation, and the artists who could create these detailed carvings were celebrated throughout Europe.

One of the more famous artists from this period was Albrecht Dürer, a painter,

a printmaker and a woodcut pioneer. With an eye for detail, linear perspective and an

interest in the authentic, his illustrations and paintings were and are examples of stark

realism. In the realm of publishing, he’d draw complex images onto blocks of wood and then commission woodworkers to carve the images, creating ornate woodblocks. His works continue to be a celebration of mass-produced art and he is known today by many as the “Apelles Germaniae,” or the best German artist of all time.37

36. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 46-101.
37. Schröder, Klaus Albrecht, Ernst Rebel, Andrew Robison, and Albrecht Dürer. Albrecht Dürer: Master Drawings, Watercolors, and Prints from the Albertina. Washington: National Gallery of Art, 2013.

Another major transformation to take place during this time was the introduction of new fonts. Gutenberg’s original font, which was designed to look like

manuscript, is a heavy dark Gothic font called “black letter” or “Textura.”38 The font

was also difficult to read: This is Gutenberg’s Textura font.

Looking to do better, a printer in Venice named Nicholas Jenson decided to

design his own typeface. His letters were sharper, clearer and more mechanical

looking; they had some flourish but weren’t overly ornate; the font was also much

easier to read. Jenson called the font “Roman,” and it quickly became the standard.

Part of the lasting influence of Jenson’s fonts is their extreme legibility, but it
was his ability to design the spaces between the letters and within each form to
create an even tone throughout the page that placed the mark of genius on his
work.39

Jenson’s Roman font still exists today simply because it was the inspiration for later roman typefaces, including Times New Roman, which remains one of the leading fonts. (Even the font you are reading—Times New Roman—is a derivative of Jenson’s Roman typeface.) Jenson went on to design a series of other letterforms, and all have had a lasting influence on modern publishing.40

By 1501, the age of the Incunabula was declining. Today, the British Library hosts a database entitled the “Incunabula Short Title Catalogue,” and its website boasts a catalogue of 30,375 editions.41 This database is really the central catalogue

38. McLean, Ruari. The Thames and Hudson Manual of Typography. London: Thames and Hudson, 1988.
39. Meggs, Philip B., and Alston W. Purvis. Meggs' History of Graphic Design. Hoboken, NJ: J. Wiley & Sons, 2006. 86.
40. Ibid., 87.
41. "Incunabula Short Title Catalogue." The British Library. February 04, 2015. Accessed February 08, 2017. http://www.bl.uk/catalogues/istc/.

system for researchers around the world, and it cross-references books by author, title, publishing date and volume location. If someone is looking to find any book from the Incunabula era, this repository directs researchers to those details.

Searching by date of publication, I noted that in 1495, 1,690 books are listed as

published that year; in 1500, the peak year, the catalogue lists 2,862 titles; and in

1505, only 230 Incunabula are listed.42 Although this method is far from scientific, it

does illustrate a decline in volume, which may echo the volume of books published

during that period.
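Because the observation is purely arithmetic, it can be restated in a few lines of code. The sketch below is a minimal illustration in Python, assuming only the three edition counts quoted above from my ISTC searches; it is not an export from the catalogue itself.

# Illustrative arithmetic only: the counts are the ISTC edition totals
# quoted in the paragraph above, keyed to their years of publication.
istc_editions = {1495: 1690, 1500: 2862, 1505: 230}

peak = max(istc_editions.values())  # the 1500 peak
for year in sorted(istc_editions):
    count = istc_editions[year]
    print(f"{year}: {count:>5,} editions ({count / peak:.0%} of the 1500 peak)")

# Output:
# 1495: 1,690 editions (59% of the 1500 peak)
# 1500: 2,862 editions (100% of the 1500 peak)
# 1505:   230 editions (8% of the 1500 peak)

As noted, the method is far from scientific; the sketch simply makes the scale of the drop-off explicit.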

Of course, the Incunabula period was very important; it marked the creative transformation in the world of publishing. In addition to introducing new typefaces and printing processes, it ended the age of the handmade book, setting us down a path of mechanical reproduction. In the process, the cost of manufacturing books dropped exponentially, allowing publishers to sell titles at prices far below those paid for manuscripts. This period triggered economic growth, the emergence of a new merchant class, and the introduction of a middle class in Europe. Books were also published in local dialects, allowing the poor to learn to read; and this growth in literacy would have a lasting effect on the fate of Europe.43

Mechanical Reproduction

Looking at the theory, Walter Benjamin’s essay “The Work of Art in the Age of Mechanical Reproduction” strikes me as an important piece of work relevant to

42. Ibid.
43. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982. 77-113.

Gutenberg and his printing press.44 Clearly, with the invention of the printing press,

Gutenberg created one of the first industrial machines that allowed a small workshop

to mass produce a product. In doing so, Gutenberg transformed the book from a work

of art to a work of industry.

The Greeks knew only two procedures of technically reproducing works of art:
founding and stamping. Bronzes, terra cottas, and coins were the only art
works which they could produce in quantity. All others were unique and could
not be mechanically reproduced. With the woodcut graphic art became
mechanically reproducible for the first time, long before script became
reproducible by print. The enormous changes which printing, the mechanical
reproduction of writing, has brought about in literature are a familiar story.
However, within the phenomenon which we are here examining from the
perspective of world history, print is merely a special, though particularly
important, case. During the Middle Ages engraving and etching were added to
the woodcut; at the beginning of the nineteenth century lithography made its
appearance.45

When the scribe was expelled from the process, the artfulness of his handwriting was

replaced with mechanical lettering; when illustrators were replaced with the woodcut,

the same transformation took place. In his essay on the subject, Benjamin doesn’t

mention Gutenberg, but he does write about printing and his message is clear: When

the printing press was born, the artful “aura” of the manuscript was destined to die.

During the Incunabula period, the artful “aura” of publishing was preserved for a time, but soon the handmade product was gone, and with it, uniqueness and oneness gave way to sameness. The craft of the book was dead.

But the printing press did give birth to a new form of artist: with the drop in the cost of printing, publishers began scrambling for books to print, and the demand for content fostered an age of writers. Until this point, and as odd as it seems, the idea

44. Benjamin, Walter, Hannah Arendt, and Harry Zohn. Illuminations. New York: Harcourt, Brace & World, 1968. 217-252.
45. Ibid., 217-252.

of the “writer” was nearly alien to the world of books. As an example, in the 13th

century Saint Bonaventura said that there were only four ways of making books:

A man might write the works of others, adding and changing nothing, in which
case he is simply called a “scribe” (scriptor). Another writes the work of others
with additions which are not his own; and he is called a “compiler”
(compilator). Another writes both others’ work and his own, but with others’
work in principal place, adding his own for purposes of explanation; and he is
called a “commentator” (commentator)… Another writes both his own work
and others’ but with his own work in principal place adding others’ for
purposes of confirmation; and such a man should be called an “author” (auctor).46

Reading through the list carefully, one sees a recipe for scribes and the craft of

reproduction but there is no description here that accounts for wholly original works

of writing.

In the 16th century, everything changed and one of the first new ideas was the

practice of celebrated authorship. In England and throughout Europe there certainly

were original works, but the act of printing amplified the audience for this work and

one of the first authors to benefit from this new age of ‘writing’ was Geoffrey

Chaucer, a semi-obscure poet who created a chronicle of stories about pilgrims

moving across England. Although Geoffrey Chaucer (1343 to 1400) was long dead, a clerk in the court of Henry VIII named William Thynne gathered the collected works of Chaucer (all in manuscript with the exception of one print edition), edited them into two volumes and published The Works of Geffray Chaucer in 1532. When they were finished, he presented the volumes to King Henry VIII, who loved them. With that,

46. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 59.

Chaucer’s work was resurrected, the King found favor with the idea of printed

materials, and Thynne went on to be a celebrated printer in London.47

The result of this zealous activity was, critics have concurred, a significant
contribution to Chaucer scholarship, the first real ‘edition’ of the poet rather
than simply a collection of writings. It added a number of authentic pieces,
amounting to around 15,000 words to the canon: notably The Romaunt of the
Rose, The Legend of Good Women, Boece, The Book of the Duchess, and The
Treatise of the Astrolabe among the longer works, and the shorter poems ‘Pity’
and ‘Lack of Steadfastness’, as well as a good deal of apocrypha. But the 1532
Chaucer did more than gather together new and superior examples of the
poet’s writings; it placed those writings confidently in the context of a wider
history of English literature.48

Because the book is dedicated to King Henry VIII, historian Greg Walker argues that

Thynne assigned the responsibility of Chaucer’s legacy to the monarch. Henry did not

disappoint and Chaucer became known as the “father” of the next generation of

writers… and for good reason. Chaucer’s work was written primarily in English, a fact

that makes him the inventor of “the English language itself as a poetic medium,”

writes Walker.49 In the preface to the works, Thynne suggests that Chaucer’s literary mastery and use of the English language elevate it to heights equal to those of the other major European languages.

Just as Chaucer was to refine and polish English letters, so the Thynne of the
Preface declares his task as editor to be to clean and polish the Chaucerian text,
removing its fifteenth-century scribal accretions and restoring it to its original
glory. Hence, motivated by patriotism and loyalty (‘in manner appertinent unto
my duty and that of very honesty and love to my country’), he began heroically
to recover the lost glories of Chaucerian English, and of English history
itself.50

47. Walker, Greg. Writing under Tyranny: English Literature and the Henrician Reformation. Oxford: Oxford University Press, 2005. 56-72.
48. Ibid., 56-72.
49. Ibid., 56-72.
50. Ibid., 56-72.

Henry certainly saw the value here. Thynne armed the King with a weapon for

propaganda and nationalism. In the preface, Thynne framed Chaucer as a “moderate, consensual figure, not a radical one,” and the King saw an opportunity to use Chaucer

as a tool to construct a new English cultural legacy.51

In fact, one might argue that King Henry VIII wielded Chaucer’s writing to

construct the modern sense of English identity and a key component of that was the

wholesale invention of the English literary tradition. It certainly worked. By the end of

the 16th century, England was home to Francis Bacon, John Donne, Ben Jonson,

Christopher Marlowe, William Shakespeare and Sir Philip Sidney among many

others, and writing had been transformed.

Take, for example, the opening lines of Ben Jonson’s “Song to Celia,” published in 1616:

Drink to me only with thine eyes,
And I will pledge with mine;
Or leave a kiss but in the cup,
And I’ll not look for wine.52

The now all-too-famous work is a love letter to Celia telling her that a glance from her would be rewarded with his undying allegiance, and that the mere touch of her lips to a glass would replace his thirst for wine. Clearly, this complex message would have taken another form had the suitor simply approached her and explained his feelings. But this is not the mission of literature. Literature, writes Terry Eagleton, is

51. Ibid., 56-72.
52. Jonson, Ben. "Song: To Celia [“Drink to Me Only with Thine Eyes”]." Poetry Foundation. Accessed March 13, 2017. https://www.poetryfoundation.org/poems-and-poets/poems/detail/44464.

not about the economy of words; it is about the aesthetic of sentence structure.53 Part of Ben Jonson’s message isn’t just the fact that the subject desires Celia’s attention, but also that his feelings for her have inspired him to express those desires in a poetic form that is altogether complex, curious and beautiful.

A host of other writing forms followed on the heels of poetry. Shakespeare, of

course, mapped out a modern vision for plays; and a series of essayists followed soon

after. The essay, by definition, presents the writer’s own thoughts, and literary criticism was the foundation for this form;54 author Samuel Richardson, a printer, is credited with

writing the first English novel Pamela, which was published in 1740; and author

Nathaniel Hawthorne is credited with inventing the short story when he published

Twice-Told Tales in 1837.55

For a long time it has been the business of academics to build genealogies for
the Novel that challenged Ian Watt’s narrative of its ‘rise’ via Defoe and
Richardson and Fielding. Yet the many prehistories of the Novel that try to
make Richardson’s achievement appear less surprising miss a simple truth: his
contemporaries did think that Richardson’s creation was unprecedented. Many
disliked it for just this reason. As the anonymous work’s authorship became
known, the fact that he was a 51-year-old printer, a businessman with no
literary track record, emphasized the sense of Pamela as a book that came from
nowhere. In a rush it became disputed, admired, parodied, reviled. Suddenly,
and as it happened, irreversibly, the Novel became a genre with the potential to
be morally serious.56

With that, we see how printing transformed writing.

53. Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of Minnesota Press, 1983. 15-46.
54. Harmon, William. A Handbook to Literature. Boston, MA: Longman, 2012. 131-145.
55. Baron, Naomi S. Words Onscreen: The Fate of Reading in a Digital World. New York: Oxford University Press, 2016. 26.
56. Mullan, John. "LRB · John Mullan · High-Meriting, Low-Descended: The Unpolished Pamela." London Review of Books. December 11, 2002. Accessed July 18, 2017. https://www.lrb.co.uk/v24/n24/john-mullan/high-meriting-low-descended.

Before the printing press, the act of manuscript writing was really the act of

physical recitation: the scribe rewrote the content of another book, personalizing the physical form of the letters by adding flourishes, including initials and so forth, but

the words themselves did not change. After the printing press, the language of the

sentence changed; writing had gone from something visually outward to something

more contextually insular. No longer gazed at as a keepsake and a commodity, as the illuminated manuscript had been, the book now carried its aesthetic value inside the

language of the text. We didn’t need “inhabited initials” or gold leaf or illustrations of

the family inserted into the book; instead, we had to read the context of the story—and

the language of each sentence—to find the art. Unlike any other work of art, the

artfulness of the book was inverted by the machinery of the printing press.

If we look to Walter Ong and Marshall McLuhan for answers, they’d simply tell us that with this inversion, the orality of the manuscript was eradicated, replaced with a domain dominated by legible, mechanical lettering. The performance was over. The new

age of literacy had begun and over the next 550 years, the written word evolved,

moving ever internally, finally arriving in a very private place: the space between our

ears. Today, readers sit in quiet places, alone with their thoughts, grazing with their

eyes across the pages of textbooks and novels, listening to the author’s narrative as it

plays out silently inside their minds.

Burned in Anger

In the opening decades of the 16th century, literacy flowered across Europe.

Armed with mass-produced books, Europeans began learning to read in astounding

numbers, and the demand for more books grew. This, of course, triggered demand for books across a wider array of disciplines, and soon titles on history, navigation, the sciences and other topics joined

theology texts. Education centers also opened and the demand for books only

escalated. If the second half of the 15th century was about the transition, the 16th

century was about the affirmation of literacy. Reading became the dominant form of

mass communication in Europe and that fact would have consequences.

The relationship between Gutenberg’s printing press and the Protestant

Reformation has been widely documented and most scholars believe that the

introduction of the former fostered the division of the Church of Rome. The Reformation

was not a quiet, peaceful time.

Starting in 1517 with Martin Luther’s list of grievances against the Pope and

the Roman Catholic Church, the Reformation tore Europe in half. On one side stood

the loyal Catholics, on the other Protestants who were forming and defining their own

sense of faith. The publishing community followed the schism and a litany of Bibles

was published favoring diametrically opposed theological beliefs. The Catholics

published their Bibles; the Protestants published their Bibles; and often, factions from

one group seized and burned religious materials from the opposing faith. For the next

50 years, books were burned in anger all across Europe and the intellectual fallout

from that movement is still incalculable. Historian Matthew Fishburn wrote about the

time period:

…protest fires are a common symptom of social upheaval. A few years later,
fires would mark the beginning of a more lasting revolution, when Martin
Luther burned the bull demanding his excommunication along with the
writings of his enemies under a large oak outside the walls of Wittenberg. The

Catholic authorities responded in turn, with his 95 Theses ordered burned as
heretical by the Theological Faculties of Louvain and Cologne in 1519, and
after his excommunication by Leo X he was burned in effigy alongside his
books.57

Martin Luther’s influence didn’t stop there.

Because he believed in the literal interpretation of the scriptures of Paul—from “Paul’s Letter to the Galatians”—Luther held that every man had the ability to read and interpret the language of the Bible in his own way. His idea, simply, was to strip the power of the pulpit and hand it over to the congregation; in doing so, he threatened to marginalize the authority of the priests and, ultimately, the papacy. This idea was the essence of Catholic heresy and one can see why the Church of Rome denounced him immediately. But Luther was undaunted. During his appearance before the Diet of Worms in 1521, Luther explained those beliefs to Emperor Charles V and then retired into exile, where he began translating the Bible into German.58

Historians Heidi White and Michael Shenefelt believe this was a pivotal moment in publishing:

The first person to foresee this new role for vernacular writing had been the
Protestant leader Martin Luther. Back in the 1520s, Luther had translated the
Bible into German. He was the first widely circulated author to realize that
though the printing press had opened the possibility of a new, mass readership,
the books it produced would need to appear in a language the mass of readers
could understand. His translation of the Bible was quickly imitated by other
translators working in other languages. The circulation of these translated
Bibles then became the model for the publication of other vernacular
books….59

57. Fishburn, M. Burning Books. Place of Publication Not Identified: Palgrave Macmillan, 2014. 6.
58. Mullett, Michael A. Martin Luther. London: Routledge, 2015.
59. Shenefelt, Michael, and Heidi White. If A, Then B: How the World Discovered Logic. New York: Columbia UP, 2013. 161.

The turn towards vernacular publishing also threw wide the opportunities for other

texts beyond the theological. On this point, Walter Ong is clear: he believed that

“Learned Latin,” which was the language of the academy and scholarship, was

impersonal and that vernacular languages—he calls them “mother tongues”—were

more intimate and personal.

But Learned Latin remained always distanced even in its literary use. It was
always insulated from the writer’s infancy. As noted before, it knew no baby
talk. There was no way to conceive of anything such as Swift’s Journal to
Stella in Learned Latin. This sounds trivial, but it also means that the areas of
consciousness and of the unconscious surfaced in Finnegans Wake were
unreachable in Learned Latin, as were the areas of experience which figure in
Virginia Woolf’s novels. Learned Latin was a literary medium in a specialized,
distanced sense. Moreover, its use as a literary medium was, until the nineteenth
century, much less widespread than its use for formally academic,
administrative, or liturgical purposes.60

Ong explains that Learned Latin was the language of science and bureaucracy and

lacked the nuance of the literary aesthetic. When Martin Luther translated the Bible

into German, he made the scriptures available to the common man and, in doing so,

blazed a trail for other books published in local dialects. The printing community took

up the cause and soon scores of book titles were published in the vernacular.

Monarchs also used the period of Reformation as license to seize the property

and possessions of religious institutions. There were many offenders but England’s

King Henry VIII may have been the most aggressive. During his reign, he issued a

series of edicts that first dispatched one of his courtiers, Thomas Cromwell, to inspect

the monasteries, and later, to seize the wealth of these institutions. In the process, King

60. Ong, Walter J. Interfaces of the Word: Studies in the Evolution of Consciousness and Culture. Ithaca: Cornell University Press, 1977. 33-36.

Henry VIII appropriated scores of monastic libraries, often dispensing with their contents, burning books and other ephemera.61

The effect of the discovery of printing was evident in the savage religious wars
of the sixteenth and seventeenth centuries. Application of power to
communication industries hastened the consolidation of vernaculars, the rise of
nationalism, revolution, and new outbreaks of savagery into the twentieth
century.62

Again, the impact of these actions is incalculable but one can certainly imagine

why the catalogue of remaining Incunabula is so comparatively small. The

Reformation exacted a price for literacy and that price was a bounty of books

published before 1520.

In a lecture on books from the Incunabula period, librarian Mark Dimunation estimates that roughly 8 to 20 million Incunables were published during this 50-year period, and today, only 100,000 exist.63 Dimunation doesn’t mention book burning or

how these books disappeared but the timing of the Reformation is key, and book

burning was probably a major factor in their demise.
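The survival rate implied by Dimunation’s figures is worth making explicit. What follows is a minimal sketch, assuming only the bounds cited above (8 to 20 million printed, roughly 100,000 surviving):

# Illustrative arithmetic only, using the bounds cited in Dimunation's lecture.
survivors = 100_000
for printed in (8_000_000, 20_000_000):
    rate = survivors / printed
    print(f"{printed:>10,} printed -> {rate:.2%} survive")

# Output:
#  8,000,000 printed -> 1.25% survive
# 20,000,000 printed -> 0.50% survive

By either bound, barely one volume in a hundred survived, a figure consistent with the losses to burning and fire described in this section.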

The Great Fire of London in 1666 was certainly another. This fire, which

started in a London bakery, burned for five days, taking with it the historic

61. Walker, Greg. Writing under Tyranny: English Literature and the Henrician Reformation. Oxford: Oxford University Press, 2005. 252-280.
62. Innis, Harold Adams. The Bias of Communication. Toronto: University of Toronto Press, 1951. 29.
63. The Giant Bible of Mainz. Performed by Daniel DeSimone, Mark Dimunation. Washington, DC: Library of Congress, 2006. Accessed February 7, 2017. https://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=4249.

neighborhoods, including the row of publishing houses established there, destroying

thousands of books in the process.64

The Commodity of Thought

The printing press certainly transformed book culture, moving it from the realm of the homespun operation into the dominion of industrialization. What began as a cottage industry of small printing communities along the Rhine River Valley exploded outward as printers established shops across the whole of the continent.

Communication theorist Harold Innis argued that “[i]n the early stages of

printing, presses were of a family character and were operated by savants interested in

art. They were replaced by an industry operating for profit and the working classes

were separated from the masters. Large establishments of 250 workers had

emerged.”65 It was under this new evolving system that the “master printer” arose as

the central figure in the world of publishing. This character was the leader of the

process and, as such, he possessed a sweeping command of all of the mechanics of the

printing press, the process of forming typefaces and arranging press runs; he had to be

literate, multilingual and intuitive; and he had to be able to see a press run move smoothly through the process. He also had to separate himself from the clerks

64. Peter Ackroyd's London. Performed by Peter Ackroyd. Peter Ackroyd's London Part 1: Fire and Destiny. May 27, 2014. Accessed April 2, 2017. https://www.youtube.com/watch?v=wEKQb6IDO0Q.
65. Innis, Harold Adams, William Buxton, Michael R. Cheney, and Paul Heyer. Harold Innis's History of Communications. Lanham: Rowman & Littlefield, 2015. 87.

who worked the presses. This division of labor, and the subsequent industrialization of the printing world, proved very successful. Innis writes about this progress:

The output of the press had materially increased. Whereas the Gutenberg press
produced one leaf every three minutes or 300 leaves in fourteen hours, in 1571
after the introduction of the glissiere, tympan, and frisquette, production
reached 200 leaves per hour or 3,500 leaves per day of fifteen to sixteen hours.
In 1572, there were the beginnings of an arbitration tribunal. Masters were
limited to two apprentices (one for the press and one for the case), who knew
how to read and write for three years. Maximum wages in Paris were fixed at
18 livres tournois per month or 7 sols per day. Masters were not allowed to
interrupt work for more than three weeks and were required to give eight-days
notice, as compagnons had been required to do. In 1586, masters were required
to have two presses. While printers were paid from one half to two thirds
higher than other workers by virtue of the higher skill involved, they were able
to establish an effective organization within the first century, after the
invention was introduced.66

The master printer became one of the dominant figures of the newly established middle class as the popularity of published works grew. The expense of publishing books dropped exponentially compared to the cost of manuscripts, and this made books more accessible to the working classes. The Master Printer was instrumental in that shift, and Elizabeth Eisenstein’s research reveals that fact:

As the key figure around whom all arrangements revolved, the master printer
himself bridged many worlds. He was responsible for obtaining money,
supplies, and labor, while developing complex production schedules, coping
with strikes, trying to estimate book markets, and lining up learned assistants.
He had to keep on good terms with officials who provided protection and
lucrative jobs, while cultivating and promoting talented authors and artists who
might bring his firm profits or prestige. In those places where his enterprise
prospered and he achieved a position of influence with fellow townsmen, his
workshop became a veritable cultural center attracting local literati and
celebrated foreigners, providing both a meeting place and message center for
an expanding cosmopolitan Commonwealth of Learning.67

66. Ibid., 89.
67. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 56.

Eisenstein says that Gutenberg’s apprentice, Peter Schöffer, was one of the first to make the transition from manuscript scribe to book printer, which she described as “a

genuine occupational mutation.”68 To succeed, Schöffer had to possess a series of

publishing skills including the ability to read and write; the ability to read several

languages including German and Latin; the skill to design and cast new letters; the

technique to operate a printing press; the ability to manage a single press run; the skill

to assemble the collected pages into a single book; and the ability to manage the

finances and the personnel necessary to make the process work. Because Schöffer was

the first, he became the model for the Master Printer and his standards endured for

several generations.69

By the end of the century, Schöffer had risen to a position of eminence in the
city of Mainz. He commanded a “far-flung sales organization,” had become a
partner in a joint mining enterprise, and had founded a printing dynasty. His
supply of types went to his sons upon his death, and the Schöffer firm
continued in operation, expanding to encompass music printing, through the
next generation.70

For the first 50 years after the birth of the printing press, most press runs were

published editions of manuscripts but that source soon ran dry. By 1500, printers were

seeking out other sources and they started with theological texts before advancing into

scientific and medical texts. The university system also demanded more textbooks,

which then, as now, were very profitable. There was also a demand for vernacular

68. Ibid., 57.
69. Ibid., 43-160.
70. Ibid., 43-160.

texts, or books published in languages other than Latin. By 1600, 96 percent of the

books published in Europe were in languages other than Latin.71

As for the media theory here, writing something down makes a thought tangible, and tangible ideas, removed from the author, are commodities, which can be published and sold. Eagleton’s fear was that once the idea was removed from the thinker, it could take on a life of its own; in the case of capitalism and its emergence in Europe, the written word was a commodity to be exploited, and a litany of associated industries sprang up around the published word. One of the most definitive of these was the author.

The English Author

The work of the scribe was not to write new books but to rewrite or duplicate existing ones and, in most cases, these books were theological texts simply

because most scribes were monks. Author Richard Fine wrote about the monastic

scribe:

With the decline of Rome, cultural activity, including writing, withdrew into
the Christian monasteries. The Church supported the production of texts, as
monks labored in scriptoria; so arduous was the process of making manuscripts
that such copying was commended as a labor of love. There were no lay
writers, there was very little original writing, and literacy itself was severely
limited. Authorship in the modern sense simply did not exist. As Elizabeth
Eisenstein concludes, within the medieval world view “a writer is a man who
‘makes books’ with a pen, just as a cobbler is a man who makes shoes on a
last.”72

71. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 43-160.
72. Fine, Richard. James M. Cain and the American Authors' Authority. Austin: University of Texas Press, 1992.

And demand was rising for new content. At the same time, England found itself

coming to terms with the reality that, during the 15th century, it had fallen well behind continental Europe in the world of publishing. In fact, it wasn’t until 1473 that the first London printer, William Caxton, published the first printed book in the English language; that book was Recuyell of the Historyes of Troye. He later went on to create an English translation of Aesop’s Fables among other notable works.73 Caxton was later joined by printers Wynkyn de Worde and Julian Notary, but this small group of publishers was the exception, not the rule, and the government took action to bring more printers to London.74

Demand steamed so far ahead of domestic production in the late fifteenth century that Richard III enacted a law to encourage book importation and to
attract foreign-trained printers to England. The Statute of Richard succeeded so
well that production soon exceeded demand, and in 1533, under pressure from
domestic printers, Henry VIII reversed his predecessor and banned the sale of
imported books.75

By 1557, the government granted London’s 20 most important printers a virtual

monopoly when it established a guild called the Stationers’ Company. This guild

dominated the whole of England, controlling every aspect of the publishing world, and

it was from this guild that the original doctrines for copyright law were generated. It

was in this environment, and under the control of the Tudor monarchy, that literary

works began to gain traction and the craft of writing became a true commodity.76 But

there was also a problem: once the printer commissioned a piece of writing, the writer

73. Duff, E. Gordon. William Caxton. New York: B. Franklin, 1970.
74. Fine, Richard. James M. Cain and the American Authors' Authority. Austin: University of Texas Press, 1992.
75. Ibid.
76. Ibid.

lost control of the work, often under the harshest of legal conditions, as historian Richard Fine observed:

The system of prior censorship conceived by the state and zealously implemented at Stationers’ Hall kept authors in a precarious position until the
end of the seventeenth century. Severe penalties were meted out to those
whose work was deemed treasonable or blasphemous, or who violated the
licensing provisions. Printers acquired manuscripts by fair means or foul, and
in one historian’s memorable phrase, the author under licensing “was much
more likely to have his ears cropped off than his purse filled.”77

And yet, the latter half of the 16th century marked a great golden age of English

literature. The works of Sir Thomas Malory and others were rescued from their

manuscript forms and print editions were made in English, not Latin.78

This move to re-introduce these writings in England helped shape the culture of the country in several ways: first, publishing created the idea of the nation-state; second, printing in the vernacular further supported that idea. Marshall McLuhan

calls this the “phenomenon of nationalism” and quotes an editorial essay by Simone de

Beauvoir to make his point:

…and to obtain this it is almost necessary, in our age, to be a member of a national community that has, along with whatever moral and aesthetic
excellences, the quite vulgar quality of being in some degree powerful—of
being regarded attentively by the world and, most important, listened to. The
existence of such a community seems to be a precondition for the emergence
of a national literature sufficiently large in extent and weighty in substance to
fix the world’s eye and give shape to the world’s imagination; …it was the
writers themselves who helped call into being the thing called “national
literature”. At first, their activity had a pleasing artlessness about it… Later
under the spell of the Romantic movement, moribund languages were revived,
new national epics were composed for nations that as yet barely existed, while

77. Ibid.
78. "BBC Radio 4 - In Our Time, Caxton and the Printing Press." BBC News. October 18, 2012. Accessed February 10, 2017. http://www.bbc.co.uk/programmes/b01nbqz3.

literature enthusiastically ascribed to the idea of national existence the most
supernatural virtues…79

So pride in nation came from literature. McLuhan says that France certainly believed

this and led the nationalist charge in the 15th and 16th centuries; but the English found

their identity in national literature too. Theorist Terry Eagleton certainly believes this.

In his book Literary Theory, he devotes an entire chapter entitled “The Rise of

English,” explaining the power English literature had over English national identity. In

the 18th century, England emerged from its destructive civil war searching for a global

identity and seized upon what he describes as “neo-classical notions of Reason,

Nature, order and propriety, epitomized in art….”80 At the same time, on the

continent, as the French and Germans contemplated the philosophies of life, the

English search for identity had them looking inward, reflecting upon their budding

literary tradition.

From the 16th and 17th centuries, they had the writings of Chaucer and Sir Thomas Malory, who were followed by John Donne, Ben Jonson, Christopher Marlowe, Sir Walter Raleigh, William Shakespeare, Edmund Spenser and Sir Philip Sidney. The collection included poets, playwrights and storytellers. Eagleton suggests

that the heart of a true Englishman is a long-held respect for these authors and poets.

To be English, one had to know their collective works, the plays, the verse, the literature of English prose. Of course, it’s unclear whether Shakespeare, for example, was a

79. McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press, 1962. 199.
80. Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of Minnesota Press, 1983. 15.

great literary genius, or whether—because of the availability of his published works—his greatness was forged.

Shakespeare was not great literature lying conveniently to hand, which the
literary institutions then happily discovered: he is great literature because the
institution constitutes him as such. This does not mean that he is not ‘really’
great literature—that it is just a matter of people’s opinions about him—
because there is no such thing as literature which is ‘really’ great, or ‘really’
anything, independently of the ways in which that writing is treated within
specific forms of social and institutional life.81

It was criticism, Eagleton argues, that established the definitions of greatness among

the English reading public. Shakespeare certainly wrote about his love for England

and his works became commonly quoted as tomes of adulation for the nation. Take,

for example, the following lines from his play Richard II:

This royal throne of kings, this sceptered isle,
This earth of majesty, this seat of Mars,
This other Eden, demi-paradise,
This fortress built by nature for herself
Against infection and the hand of war,
This happy breed of men, this little world,
This precious stone set in the silver sea,
Which serves it in the office of a wall
Or as a moat defensive to a house,
Against the envy of less happier lands,
This blessed plot, this earth, this realm, this England.82

Henry VIII certainly got what he wanted: an English literary tradition steeped in the mythology of hope and bounty. Shakespeare and his peers were the architects of that grand literary mission.

By the 18th century, the idea of ‘literature’ began to shift, paring away other genres to make way for a clearly defined class of writing. Eagleton says the new definition of literature was narrowed to the realm of the “imaginary.”

81. Ibid., 176.
82. Shakespeare, William. Richard II. Hamburg, Germany: Tredition GmbH, 2015.

But by the time of the Romantic period, literature was becoming virtually
synonymous with the ‘imaginative’: to write about what did not exist was
somehow more soul-stirring and valuable than to pen an account of
Birmingham or the circulation of blood. The word ‘imaginative’ contains an
ambiguity suggestive of this attitude: it has a resonance of the descriptive term
‘imaginary’, meaning ‘literally untrue’, but is also of course an evaluative
term, meaning ‘visionary’ or ‘inventive’.83

During the next century, the popularity of English writers would rise and fall until, finally, in the wake of World War I, a resurgence of English chauvinism again elevated the influence of English literature.

England’s victory over Germany meant a renewal of national pride, an upsurge of patriotism which could only aid English’s cause; but at the same time the deep trauma of the war, its almost intolerable questioning of every previously held cultural assumption, gave rise to a ‘spiritual hungering’, as one contemporary commentator described it, for which poetry seemed to provide an answer. It is a chastening thought that we owe the University study of English, in part at least, to a meaningless massacre. The Great War, with its carnage of ruling-class rhetoric, put paid to some of the more strident forms of chauvinism on which English had previously thrived: there could be few more Walter Raleighs after Wilfred Owen. English Literature rode to power on the back of wartime nationalism; but it also represented a search for spiritual solutions on the part of an English ruling class whose sense of identity had been profoundly shaken, whose psyche was ineradicably scarred by the horrors it had endured. Literature would be at once solace and reaffirmation, a familiar ground on which Englishmen could regroup both to explore, and to find some alternative to, the nightmare of history.84

That pride in English Literature lingers today and, in fact, has found its way into the

British colonial literary network where it thrives in nations including the United

States, Canada and Australia among many others.

As it happens, the literary and publishing cultures of the British Commonwealth nations and the United States all trace their influences and origins to the professional printing culture established in the 16th and 17th centuries. And that

83. Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of Minnesota Press, 1983. 16.
84. Ibid., 26.

modern publishing model began rather modestly in the streets of London with little

hand notices called pamphlets.

The Pamphlet

The pamphlet is a simple publishing device, which traditionally consisted of a

few pieces of paper that were folded in half or in quarters and were loosely bound.

Although it existed in manuscript form before the printing press, the pamphlet rose in influence after printing became the standard across Europe. The first pamphlets

were used as marketing tools but later became devices for instruction and recruitment.

One can also imagine, although there is little research supporting this idea, that the

pamphlet was a tool used to train apprentices in the craft of print production.

Early examples of the pamphlet and its history can be traced to the source: as print operations began defining themselves along the Rhine River Valley, Mainz Master Printer Peter Schöffer began expanding his operations to ward off competition. In addition to printing books, he began distributing handbills, circulars and sales catalogues that boasted of the quality of his print operations.

The drive to tap markets went together with efforts to hold competitors at bay
by offering better products or, at least, by printing a prospectus advertising the
firm’s “more readable” texts, “more complete and better arranged” indexes,
“more careful proof-reading” and editing. Officials serving archbishops and
emperors were cultivated, not only as potential bibliophiles but also as potential customers who issued a steady flow of orders for the printing of ordinances, edicts, bulls, indulgences, broadsides and tracts.85

It was in this environment that changes in the Catholic Church triggered a new

publication, the pamphlet, which was traditionally a loosely bound collection of pages

85. Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 29.

with a message designed to help shape public opinion as the Reformation began.

During the first half of the 16th century, the pamphlet was used primarily as a conversion tool, meant to inspire people to leave the Catholic faith and join the

Protestant revolution. Towards that purpose, pamphlets often included vernacular

translations of poems and Bible verses and prayers; also, they were often published

absent a profit motive, meaning the author of these works circulated them without

seeking payment.86

The pamphlet also became a tool for literacy, as author Alexander Monro suggests:

Luther’s pamphlets were styled for a popular readership, many of whom were
probably only partially literate and who might consult a pamphlet with family
and friends, or who simply read their printed matter more slowly than others,
sometimes skipping difficult words. Literacy, after all, is not a zero-sum game
and printing did hasten standardization, making letters and words far easier for
the layman to read. Renaissance printing had already made books and
pamphlets more commonplace; the reformers acted as a further spur. As books
became part and parcel of the urban landscape, their content (and their script)
inevitably grew more familiar.87

The pamphlet evolved to serve many purposes, but during the Reformation, literacy was a key component of conversion, and the pamphlet certainly gave the European community a tool for understanding.

During the 18th century, easily the most famous of all pamphlets was a call for

revolution printed by Robert Bell in Philadelphia. The publication was called Common

Sense and the author was an American revolutionary named Thomas Paine.

86. "1911 Encyclopædia Britannica/Pamphlets." 1911 Encyclopædia Britannica/Pamphlets - Wikisource, the Free Online Library. Accessed March 10, 2017. https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Pamphlets.
87. Monro, Alexander. The Paper Trail: An Unexpected History of a Revolutionary Invention. New York: Alfred A. Knopf, 2016. 245.

Common Sense was the equivalent of a modern-day international best-seller. It
caused an immediate sensation in Philadelphia, and was shortly reprinted in
cities up and down the eastern seaboard. It was published in France, Germany,
and even in England (where it went through five editions in London as well as
appearing in Edinburgh and Newcastle). By deciding to publish Common
Sense Bell gave the independence movement a new revolutionary voice. He
also launched the career of a man who would turn out to be one of the most
prolific and persuasive writers of the late eighteenth century: Thomas Paine.88

Common Sense’s greatest achievement was the ability to shape public opinion and to

sway support in favor of revolution. It was literary events like this one that encouraged

the architects of the U.S. Constitution to define the freedoms of spoken and written

speech included in the First Amendment. And because Common Sense was a catalyst

for the call for revolution, many Americans found faith in the value of the printed

word and developed a natural thirst for newsprint.

‘Hack’ Writers of Grub Street

The pamphlet movement also defined the value of the written word as a commodity, and a new class of writer was born: “hack” writers. “Hacks” were a class of “writers for hire” who congregated in the Grub Street district in central London. The origins of the “hack writer” are relatively simple. The word “hack” is derived from the word “hackney,” a term for a horse that anyone could hire and ride. When these “writers for hire” began appearing in London, the more

established authors began referring to them as “hacks” because anyone could pay them

to write anything. Because of the proximity to the London publishing houses, these

poor aspiring writers gathered along Milton Street in the Moorfields district, which

88. Paine, Thomas, and Edward Larkin. Common Sense. Peterborough, Ont.: Broadview Press, 2004.

was also a haven for crime, poverty and prostitution.89 Because these writers were

working without patrons, they were left to make a living writing for anyone willing to pay them. This was a clear break from the culture of patronage that had existed before.90

In his chapter in The Cambridge History of the Book in Britain, author Graham

Parry explained the role of arts patronage:

Patronage was a significant condition of publication in Elizabethan and early Stuart times. A patron’s name gave assurance that the book was a responsible
work, accountable to a known figure in public life. The usual procedure
involved the author seeking permission from a patron to offer the dedication of
a particular work; acceptance implied approval of the subject of the book, and
usually meant that the patron was willing to reward the author in some way,
usually financial, as an acknowledgement for the honour implied by the
dedication. Quite possibly, if the book proved contentious, the patron would
offer some kind of protection to the author. By attracting dedications of certain
kinds of work, a patron could demonstrate where his interests were most
engaged in matters of religion, history, poetry or other aspects of learning.91

But Parry notes that Queen Elizabeth, herself, did not see patronage as a royal concern

and historically made little effort to fund English writers. Instead, this was left to the

nobles. The opposite was true in France and Italy.

During Elizabeth’s reign the court as a whole effectively failed to act as a centre of patronage, and it fell to a few noblemen with an interest in letters to
act as patrons in a consciously discriminating way, aware that in France or
Italy, or indeed in ancient Rome (to which the whole patronage system looked
back), intelligent and generous encouragement of authors had raised a noble
and enduring literature. The interrelated families of the Earl of Leicester, Sir
Philip Sidney and the Earl of Pembroke performed the functions of patronage
most productively, patriotically intent as they were on fostering the growth of
humane letters in England. The other figure in this period who stands out for
his discriminating patronage is William Cecil, Lord Burghley.92

89. Treglown, Jeremy, and Bridget Bennett. Grub Street and the Ivory Tower: Literary Journalism and Literary Scholarship from Fielding to the Internet. Oxford: Clarendon, 1998. 1.
90. Ibid., 2.
91. McKitterick, David. The Cambridge History of the Book in Britain. Cambridge: Cambridge University Press, 2014. 174.
92. Ibid.

Without patrons, aspiring English authors were left to fend for themselves, looking instead to the economic forces of the London publishing market for a living, and this fact transformed writing from a pure art form into a commodity.

From this English patronage vacuum, a fertile publishing web formed, one that included printers, writers and booksellers, all of whom fell under the authority of Stationers’ Hall, a quasi-governmental authority that required authors, printers and booksellers to register all book titles.

For Stationers’ Hall is the custom-house of British literature, empowered by parliamentary wisdom to levy an impost upon knowledge, and the produce of
the brain, for the sole and exclusive benefit of its owners. Authors must pay
that “Stationers” may dine. Visiting the wonderful Hall, you may see the two
pass together through the high iron gates, and go up to their respective doors:
he who struts up to the front entrance of honour in glossy Sunday clothes, stout
and rubicund, as becoming the member of a worshipful Company, and he who
turns the other side, thin and haggard, perhaps nervously fingering a piece of
paper. The author, meekly entering the side-door of Stationers Hall, is going to
“register” a work he has written. It is very strange, but it is true, that the laws
of this country do not protect a man who produces a book, that may possibly
be the delight of generations, in the same way as the other man who produces a
clothes-brush, or a wheelbarrow. If the clothes brush, or the wheelbarrow, is
stolen from its lawful producer and owner, the individual committing the theft
is called a thief, and sent to prison; but if the contents of the book are stolen,
the individual committing the theft is, except under certain conditions, not
called a thief, and not sent to prison. Legislative wisdom has made it an
absolute necessity for an author, before he can call a book which he himself
has written and produced his own property, to have it “registered,” and
legislative wisdom has further provided that the registration must take place,
not at one or other of the public offices devoted to like purposes, where the
work might be done at little or no expense, but at the ancient Hall of the
Company of Stationers.93

By 1650, the business model had been set. Booksellers would commission authors and

printers to write and publish books, which would be registered in the Stationers’ Hall

and copyrighted. Before this system, the book model was for the author to approach a

93. "Entered at Stationers' Hall". A Sketch of the History and Privileges of the Company of Stationers. With Notes, Etc. London, 1871.

wealthy member of the English nobility for a letter of endorsement (which would

affirm that the book topic was tasteful) and the funding to produce the work.94 But this

model was fading and the new business model was pure capitalism: the writer

produced the work; the printer packaged the work; the bookseller distributed the work.

Stationers’ Hall merely kept the written work from offending the monarchy and

catalogued all the titles to minimize plagiarism and piracy; it also used its authority to

fleece the whole of the publishing community.95

Because the booksellers and printers occupied the neighborhoods around St.

Paul’s Cathedral, populating Fleet Street and other avenues, the “hack writers” also

moved there. Given its proximity, Grub Street (now Milton Street) became a magnet

for aspiring writers—poets, playwrights, pamphleteers—who ultimately inspired a

culture of journalism as well.

Authors there, including Oliver Goldsmith, Andrew Marvell, Anthony Trollope, Ned Ward, John Wolcot and many others, launched literary careers from London’s inner city.96 Of all of these, Goldsmith—an Irish playwright, poet and novelist—was probably the most famous, having written a series of successful pieces

of literature including The Vicar of Wakefield, The Good-Natur’d Man and The

Traveller.

Historian Norma Clarke writes about him:

94. McKitterick, David. The Cambridge History of the Book in Britain. Cambridge: Cambridge University Press, 2014.
95. Fine, Richard. James M. Cain and the American Authors' Authority. Austin: University of Texas Press, 1992.
96. Clarke, Norma. Brothers of the Quill: Oliver Goldsmith in Grub Street. Cambridge, MA: Harvard UP, 2016.

Goldsmith was among the first generation of ‘writers by profession’ who were
able to look to the reading public and booksellers for financial support rather
than to aristocratic patrons. With success came social status as well as the
money that secured his wants; his name was recognized in the highest circles
of the land, and he mingled with those who had never known what it was to
wonder where the next meal was coming from.97

To pay their bills and feed their families, these writers often took contract work and

were paid by the word. It also became commonplace for these writers to pool efforts

and create dedicated publications that addressed social and political issues.

Newspapers

Newspapers are really just a byproduct of the pamphlet movement. Pamphlets were circulated to change minds, but as the Reformation unfolded, the European economy began evolving, a middle class that had never existed before began to emerge, and this triggered a new age of commerce. The newspaper represented a profitable form of the pamphlet.

The burden of the publication of the pamphlet rested solely upon the author,
and he could not, and did not, look for reimbursement by sales. Its circulation
was correspondingly limited, and it was, moreover, necessarily accidental, as it
was without organized and regular methods of reaching the hands of those to
whom it was addressed. These methods the newspapers at once supplied, and
the amalgamation of the political pamphlet and the newspaper became speedily
complete, adding to the importance of each.98

So, while the pamphlet was a free-floating publication searching for an audience, the newspaper had a much more structured process: it published on a schedule, it had

a set press run, the content stuck to common themes, it spoke to a specific audience, and the publisher sought payment for the work.99

97. Ibid.
98. North, S. N. D. History and Present Condition of the Newspaper and Periodical Press of the United States with a Catalogue of the Publications of the Census Year. Washington: Government Printing Office, 1884.

The first European newspaper appeared in the city of Venice in 1562 and it

was a monthly publication called a “gazette.” The word “gazette” draws its roots from

the “gazetta,” which was the currency of the citystate and, as it happens, the cost of the

paper. At the time, Venice was a major trading port and one of the leading commerce

centers of Europe; as merchant ships moved around the Mediterranean and up along

Europe’s western coastline, the Venice gazettes, often came along with the crew, only

later to be circulated by happenstance among other ports of call. In doing so, the

gazettes found a new reading audience who saw value in the stories, specifically, the

ones about trade, surplus and finance. It didn’t take long for similar newspapers to

spring up along these trading routes.100

Then as now, the core purpose of the newspaper is to share information that is

of some value to the readership. As the newspaper evolved, the design and content of

these publications became more complex and more sophisticated. Terry Eagleton

wrote that he thought printing was a “disembodiment of thought,” and the newspaper

was a natural offshoot of that idea; newspapers, by their very design, profit from packaged information, selling thought as a form of commodity.

During the next five centuries, newspapers elevated in cultural value and were

often attached to major world events including the Reformation, the American

Revolution, the U.S. Civil War, and the First and Second World Wars. People have a

natural curiosity for things that exist beyond the horizon, and newspapers had the ability to deliver that information.

99. Ibid.
100. North, S. N. D. History and Present Condition of the Newspaper and Periodical Press of the United States with a Catalogue of the Publications of the Census Year. Washington: Government Printing Office, 1884.

In 1622, the first English newspaper was published: Butter's Weekly Newes, edited by Nathaniel Butter, whose pages covered issues related to trade and commerce from across Europe. All Butter actually did was take the gossip that commonly circulated around the local pubs and post it in published form, but the model worked. Because the publication was a piece of paper

folded into quarters, it resembled a book, and a fledgling newspaper form called the

“newsbook” became the standard, at least for the first decade. Given the times,

Butter’s publication became a resource that confirmed gossip and other rumors

emanating from the English and European royal courts. Because Butter printed on a

regular schedule and numbered his publications, Weekly Newes is recognized as the

first English newspaper. In 1870, historian Cucheval-Clarigny described the Weekly

Newes:

The Weekly Newes, then, was a real newspaper in the sense which we attach to
the word in the present day. This first born of the English Press was far from
possessing the ample portions of the thorough newspaper. A single number of
the Times contains more matter than the Weekly Newes gave in a year. It was a
small quarto sheet, printed upon very coarse paper, which contained side by
side with one another, and without an attempt at arrangement, any important or
remarkable events which had happened upon the continent; a victory of the
Count of Mansfeldt in Germany, a case of sacrilege at Boulogne, an
assassination or a poisoning at Venice, a great fire at Paris. It never made the
least allusion to what was passing in England, and the tidings from the
continent were mere matter of simple recital, without the slightest comment or
reflection. In that respect, the Weekly Newes was no different from the fugitive
prints that had preceded it; but it was for the time one of those great novelties
that attract attention far and wide.101

101. Andrews, Alexander, ed. Newspaper Press. London: n.p., 1870.
Because of Butter’s success with the publication, the Weekly Newes was followed by a

series of other offerings including the more prominent The Public Intelligencer in

1663 and The London Gazette in 1665. These newspapers grew and expanded and

were ultimately transformed into broadsheets, one of the more popular design forms in the modern press.102

It didn’t take long for journalists to rise in stature and one of the first celebrity

journalists was Daniel Defoe, the English author and novelist who ultimately wrote

Robinson Crusoe. In 1704, he wrote the first example of “disaster journalism” when

he published The Storm, which collected his observations of a storm that devastated the

coastline of southern England. The reporting and research he conducted during the

aftermath of the storm created a standard he’d repeat again and again in his news

writing and today he is recognized as the “father of modern journalism,” at least in the

United Kingdom.103

His work also set the stage for other forms of the craft of journalism.

In 1695, a lapse in the Licensing Act lifted many of the tight controls over the

printing and publishing community in London, and this new freedom inspired a whole

generation of news writers.

New businesses started up, new ventures like periodicals that invited letters
from readers, and new relations between publishers, pamphleteers, journalists,
poets and essayists became the stuff of printed matter. The same stories were
read, told and repeated; they circulated through coffee houses and taverns. By
the mid-century Londoners, passionate about news and information of every

kind, were sustaining no fewer than six daily newspapers, as well as six that
came out three times a week and a further six weeklies.104

102. Leonard-Stuart, Charles, and George J. Hagar. People's Cyclopedia. New York: Syndicate, 1914.
103. Kerrane, Kevin, and Ben Yagoda. The Art of Fact: A Historical Anthology of Literary Journalism. New York: Simon and Schuster, 1998. Introduction.

Among the more influential was a newspaper called The Spectator, which

began circulating around London in 1711. The purpose of the publication’s writers,

Joseph Addison and Richard Steele, was to help shape public morality as the modern

world took shape in the age of Enlightenment.105 Because nothing quite like it had come before, The Spectator was born from inspiration and written in a way very different from modern newspapers. To tell their stories, Addison and Steele created amalgams to represent the different classes and then drafted allegorical stories illustrating the troubles affecting those classes.

The characters Addison and Steele use in The Spectator are types: Sir Roger de
Coverley is a baronet and hereditary landowner, Sir Andrew Freeport a
merchant, Captain Sentry a retired soldier, and Will Honeycomb a gentleman
of fashion with independent means (‘The Spectator’s Club’). These interact
with each other and the writers entertainingly narrate stories about them,
especially Sir Roger. The habits, manners, and adventures of these characters
were emblematic of the upper and middle classes at large.106

The paper had a circulation of around 3,000, but Addison was convinced it circulated

freely around London, reaching a readership of 60,000.107 The work was so well

received it transformed literary traditions throughout England.108

104. Clarke, Norma. Brothers of the Quill: Oliver Goldsmith in Grub Street. Cambridge, MA: Harvard UP, 2016.
105. Eisenstein, Elizabeth L. Printing as Divine Art: Celebrating Western Technology in the Age of the Hand Press. Oberlin, OH: Oberlin College, 1996. 98-152.
106. Cavill, Paul, and Heather Ward. The Christian Tradition in English Literature: Poetry, Plays, and Shorter Prose. Grand Rapids, MI: Zondervan, 2007. 211.
107. Ellis, Sue. Applied Linguistics and Primary School Teaching. Cambridge: Cambridge UP, 2014. 122.
108. Ibid. 113.
The first newspaper published in the Americas was Publick

Occurrences Both Forreign and Domestick, which appeared in Boston on September

25, 1690. It was expected to publish monthly but was shut down immediately by the

English government. British authorities closed the newspaper because its editor,

Benjamin Harris, failed to get a commercial license for the operation. Harris saw the

process of applying for a license as an affront to his ideas of freedom.109

Benjamin Harris cursed his fate and those who, in his view, had so arbitrarily
brought it upon him. The historian Louis Solomon, speaking for the minority,
takes his side. Harris “stands out,” Solomon believes, “as the first in a long list
of ornery, non-conforming, trouble-making newspapermen who have insisted
on being free despite the consequences. Winners or losers, they are the pride of
American journalism.”110

Dejected, Harris left the American colonies and returned to London to start another

newspaper there. In his absence, Bostonians had to wait another 14 years for the next

newspaper to come along: it was called The Boston News-Letter, and it lasted seven

decades; it also set the modern standards for future American publications.111

New York City’s first printer was William Bradford and he published the

city’s first newspaper: the New York Gazette, which was a weekly that ran from 1725

to 1744.112 It was also from this printing office that the next generation of printers

would emerge. His son Andrew Bradford worked as an apprentice for many years

before relocating to Philadelphia to start his own press operations there. Andrew and

his wife Elizabeth Bradford also launched that city’s first newspaper, The American

Weekly Mercury, in 1719.113

109. Burns, Eric. Infamous Scribblers: The Founding Fathers and the Rowdy Beginnings of American Journalism. New York: PublicAffairs, 2007. 28-34.
110. Ibid., 33.
111. Ibid., 28-34.
112. Hudson, Frederic. Journalism in the United States, from 1690 to 1872. New York: Harper, 1873.

New York City printer John Peter Zenger also worked for William Bradford

first as an apprentice and later as a partner before striking out on his own to launch the

New York Weekly Journal. As its publisher, Zenger took exception to New York Governor William Cosby and his successor Lieutenant

Governor George Clarke and it was through this newspaper that Zenger objected to the

inner workings of the New York government. In 1734, Zenger was arrested and

charged with seditious libel for his writings and spent nine months in jail awaiting

trial for what would be a landmark case in American court history. Zenger was

ultimately acquitted and returned to his printing operation where he continued to

publish the New York Weekly Journal until his death in 1746.

It was during this same point in U.S. history that a series of newspapers

launched throughout New England and the Mid-Atlantic region. In Boston, a printer named James Franklin discontinued his newspaper, The New-England Courant, in 1726 and relocated his family to Newport, Rhode Island, to launch the Rhode-Island Gazette; it was in his Boston newsroom that he had trained his younger brother, Benjamin, in the craft of printing. Once he'd finished his apprenticeship, Ben Franklin moved first to New

York hoping to work for William Bradford before moving south to Philadelphia. Once

there, Franklin worked for Bradford’s son, Andrew, for a time. Franklin later started

his own publishing house, one that ultimately included a newspaper—The

Pennsylvania Gazette—an annual called Poor Richard's Almanack, and several pamphlets. That was 1729.114

113. Ibid.

In 1733, Ben Franklin returned briefly to Newport to visit with his brother,

whose health was failing.115

Benjamin Franklin, who was becoming established as a printer in Philadelphia, visited Newport in the autumn and had a "cordial and affectionate" meeting
with his brother. “He was fast declining in his Health,” Benjamin wrote in his
memoirs, “and requested of me that in case of his Death which he apprehended
not far distant, I would take home his Son, then but 10 Years of Age, and bring
him up to the Printing Business.”116

James Franklin died in 1735 and, for a time, his wife Ann and her daughters operated

the Rhode-Island Gazette. In 1758, her son, James Franklin Jr., returned from Philadelphia after finishing an apprenticeship with his uncle Ben Franklin. And, following

in his father’s footsteps, James founded the Newport Mercury, which has through the

years passed into and out of operation. The Newport Mercury is in print today, but in

the form of an “alternative weekly,” which is written for a younger, more urban

audience.117

Back in Philadelphia, Benjamin Franklin used his print operations to elevate

his status throughout the city, rising to the level of folk hero. He did so by guile,

through the power of his printing operations, and by shameless self-promotion. Even

his enemies were stunned by his abilities:

Even John Adams, who disliked Franklin, grudgingly acknowledged his fame:

His name was familiar to kings, courtiers, nobility, clergy and philosophes as
well as to the plebeians to such a degree there was scarcely a peasant or a
citizen, a valet-de-chambre or a footman, a lady’s chambermaid or a scullion in
a kitchen who was not familiar with it and who did not consider him as a friend
to humankind.

Adams attributed this unusual celebrity to the many useful contacts Franklin
had made as a printer.118

114. More, Paul Elmer. Benjamin Franklin. N.p.: Nabu, 2010.
115. Smith, Jeffery Alan. Printers and Press Freedom: The Ideology of Early American Journalism. New York: Oxford University Press, 1990. 106-110.
116. Ibid. 107.
117. Ibid. 107.

After the Revolution, print operations spread throughout the United States and became

sources of information. In many cases, the local newspaper helped define the

dynamic of the communities they served. Still, the newspapers of this period were not as they are today; many of the traditions and rituals of news gathering and writing were not yet fully formed.

Historian Janice Hume offers this observation:

Yet there would be no “reporters” and no “news” in the modern sense for
another half century. Newspapers printed letters, information clipped from
other papers, and government documents. They were, in large part, “a
miscellany of fact and fancy about strangers far from home,” but by the mid
decades of the nineteenth century, they became an “intimate part of citizenship
and politics,” [Michael] Schudson noted.119

That would soon change.

One of the major turning points in American journalism came in 1836, when a

New York City prostitute named Helen Jewett was found murdered in her bed. As the

news spread across the city, James Gordon Bennett Sr., editor of The New York

Herald, did something no other journalist had ever done before: he went to the crime scene and walked through it, speaking to witnesses and investigators. Because

no one had ever done this before, law enforcement officers didn’t think to stop him (or

for that matter, anyone else) from walking through the building to look at the carnage.

When Bennett was done, he wrote the story offering graphic details of the crime

scene, quotes from investigators and other information he gleaned from the research.

Until that point, journalists didn’t actually do any reporting—they merely reacted to

the events of the day—and Bennett created the craft of reporting.120

118. Eisenstein, Elizabeth L. Printing as Divine Art: Celebrating Western Technology in the Age of the Hand Press. Oberlin, OH: Oberlin College, 1996. 141.
119. Hume, Janice. Popular Media and the American Revolution: Shaping Collective Memory. New York: Routledge, 2014. 3.

The freshness of his reporting made the story that much more appealing to the

audience and the circulation of The New York Herald exploded; the Helen Jewett story

also prompted the other New York City newspapers to follow suit. Although it seems

academic at this point, what Bennett did was discover the power of eyewitness

reporting and how the reporter can become the conduit tying the readership closer to

the event. Even today, the audience lives through the observations of journalists and

this is the element that gives the newspaper its value.

In the succeeding decades, newspapers continued to evolve.

During the U.S. Civil War, the various newspapers took sides in the conflict. In New

York City, the two leading papers—The New York Herald and The New York

Tribune—defined the schism, taking opposite sides in the war. The Herald, which was

edited by James Gordon Bennett Sr., favored allowing the southern states to leave the

union, and The Tribune, which was edited by Horace Greeley, supported Abraham

Lincoln’s call to war. In doing so, these papers became recognized as “partisan

newspapers,” or papers that sponsored the political ideologies of the respective

parties.121 This trend of partisan newspapering continued well into the 20th century,

and vestiges of it are still apparent even today in the names of some of the older

publications; examples include The Waterbury Republican-American and The

Rochester Democrat and Chronicle.

120. Cohen, Patricia Cline. The Murder of Helen Jewett: The Life and Death of a Prostitute in Nineteenth-Century New York. New York: Vintage Books, 1999.

Near the close of the 19th century, a group of journalists began writing at

length about corporate, governmental, political and theological corruption with the

purpose of defending the rights of the poor. One of them, a journalist named Ida

Tarbell, focused her attention on the oil company that put her family business into

bankruptcy and she began to gather information about the company and its owner John

D. Rockefeller. In 1904, Tarbell’s research, reporting and writing were published into

a 19-part series on the Standard Oil company and the monopoly Rockefeller had over

the oil industry; that story appeared in the pages of McClure’s Magazine and

ultimately led to the breakup of the company.122

Her work, along with the work of Nellie Bly, Jack Reed, Upton Sinclair,

Lincoln Steffens and many others, inspired President Theodore Roosevelt to label

these investigative journalists as “muckrakers,” or people who stir up trouble.

Writer Anya Schiffrin summed it up this way in the introduction to her book about

investigative journalism:

…exploring the historical roots of contemporary journalism shows that battles that make the headlines today—against corruption, human rights abuses, and
corporate exploitation—are subjects that journalists have been exposing for
more than a century. The fact that journalists have been calling attention to

some of the same problems for more than a hundred years might make one
despondent, but it shouldn’t: their writing had significant impact even when it
was not as fully effective as could be hoped. In many cases, if not most, the
worst abuses were halted. Journalism succeeded in mobilizing public pressure
or at least in engaging the elites who had decision-making powers so that they
could take (or not take) necessary measures. That the battles are still going on
should simply remind us that new abuses, new forms of corruption, are always
emerging, providing new opportunities and new responsibilities for the
media.123

121. Sheppard, Si. The Partisan Press: A History of Media Bias in the United States. Jefferson, NC: McFarland & Company, 2007.
122. King, Elliot, and Jane L. Chapman. Key Readings in Journalism. New York, NY: Routledge, 2012. 265-279.

Investigative journalism ultimately became the high-water mark for the news industry

and newspapers made a point of celebrating the reporters who broke ground, revealing

corruption in all its forms. The culture of the muckraker faded during the Great

Depression and during World War II, but began reinventing itself during the Vietnam

Era… until finally, there was a major breakthrough during the Nixon years.

In 1972, a pair of young metro reporters working at The Washington Post

would score the biggest investigative news story in American history, in a series of

stories we now know as "Watergate." The work started with a petty burglary at the Watergate complex, which led reporters Bob Woodward and Carl Bernstein down a long, tangled path that ultimately revealed the financial and legal missteps of President

resignation in 1974.124

In his autobiography, Ben Bradlee, executive editor of The Washington Post,

reflected upon their collective work on the Watergate investigation:

Watergate marked the final passage of journalists into the best seats of the
establishment. This trip had begun long before when men such as Walter
Lippmann and Arthur Krock separated themselves from the rough-and-tumble,

hard-drinking journalists made famous in the 1920s in Hecht and MacArthur’s
Front Page, and emerged in the 1930s as leaders of a new tribe of intelligent,
educated, eminently presentable newspaper people, mostly male. In their wake
came the Scotty Restons, the Alsop brothers, Marquis Childs, Ed Lahey,
Roscoe Drummond, and finally the pioneers of television like Murrow,
Huntley, Brinkley, and Cronkite, who mixed easily with leaders of government
and business. If they all weren’t making Wall Street money yet, they were well
on their way to respectability. Watergate was the last leg of this trip, bestowing
the final accolade of establishmentarianism, or the semblance of it, on the daily
press.125

123. Schiffrin, Anya. Global Muckraking: 100 Years of Investigative Journalism from around the World. New York: Perseus Distribution Services, 2014. 1-14.
124. Bernstein, Carl, and Bob Woodward. All the President's Men. New York: Simon & Schuster Paperbacks, 2014.

Although he doesn’t quite say it, he does hint at the fact that journalism, as a

byproduct of the Watergate scandal, evolved from a blue-collar to a white-collar profession; soon a generation of aspiring radicals dropped their political science degrees and traded away their dreams of law school for the opportunity to study and

ultimately perform the craft of journalism. Now, to be fair, enrollment in journalism

programs was rising aggressively during the 1960s, but after the Watergate

investigation and the release of All the President’s Men, a film that cast Robert

Redford and Dustin Hoffman as Bob Woodward and Carl Bernstein, the nation’s

romantic fascination with crusading investigative journalism was assured.

More and more college students majored in journalism and mass communications, with enrollment in some programs increasing threefold after
the 1974 Watergate flood of news stories. Investigative Reporters and Editors
(IRE) set standards and recognized merit.126

But a corporate malaise had glazed over the news industry. After World War II, many

of the family-owned newspapers were purchased by corporations, and soon the

culture of the firebrand publisher and the crusading city editor gave way to the white-

shoe lawyers and penny-obsessed accountants. In the 1980s, most of the evening daily

newspapers disappeared and, in the 1990s, most cities found themselves with only one

daily newspaper.

125. Bradlee, Ben. A Good Life: Newspapering and Other Adventures. London: Touchstone, 1997.
126. Winfield, Betty Houchin, ed. Journalism 1908: Birth of a Profession. Columbia: University of Missouri Press, 2008.

Along the way, newspapering had become incorporated, institutionalized, dull

and formulaic; by 2000, most newspaper content had been reduced to three primary

forms of writing: breaking news, general news, and feature writing. In each case, time

is the element that defines the freshness of the information. Breaking news is about the

immediacy of the event, or the factory fire as it's happening; general news is the

detailed summary after the smoke has cleared; and the feature story is the look back

hoping to make sense of it all. Investigative journalism is a subset of the feature form and remains the high-water mark for the newspaper industry.

Before we move forward, we must look at one more key moment in the

foundation of journalism. In 1908, the Missouri School of Journalism was opened by

Walter Williams at a pivotal time in the development of news gathering and reporting.

Williams was part of a group of journalists who wanted to elevate the practice of

journalism from that of a trade to something that was part of the “learned professions.”

To do this, journalism would need a school and a curriculum and standards of practice.

Williams declared in the years after he opened the school that journalism was to serve

the public interest and that journalists were “public servants.”127

There is no surer test of the earning capacity of a newspaper than the measure
of its public service. There is more in journalism than bread and butter—
necessary as that is—or than dividends upon shares of stock. Journalism has a
nobler mission. It is preeminently the profession of public service. The
newspaper small or large is the greatest public utility institution. While all
other public utility institutions have been regulated by law, the newspaper is, in

a special sense, its own regulator. It voices, even when it does not create, the
public opinion to which itself must answer. Peculiar responsibility, therefore,
rests upon journalism to recognize its mission as a public servant.128

127. Winfield, Betty Houchin. Journalism, 1908: Birth of a Profession. Columbia: University of Missouri Press, 2008. 163.

Key to his idea of “public service” is a sense of respect for the readership. Williams

believed that journalism must be transparent, meaning that source materials for news

must be cited as part of the storytelling. Williams’ ideas about journalism challenged

the standards of for-profit news operations, which were often influenced by advertisers

and political brokers. Williams was appealing to a higher power. He wanted his

students to be well-trained, ethical and intuitive news gatherers and reporters. His wife,

Sara Lockwood Williams, later wrote about their goals:

The aim from the first has been to give the student high ideals and standards of
ethics, and at the same time to put him in the “newspaper office” or laboratory
to prove for himself that these standards may be successfully applied. The
School paid heed to the old convention that the best place to study newspaper
work is in the newspaper office. It recognized, however, that the drawback
there was that the newspaper makers have their hands full without stopping to
explain what they are doing and why.129

This was a stark turn in the ethical boundaries of journalism. Before the Missouri

model, journalists could be “partisans, spectators, or reporters,” and there really

weren’t many boundaries or codes.130 The Missouri School of Journalism established

those codes, which, over the course of the next hundred years, evolved into this: journalists must be fair, evenhanded, unbiased, well researched, and prepared to tell

news stories without prejudice. The Columbia Graduate School of Journalism opened

just three years after Missouri, and it too clung aggressively to the idea of journalism

as a “public service.” James Carey, a professor at Columbia University, suggested that

“journalism owes much to its cultural history,” and had this to say about its public

value:131

This is the point that I really want to hold on to—to enter any medium of
communication is to enter a world of often predefined, but negotiable,
identities, and, at the same time, in which a positive affirmation takes place.
When Hegel writes, “The modern world begins with reading the newspapers,”
in some sense, that says it all. To get up and to make the choice to say that I’m
going to enter the secular world, the political world in this case, the world of
being a citizen or a subject…. Now, these are political acts.132

128. Ibid. 88.
129. Ibid.
130. Ibid.

Carey believed that educating journalists allowed the schools to inject intellectual and

philosophical ideals into the curriculum and by association into the profession of

journalism.133 By the end of the 20th century, the standards for ethical journalism were established and institutionalized, at least for a time. And, of course, the newspaper, the pamphlet and the printed book were all tools in the development of global literacy.

131. Ibid.
132. Ibid.
133. Ibid.

The Birth of Narrative

Before we move on, we need to talk about narrative. Narrative is the

conversation that takes place between the author and the reader and, during the last six

centuries, the idea of narrative has matured to a point where some accepted norms

have been created. The purpose of narrative form is clarity. Narrative is about the

process of translating something that is known into something that can be understood

by an audience. Narrative is about story design and, in the world of books, the authority over that design belongs to the author. The author has a certain

responsibility to offer the fairest interpretation of experience, ideally, in a form that

offers a fidelity of thought. Strong, successful writing places the writer and the reader

in the same frame of understanding.

As the late (and profoundly missed) Roland Barthes remarked, narrative “is
simply there like life itself… international, transhistorical, transcultural.” Far
from being a problem, then, narrative might well be considered a solution to a
problem of general human concern, namely, the problem of how to translate
knowing into telling, the problem of fashioning human experience into a form
assimilable to structures of meaning that are generally human rather than
culture-specific. We may not be able fully to comprehend specific thought
patterns of another culture, but we have relatively less difficulty understanding
a story coming from another culture, however exotic that culture may appear to
us. As Barthes says, “narrative is translatable without fundamental damage,”
in a way that a lyric poem or philosophical discourse is not.134

Simply, narrative is the way the author designs the story and because criticism is a

component of authorship, there is an extensive amount of research and debate over

narrative theory, which works to define the author-reader dynamic, the meaning and

purpose of storytelling, and who (the author? the audience?) gets to define the

meaning of the published work.

As a byproduct of five centuries of authorship, several key formulas have

emerged as the blueprint for modern narrative. Again, key to the success of any

narrative is clarity. What does the author intend to convey? Is this a work of fact? Is

this a work of fiction?

On this point, theorist Hayden White wrote that works of fiction have freer

license to move towards complex abstraction. The fiction work of Italian writer Italo

Calvino, for example, tended to stray towards the surreal. The issue of fidelity,

however, remains: does the reader understand the truest form of the author’s meaning?

Is there an empathy of thought here? White says it’s important but not entirely

necessary.

134. White, Hayden. The Content of the Form. Baltimore, MD: Johns Hopkins University Press, 1987. 1.

Nonfiction is another matter. With historiography, fidelity to the facts is

important and White defines two forms of historical writing: the first is chronology,

the second is narrative. To him, early story design was about chronology. Chronicles

were recitations of facts in the order in which they occurred, and these facts might not have had any relevant order of meaning. To White, narrative is more complex because it offers analysis of the facts and stitches them together, forming a story.

While annals represent historical reality as if real events did not display the
form of story, the chronicler represents it as if real events appeared to human
consciousness in the form of unfinished stories. And the official wisdom has it
that however objective a historian might be in his reporting of events, however
judicious he has been in his assessment of evidence, however punctilious he
has been in his dating of res gestae, his account remains something less than a
proper history if he has failed to give to reality the form of a story. Where there is no
narrative, Croce said, there is no history. And Peter Gay, writing from a
perspective directly opposed to the relativism of Croce, puts it just as starkly:
“Historical narration without analysis is trivial, historical analysis without
narration is incomplete.” Gay’s formulation calls up the Kantian bias of the
demand for narration in historical representation, for it suggests, to paraphrase
Kant, that historical narratives without analysis are empty, while historical
analyses without narrative are blind. Thus we may ask, What kind of insight
does narrative give into the nature of real events? What kind of blindness with
respect to reality does narrative dispel?135

White goes on to explain that he doesn’t think that journalism is a form of narrative

because of a lack of analysis and secondary sources. Journalism, he says, is temporal

and he defines it as chronicle writing.

Journalists tell stories about “what happened” yesterday or yesteryear and


often explain what happened with greater or lesser adequacy, in the same way
that detectives of law may do. But the stories they tell should not be confused
with historical narratives—as theorists of historiography looking for an
analogue of historical discourse in the world of everyday affairs so often do—
because such stories typically lack the "secondary referentiality" of historical
narratives, the indirect reference to the “structure of temporality” that gives to
the events related in the story the aura of “historicality” (Geschichtlichkeit).
Without this particular secondary referent, the journalistic story, however
interesting, insightful, informative, and even explanatory it may be, remains
locked within the confines of the purview of the “chronicle.”136

On this point, I find myself in partial agreement with White. It is my belief that

most daily broadcast and print journalism are merely chronicle writing lacking in

context, but it is also my belief that long-form magazine journalism has evolved to

include context, analysis and the use of “secondary referentiality.” Journalism has

always been about fast, accurate reporting but not necessarily about storytelling. With the advent of long-form magazine-style journalism, there has been a push towards the aesthetic, but the digital age has placed strong emphasis on speed and, in the process,

contemporary journalism has split into two very clear factions: one that is chronicle

and another that is narrative.

In a world of “tweeting” and texted breaking news briefs, we have become

engaged with a tapestry of chronology absent any context. Cable news, for example, is

really just the patter of chronological information, constant and flowing over a 24-hour network. If one looks at CNN for several

hours, one will notice that the content tends to be repetitive but never self-reflective.

Instead, CNN presents a chronicle of facts that can, in series, actually appear unrelated

to each other. Reviewing a Facebook or Twitter feed offers the same impressions: a

chronology of events, related or not, appears in series, and it's up to the audience to

glean some sort of meaning from it all, or not. Serialized social media content has, at

this point in history, emerged as a dominant form of data dissemination, which is

sorely lacking in context and narrative.


135. Ibid. 5.
136. Ibid. 172.
Now, as these chronicle media proliferate, print media have been evolving,

maturing. An evolution of long-form storytelling has introduced a comprehensive

body of news narrative that has its own rules and aesthetic values, which has elevated

this written form to a literary level. Simply put, during the last century, experiments in

narrative storytelling have produced a canon of literary journalism. It is now possible

for journalism to be artful, creative and beautiful and still be relevant, contextual and

historical.

There were signs of this emerging aesthetic during the muckraking era (1890

to 1920) at the end of the 19th and the beginning of the 20th century, when a cadre of

magazine-style writers began producing investigative pieces about government and

institutional corruption. This body of work was insightful, creative and dynamic and

might have sustained itself if the two world wars hadn’t interrupted, hijacking the

national identity. The vestiges of literary journalism wouldn't reemerge until

after World War II.

Nonfiction Narrative Structure

In the latter half of the 20th century, The Wall Street Journal newspaper

emerged as a powerful print news operation in the United States in part because it

reported primarily on global business enterprises but also because it had developed its

own narrative style. While I was studying for my master's at Columbia University, the faculty there introduced my peers and me to something they called the "Wall Street Journal Style."

Here’s a description from one of the textbooks:

The Wall Street Journal structure is based on a structure used by many journalists but especially by journalists from the Wall Street Journal. In this style the focus is first on the individual and then it moves to the larger issue at
stake. The story ends with a strong statement. For example, readers sometimes
find it hard to relate to statistics: 80% of workers of a certain company are
being retrenched. Readers can, however, relate to the plight of a certain man
who works for the company and how his retrenchment affects him and his
family profoundly. Therefore the news report first focuses on the plight of a
specific family and then focuses on the general plight of all involved. The
news report will conclude on a strong note—again it focuses on the family,
who are sure to lose their home and have to move to an informal settlement.137

The success in this narrative formula derives from the cantilever structure created by

identifying the national issue using an everyman figure to illustrate the condition. The

audience’s sympathies for the “everyman” are used to sustain interest in the national

issue and, while the identity of the "everyman" has some value, the larger national

issue makes the story relevant to a broader national audience. As a result, the “Wall

Street Journal Style” has become the model for most American magazine journalism.

During my years in the classroom, I've improved upon this model, crafting tools for

teaching undergraduates. As explained above, the method of the “Wall Street Journal

Style” was to take a national issue and “localize it” by finding a person who is affected

by this issue. In the classroom, I told my students that we use this person—this

“face”—as a vehicle (as an “everyman”) to elicit sympathy from the audience and, in

the process of telling this person’s story, the audience will learn about the overriding

national issue.

For example, on the issue of healthcare in the United States: to explain the

details of the Affordable Care Act and its troubles, the writer would find someone who

is attempting to use the system, and by telling their story, the writer explains the

complexities and the benefits of the program. That’s the overview.

137. Fourie, Pieter J. Media Studies: Content, Audiences and Production. Lansdowne: Juta, 2001. 359.
As to the actual form, the writer must seize upon a moment important to the

“face,” which illustrates one of the conflicts in the process. So, in the timeline of this

woman’s use of the ACA, the writer might seize upon her cancer diagnosis and open

the story with that. This lede must be designed to capture the reader’s imagination and

sense of sympathy and the moment should reflect the details and the trauma

experienced by the “face.” From here, the writer must attach the “face” to the issue.

Next, the writer must explain what the larger issue might be. After that, the writer

offers a thesis statement explaining that this “face” isn’t the only person suffering with

this problem and that there are many others out there who also have this trouble. From

here, the author moves into the history of the issue introducing sources, expert

comments and other “faces” before nearing the conclusion. In the conclusion, the

writer reintroduces the “face” and offers some sense of resolution.

Here is my list of steps:

§ Lede: Here you introduce "the face" of your topic
§ Sub-Lede: Here you build interest in "the face" and his relationship with the issue
§ Nut-Graph: Here you reveal what the larger issue is
§ Thesis Statement: Here you connect "the face" to a national/international audience
§ History: Here you explain what has transpired on the issue at hand
§ Anti-Hero: Here you introduce someone who thinks the issue is nothing of importance
§ Other Theories: Here you explain other theories
§ Conclusion: Here you bring "the face" back into the story to demonstrate some resolution
As you might note from this design, this narrative form is circular. The author opens

with a dramatic moment in the chronology of the event, explains the context of the

drama and then when the writer reaches the history portion of the story, a chronology

is offered before returning again to the “face.”

When journalists follow this design they are deviating from a chronicle form

and towards a narrative form. Because this narrative structure has been around for

some decades, many journalists have elevated this narrative form to a literary level.

Further, because of the potency of the history section, there is an opportunity for what

Hayden White defined as "secondary referentiality."

White wrote that by reshaping the chronicle into a narrative form, this

packaging gives the stories meaning.

To “emplot” a sequence of events and thereby transform what would otherwise


be only a chronicle of events into a story is to effect a mediation between
events and certain universally human “experiences of temporality.” And this
goes for fictional stories no less thant for historical stories. The meaning of
stories is given in their “emplotment.” By empotment, a sequence of events is
“configured” (“grasped together”) in such a way as to represent “symbolically”
what would otherwise be unutterable in language, namely, the ineluctably
“aporetic” nature of the human experience in time.138

When the journalist breaks from the chronicle form, he/she packages the story,

creating a complete and whole idea, which includes history, analysis and context.

Although this style of writing is called the “Wall Street Journal Style,” it has become

pervasive across the print media world and is commonly used by feature writers,

magazine journalists, book writers and so forth. Of course, a literary aesthetic soon

followed.

138. White, Hayden. The Content of the Form. Baltimore, MD: Johns Hopkins University Press, 1987. 172.

Literary Journalism

Culturally, there is a perception that fiction and journalism should be strictly divided, but in practice the opposite is true. In the world of writing,

authors have been dancing back and forth writing fiction and nonfiction nearly from

the very beginning of the literary movement. And just so we’re clear, literature is the

infusion of artfulness in writing; it is the aesthetic of the creatively written word, and

over the centuries many authors explored the dynamic of the artful story. English

writer Daniel Defoe (1660 to 1731) is a clear example of one of those authors. In the

succeeding years, he was followed by many other nonfiction authors including James

Agee, Truman Capote, Stephen Crane, Ernest Hemingway, Martha Gellhorn, Stephen

King, Jack London, Rosemary Mahoney, Norman Mailer, Joseph Mitchell, George

Orwell, Gay Talese, Walt Whitman and Tom Wolfe among many others.

Given this cohesiveness between fiction and nonfiction, it’s fairly expected

that there will be some crossover in writing styles. Hemingway, for example, wrote in

a spare prose that implied ideas rather than explained them. His economical style,

of course, was a derivative of his work in journalism and that ultimately became his

signature literary style in his novels. The same could be said for George Orwell and

Gay Talese; these writers had a light, accurate touch that defined the potency of their

literary and nonfiction storytelling styles. And yet, it was clear to the reader when

Orwell was writing fiction and when he was writing nonfiction.

It wasn’t until the late 1950s, at the dawn of the so-called “New Journalism”

movement, that a young, aspiring writer named Tom Wolfe smashed the model and

created his own form. The breakthrough essay was a story Wolfe was writing for

Esquire magazine about the youth and car culture emerging in southern California. To

write it, he flew to Los Angeles and walked through a hot-rod car show there;

afterward, he returned to his apartment in New York City, and with the notes splayed

out before him, couldn’t write a single word. His writer’s block was a combination of

ritual and circumstance: he understood the ritual language of newspaper journalism,

but the circumstances of his reportage had him reeling with new ideas and he was

uncertain how to write the thing because the language in his head didn’t strike him as

appropriate for the reading audience of Esquire. Stumped, he traded messages with an

editor who finally directed Wolfe to pack up his notes and messenger them across

town. In doing so, Wolfe drafted a letter of apology, which carried on for pages… as it

happens, 49 pages. When Wolfe delivered the letter and the notes, the editor reviewed

the memo, struck the salutation from the top and published the 49-page memo under

the banner headline: “The Kandy-Kolored Tangerine-Flake Streamline Baby.”139

Like the title of the story, Wolfe’s work of journalism was an assault on the

senses. In his writing, girls are shimmering and shaking in skintight hot pants and tank

tops as a rock ‘n’ roll band is playing “hully-gully” electric music that has the

teenagers quaking about the dance floor.

Inside, two things hit you. The first is a huge platform a good seven feet off the
ground with a hully-gully band—everything is electrified, the bass, the guitars,
the saxophones—and then behind the band, on the platform, about two
hundred kids are doing frantic dances called the hully-gully, the bird, and the
shampoo. As I said, it’s noontime. The dances the kids are doing are very
jerky. The boys and girls don’t touch, not even with their hands. They just
ricochet around. Then you notice that all the girls are dressed exactly alike.
They have bouffant hairdos—all of them—and slacks that are, well, skin-tight
does not get the idea across; it’s more the conformation than how tight the
slacks are. It’s as if some lecherous old tailor with a gluteus-maximus fixation
designed them, striation by striation.140

The work is seminal. First, Wolfe, either deliberately or coincidentally, fused the

traditions of nonfiction and literature together, creating (or reviving) a writing form

called “literary journalism,” which is nonfiction storytelling that employs the same

aesthetics found in fiction. When “Kandy-Kolored” appeared in Esquire, a generation

of literary journalists followed.

139. Tennis, Cary. "Tom Wolfe." Salon, 1 Feb. 2000. Web. 19 Apr. 2017.
140. Wolfe, Tom. The Kandy-Kolored Tangerine-Flake Streamline Baby. New York: Farrar, Straus and Giroux, 1965. 75.

Wolfe’s “Manifesto,” in his anthology The New Journalism (1973), advocated


the need for journalists to go beyond the limits of conventional reporting in
order to represent the turbulent events of 1960s America. He identified four
narrative devices borrowed from realistic fiction to chronicle contemporary
events: (1) dramatic scenes instead of historical summary, (2) complete
dialogue instead of occasional quotations, (3) multiple points of view instead
of the narrator’s perspective, and (4) close attention to status details. Use of
these literary techniques enabled journalists to provide psychological
depth to a degree not usually possible in newspaper reporting based solely on
facts. The voice of the New Journalist was avowedly subjective in opposition
to the objectivity expected from reporters since the beginning of the twentieth
century.141

141. Logan, Peter Melville, Olakunle George, Susan Hegeman, and Efrain Kristal. The Encyclopedia of the Novel. Malden, MA: Wiley-Blackwell, 2011. 458.

Norman Mailer wrote a series of nonfiction books in the literary form, including The Executioner's Song and The Armies of the Night. Sports writer

Hunter S. Thompson stormed onto the scene, writing Fear and Loathing in Las Vegas, which was published serially in the pages of Rolling Stone magazine. And Joan

Didion wrote Slouching Towards Bethlehem. In each case, these books addressed

serious nonfiction issues but these histories were written in a form initially dedicated

to the pages of literary fiction; journalism had found a narrative form. Each of these

works cast a long and lasting shadow across the culture of news gathering and writing,

but the standard remains a “true crime” book written by author Truman Capote called

In Cold Blood.

Published in 1965, In Cold Blood is about the brutal murders of the Clutters, a

small Midwestern family living in the farm country of western Kansas, in 1959. To write

the book, Capote had unprecedented access to the investigators, the

murder suspects, the courts, the homes and the people of Holcomb, Kansas.

The village of Holcomb stands on the high wheat plains of western Kansas, a
lonesome area that other Kansans call “out there.” Some seventy miles east of
the Colorado border, the countryside, with its hard blue skies and desert-clear
air, has an atmosphere that is rather more Far West than Middle West. The
local accent is barbed with a prairie twang, a ranch-hand nasalness, and the
men, many of them, wear narrow frontier trousers, Stetsons, and high-heeled
boots with pointed toes. The land is flat, and the views are awesomely
extensive; horses, herds of cattle, a white cluster of grain elevators rising as
gracefully as Greek temples are visible long before a traveler reaches them.142

142. Capote, Truman. In Cold Blood: A True Account of a Multiple Murder and Its Consequences. New York: Modern Library, 2013. 3.

Certainly his was not the language of The New York Times. Instead, this passage reads

like fiction and could possibly be confused with Capote's other celebrated piece of

writing, Breakfast at Tiffany’s, which is a work of fiction about a New York City

prostitute. When Capote published In Cold Blood, it became an immediate sensation

and the author went on to conduct a series of public readings and other celebrity

appearances for the rest of his life. The trappings of his fame, however, robbed him of

his interest in writing, and he went on to publish a handful of little-noticed books

before he died of liver disease at the age of 59 in 1984.

But his influence is lasting.

Like Wolfe and Thompson and Hemingway, Capote remains one of the great

literary journalists of his time. His work In Cold Blood continues to be well received.

And today, as a result, a new literary tradition—a catalog of long-form creative

nonfiction—continues to grow to a point where, fairly soon, there might very well be

an MFA program dedicated to Literary Journalism. At the very least, there is an

aesthetic standard for the craft of long-form nonfiction and that artfulness is deserving

of study and development.

Author William Zinsser, in his book On Writing Well, notes that during the

20th century, American readers shifted away from fiction simply because the

seriousness of the times demanded it. To make his argument, he looks at the Book-of-

the-Month Club, which was founded in 1926 as a resource for Americans seeking

great new fiction books. Zinsser explains that the books issued by the Book-of-the-

Month Club for the next two decades included the literary giants of the age: Somerset Maugham, Willa Cather, Virginia Woolf and John Steinbeck. But, with

World War II, the tastes of the club’s members shifted.

All of this changed with Pearl Harbor. World War II sent seven million
Americans overseas and opened their eyes to reality: to new places and issues
and events. After the war that trend was reinforced by the advent of television.
People who saw reality every evening in their living room lost patience with
the slower rhythms and glancing allusions of the novelist. Overnight, America
became a fact-minded nation. Since 1946 the Book-of-the-Month Club’s
members have predominantly demanded—and therefore received—
nonfiction.143

And he should know: Zinsser served as the executive editor of the Book-of-the-Month

Club from 1979 to 1987.

143. Zinsser, William. On Writing Well. N.p.: Harper Paperbacks, 2013.

Literacy

Literacy is the process of learning to read, and the influence of writing swept across Europe, dividing the people into two distinct classes: those who could read and those who could not. Literary theorist Terry Eagleton wrote extensively on

the idea of literacy and it’s not entirely clear that he thinks it’s a good idea. In the

modern age, Americans are of the mind that reading is important; in fact, some might

argue that it’s the most important responsibility of the education system. “Reading is

fundamental” was the slogan for and the name of RIF, an American non-profit that

promised to teach reading to anyone who wanted to learn it. Reading, they argue, is

the tool that allows anyone to learn anything. If one can translate the icons for letters

into words, one can form ideas and teach oneself new theories and skills. Author

Stephen King calls the relationship between writing and reading a form of telepathy;

the author can write something down in Portland, Maine in 1990 and the reader can

receive that message in San Diego, California in 2010.

In a chapter entitled “What Writing Is,” Stephen King explained his thinking:

Telepathy, of course. It’s amusing when you stop to think about it—for years
people have argued about whether or not such things exist, folks like J.B.
Rhine have busted their brains trying to create a valid testing process to isolate
it, and all the time it’s been right there, lying out in the open like Mr. Poe’s
Purloined Letter. All the arts depend upon telepathy to some degree, but I
believe that writing offers the purest distillation. Perhaps I’m prejudiced, but
even if I am we may as well stick with writing, since it’s what we came here to
think and talk about.144

He goes on to explain that while he’s writing the chapter in December 1997, the book

is scheduled for publication in 2000 and that his audience could find these words some

years after that. It’s this preservation of ideas that lends value to writing. It is possible

for a teacher to write something down and for the student to find it—hours, days,

weeks, years—later and learn from the chronicle of ideas. King isn’t talking about the

parapsychological idea of “telepathy,” but he is engaging the concept that ideas can be

trapped and transported over time and space. Writing affords the author the leisure of

letting his audience find him.

144 King, Stephen. On Writing: A Memoir of the Craft. New York: Scribner, 2000. 95.

But the exercise of writing has darker influences on the relationship between

man and communication. Writing, in some instances, is perceived as a split in human

behavior; Terry Eagleton writes about this at length, arguing that writing is a

disembodiment of thought.

My spoken words seem immediately present to my consciousness, and my
voice becomes their intimate, spontaneous medium. In writing, by contrast, my
meanings threaten to escape my control: I commit my thoughts to the
impersonal medium of print, and since a printed text has a durable, material
existence it can always be circulated, reproduced, cited, used in ways which I
did not foresee or intend. Writing seems to rob me of my being; it is a second-
hand mode of communication, a pallid, mechanical transcript of speech, and so
always at one remove from my consciousness.145

To Eagleton, the written expression is a thing beyond the reach of its own creator. We

write it, it exists as its own thing and, often, the sentences and ideas can grow and live

well beyond the life of the author. His worry is that without control over the thought,

the thought can become something else. He doesn’t say this, but the written statement

has the potential to redefine the reader’s perception of the writer, a fact that might

create in the writer a need to control the idea.

Marshall McLuhan takes it a step further. He defines literacy as a form of

schizophrenia. In his complex argument, McLuhan writes that phonetic writing splits

thought from action in a way that other alphabets—pictographic, ideogrammic or

hieroglyphic—fail to do. Phonetic letters, McLuhan writes, appear as building blocks

in the human mind that are assembled to craft words and this order of things is

indicative of the thought processes of the “Western child” (a phrase he borrows from

psychologist J.C. Carothers). Learning phonetic letters “detribalizes” an individual,

removing him from the oral culture.

145 Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of Minnesota
Press, 1983. 113.

No other kind of writing save the phonetic has ever translated man out of the
possessive world of total interdependence and interrelation that is the auditory
network. From the magical resonating world of simultaneous relations that is
the oral and acoustic space there is only one route to the freedom and
independence of detribalized man. That route is via the phonetic alphabet,
which lands men at once in varying degrees of dualistic schizophrenia.146

In other words, by converting oral sounds into written letters, the human mind begins

to picture these sounds as building blocks and this act of conversion from something

heard to something seen exhumes the thinker from the realm of oral communication.

It’s odd to fathom simply because so many of us have made the conversion and dwell

so completely in the world of phonetic language.

McLuhan suggests that we are alone with our thoughts because our thoughts

live entirely in our minds; we are exhumed from the tribe. Eagleton argues that when

we write our ideas down, they don’t belong to us anymore, and our solitude is simply

reaffirmed.

And public education certainly isn’t fostering a return to oral culture as

McLuhan aptly observes:

The school system, the custodian of print culture, has no place for the rugged
individual. It is, indeed, the homogenizing hopper into which we toss our
integral tots for processing.147

Since 1455, the world has been dominated by the idea of literacy. We are a literate

world. In the West, our literacy is bound primarily to a phonetic alphabet, building

blocks to form mental words, which become visual tools for communication. Given

our predilection for literacy, we insist that our educational systems dwell on the

146 McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto:
University of Toronto Press, 1962. 22.
147 Ibid., 244.

abilities to read and write, while oral communication, in any form, is placed

aside.

Over the next five centuries, the value of orality disintegrated: Oral

communication, we concluded, was less formal and by default less serious than

written exchanges. Because there was no proof of the oral exchange, it wasn’t

considered vivid or durable. By the middle of the 20th century, we were told “to get it

in writing,” and with that, literacy was king.

The Sacred Space

Reading is a pleasing experience.

Sitting in the Main Reading Room inside the Library of Congress, I’m

ordering books through the digital library catalog system; they arrive at my desk—

#171—hand-delivered by one of the librarians. The holdings in the Library of

Congress are extensive: there are over 32 million books and other printed materials

making it the largest repository of printed information in the world… and most of it is

just a digital data request away.

When the book arrives, I inspect the binding, holding it up beneath the reading

lamp mounted just above the slanted wooden desk, to confirm that the book is the

book I ordered. Once confirmed, I open the binding and begin flipping through the

seasoned yellowed pages. Books are tactile. We touch the leather binding, we leaf

through the pages, we gaze swiftly through the lines of printed characters searching

for something to halt our browsing and engage our full attention, our reading; hard,

fervent reading is a form of consumption like drinking from the well of knowledge.

74
This is an experience I’ve known most of my life and there is a fascination, a

longing to better know the mystery of this book and the mysteries of other books; how

they were written, how they were conceived and published and shipped through the

years to this moment of contact. I can smell the dry, mulching, weathered pages,

sneezing a little at the dust mites and other contaminants that have been borne from

this volume; I am alive in the moment being-with this book, placing the focus of my

consciousness inside the pages of this time capsule, this tangible form of telepathic

thought launched into the universe years or centuries before by someone, an author,

elsewhere, distant and gone.

There are actually many forms of reading. Theorists emphasize that of the

many things one gains from reading, cognitive reading or so-called “deep reading” is

probably the most important.148 Online readers, the experts say, are “prowlers” or

people who skim text-based media looking for certain words… much like the way we

read a restaurant menu. “Deep reading” is when we reach a level of concentration that

enchants us, almost hypnotizing us, to devour the words on the page. In this state, we

are thinking, we are evaluating and we are reasoning.

Theorists Maryanne Wolf and Mirit Barzillai explain the cognitive processes in

their essay “The Importance of Deep Reading”:

By deep reading, we mean the array of sophisticated processes that propel
comprehension and that include inferential and deductive reasoning, analogical
skills, critical analysis, reflection and insight. The expert reader needs
milliseconds to execute these processes; the young brain needs years to
develop them.149

148 Scherer, Marge. Challenging the Whole Child: Reflections on Best Practices in Learning,
Teaching, and Leadership. Alexandria: Association for Supervision and Curriculum Development,
2009.
149 Ibid., 131.

I love reading because there is something transcendent about the experience.

After hours of “deep reading,” I feel better, almost euphoric, and energized; I

feel what athletes call the “runner’s high;” and I feel that I’ve discovered the

restorative powers one can only find in a resting place, a place French author Marcel

Proust called the “reading sanctuary.”150

Trouble is, I may have been taught to love reading. In the modern age, we have

long placed emphasis on the importance of reading.

Literacy is power, we are told… and, in fact, it is a kind of power… but not

exactly as it is explained to us. We are told that literacy is a tool for self-instruction.

To learn something, all we must do is find a book and read through the ideas. But

literacy is a two-way street. Literacy is also an easement into the human mind.

Teaching someone to read is like boring a hole into their mind, opening a pathway for

written information to penetrate. The experience is much more complicated than we

imagine.

When we read, we set the eyes’ attention on the line of text and skim from left

to right seeing the pattern of symbols forming words, and then sentences, and then

ideas. Cognitive scientists are only now beginning to understand this process and they

describe it this way: When photons of light reflect off the written page, they penetrate

through the lens of the eye, which projects that light onto the retina at the rear of the

eyeball; here, the retina translates the visual information into electrical impulses, which

are transmitted through the optic nerve to reception centers inside the brain.151

150 Ibid.
151 Dehaene, Stanislas. Reading in the Brain: The New Science of How We Read. New York:
Penguin, 2010.

But it gets more complicated than that. Only a small portion of the retina—

known as the fovea—is where we actually see the text lettering. Because the fovea

occupies only 15 degrees of the human field of vision, we must constantly move

the gaze of the eye tracking the new letters through this narrow passageway of sight.152

Scientist Stanislas Dehaene wrote about the process:

The need to bring words into the fovea explains why our eyes are in constant
motion when we read. By orienting our gaze, we “scan” text with the most
sensitive part of our vision, the only one that has the resolution needed to
determine letters. However, our eyes do not travel continuously across the
page. Quite the opposite: they move in small steps called saccades. At this very
moment, you are making four or five of these jerky movements every second,
in order to bring new information to your fovea.153

Reading is not easy. To get to this point in this essay, you have done a lot of

work, gazing at and digesting the content of these letters. And our field of vision is just

seven to nine letters at a time. By the time/ you finish/ this sentence/ the beginning/

is out of/ view. The process of reading is akin to a long march taken seven steps at a

time.

As is commonly known, the structure of the retina is such that the field of clear
vision is of necessity limited. In the center we find the fovea centralis—a
depression approximately one fifth of a millimeter in diameter—which
constitutes the area of clearest vision. On the printed page this represents under
ordinary reading conditions a distance of approximately four millimeters—the
equivalent of three letter spaces. Immediately surrounding the fovea lies the
macula lutea—a yellow structure representing together with the fovea a
horizontal diameter of approximately three millimeters—which constitutes an
area of less distinct vision. On the printed page the macula lutea and the fovea
cover under normal reading conditions a distance of some sixty-four
millimeters—about forty-five letter spaces. Within this area vision decreases
rather gradually in clearness as the distance from the fovea increases.155
152 Ibid.
153 Ibid.
154 Ibid.
155 Smith, William Anton. The Reading Process. New York: Macmillan, 1923. 132.

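The figures quoted above invite a quick sanity check. What follows is a minimal back-of-envelope sketch of my own (an illustration, not code from Dehaene or Smith), assuming an average English word runs about five letters plus a trailing space:

```python
# Back-of-envelope checks on the reading figures quoted above.
# The six-character average word length is an assumption; real reading
# also includes regressions and re-fixations, so these are upper bounds.

AVG_CHARS_PER_WORD = 6  # five letters plus a trailing space (assumed)

def words_per_minute(letters_per_fixation: float,
                     saccades_per_second: float) -> float:
    """Convert the perceptual span and saccade rate into words per minute."""
    letters_per_second = letters_per_fixation * saccades_per_second
    return letters_per_second / AVG_CHARS_PER_WORD * 60

# Dehaene's ranges: seven to nine letters per fixation,
# four to five saccades per second.
print(words_per_minute(7, 4))   # 280.0 -- near typical adult silent reading
print(words_per_minute(9, 5))   # 450.0 -- an upper bound

# Smith's 1923 geometry: ~4 mm of the page equals about three letter
# spaces, so one letter space is ~1.33 mm wide.
letter_pitch_mm = 4 / 3
print(64 / letter_pitch_mm)     # 48.0 letter spaces across the 64 mm
                                # macula, close to Smith's "forty-five"
```

The low end of both ranges yields roughly 280 words per minute, comfortably within ordinary adult reading speeds; the high end overshoots precisely because real eyes double back and re-fixate. Smith’s millimeter figures, meanwhile, prove internally consistent to within a few letter spaces.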
Once the symbols are moved through the organic visual system into the brain,

the information reaches two different brain regions, which the act of reading connects and fuses.156

This is where the act of reading alters the cognitive process inside the human brain.

Learning to read involves connecting two sets of brain regions that are already
present in infancy: the object recognition system and the language circuit.
Reading acquisition has three major phases: the pictorial stage, a brief period
where children photograph a few words; the phonological stage, where they
learn to decode graphemes into phonemes; and the orthographic stage, where
word recognition becomes fast and automatic.157

As children, we learn reading in the following steps: We see the symbols, we

recognize the purpose of the symbols, we identify the word as a sound. In doing so, we

bind these two learning centers inside the brain creating a sight-sound relationship

there.

Brain imaging shows that several brain circuits are altered during this process,
notably those of the left occipito-temporal letterbox area. Over several years,
the neural activity evoked by written words increases, becomes selective, and
converges onto the adult reading network.158

This is the evolutionary shift that transforms the human mind from the organic, natural

state of oral-aural sound-based communication to the literal status of printed materials.

As we learn to read, these various learning centers inside the brain begin developing,

inter-connecting, enlarging, and growing more complex as synaptic pathways (like

cognitive superhighways) become more defined.

Our “instinct to learn” plays a crucial role in our capacity to learn to read.
Synaptic plasticity, which is extensive in children, but also exists in adults,
allows our primate visual cortex to adapt, in part, to the peculiar problems

156 Dehaene, Stanislas. Reading in the Brain: The New Science of How We Read. New York:
Penguin, 2010.
157 Ibid.
158 Ibid.

raised by letter and word recognition. Our visual system has inherited just
enough plasticity from its evolution to become a reader’s brain.159

Literacy alters our biology.

Learning to recognize symbols and assign oral-aural signatures to them, which

are pronounced inside the quiet mind, has altered the human animal. Literacy alters

human evolution. But Stanislas Dehaene does not think this is a bad thing:

A classic Darwinian concept, defined as “exaptation” by Stephen Jay Gould,
comes to mind. Exaptation refers to the conversion, in the course of evolution,
of an ancient biological mechanism to a role different from the one for which it
originally evolved. The minute bones, deep in the ear, that seem so perfectly
designed to amplify incoming sounds are an excellent example—Darwinian
evolution whittled them out of the jawbones of ancient reptiles. In a much-
cited article, François Jacob pictured evolution as a tireless tinkerer who keeps
a lot of junk in his backyard and occasionally assembles pieces of it to create a
new contraption. In my hypothesis, cultural invention arises similarly from the
recombination of ancient neuronal circuits into new cultural objects, selected
because they are useful to humans and stable enough to proliferate from brain
to brain.160

Maybe we evolved to be more pliant or as Dehaene suggests: more plastic… and this

“plasticity” enables the human brain to change and accommodate new

cognitive communication forms.

Of course, once these changes have taken place, there isn’t any way of turning

back. Once we committed to literacy, the organic and natural state of “orality” was

gone. It is for this reason that I define the human mind as the sacred space. Once these

changes have been made—once we’ve bitten into the fruit from the tree of

knowledge—there is no way of resetting or returning to the organic state. Instead, we

are forced to move forward down a different evolutionary path.

159 Ibid.
160 Ibid.

Writing and Daydreaming

Western writing owes a lot to the creative talents of its literary writers. The

process of writing fiction is a daunting one. The writer must conceive of an idea,

create characters, define action between the characters and move them through some

conflict before arriving at resolution. Literature is filled with great heroes who lived, at

least in the literary sense, great adventures. The craft of fiction writing is a testament

to the skills of the literary imagination. Begging the question: What kind of thinker

does it take to become a great writer? In 1908, Sigmund Freud took up that question

and authored the essay “Writers and Day-Dreaming” to explain the mental processes

that must exist:

The creative writer does the same as the child at play. He creates a world of
phantasy which he takes very seriously—that is, which he invests with large
amounts of emotion—while separating it sharply from reality. Language has
preserved this relationship between children’s play and poetic creation. It gives
[in German] the name of ‘Spiel’ [‘play’] to those forms of imaginative writing
which require to be linked to tangible objects and which are capable of
representation. It speaks of a ‘Lustspiel’ or ‘Trauerspiel’ [‘comedy’ or
‘tragedy’: literally, ‘pleasure play’ or ‘mourning play’] and describes those
who carry out the representation as ‘Schauspieler’ [‘players’: literally ‘show-
players’]. The unreality of the writer’s imaginative world, however, has very
important consequences for the technique of his art; for many things which, if
they were real, could give no enjoyment, can do so in the play of phantasy, and
many excitements which, in themselves, are actually distressing, can become a
source of pleasure for the hearers and spectators at the performance of a
writer’s work.161

Freud goes on to explain that fantasy is really just deliberate daydreaming and that

daydreaming is derived from REM sleep patterns and the fantasies that transpire as

we sleep.

161 Figueira, Servulo A., Peter Fonagy, and Ethel Spector Person, eds. On Freud's
"Creative Writers and Day-Dreaming". London: Karnac, 2013. 4.

Terry Eagleton offers a more precise explanation of Freud’s thinking: Eagleton

suggests that the creative writer learns the art of daydreaming from his sleep and,

through practice, transforms his daydreaming into fantasy and then, in the process of

writing those fantasies down, creates literal prose.162

The pool of literary talent is vast. From Homer to Edmund Spenser to William

Shakespeare to Jonathan Swift to Edgar Allan Poe to William Faulkner to C.S. Lewis

to J.R.R. Tolkien to Stephen King to J.K. Rowling; one can trace the wondrous

influences of fantastical writing. From The Odyssey, we got The Faerie Queene, which

inspired The Tempest, which inspired Gulliver’s Travels, which inspired Ligeia, which

inspired Light in August, which inspired The Lion, the Witch and the Wardrobe,

which inspired Lord of the Rings, which inspired The Stand, which inspired Harry

Potter and the Philosopher’s Stone.

There have been many high-water marks in literary history but at the center of

it were authors and the depths of their imagination. Freud argues that writing is a

matter of ego and the toil of working to resolve unsettled sexual issues. No matter. The

fact is, as far as literary depth goes, since 1500, a succession of writers have created

art out of the language of cold printer’s type. To think about it, this is a strange place

to dwell, weaving meaning into the hidden places formed by typesetters: The words

are written and the sentences are formed and edited and the pages are designed, and

the letter icons are placed in order, and inked before being pressed to pieces of paper,

which are bound and moved to market. The art of storytelling has taken on a solid

mature formula, and the medium of books and written text remains the dominant

162 Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of Minnesota
Press, 1983. 131-168.

communication force even now, in an age when the airwaves and communication

networks are teeming with other media forms.

But we have an opportunity now to end the age of cool media. Print is a lonely

medium. Writers work alone; readers read alone. There is no mass-communication

device that allows for literal communication to be conducted in any other way.

Besides, there are many other “hot media” to be enjoyed, commingled with the literal,

and we finally have the tools to bring the media together on a converged platform. The

problem, for now, is that the publishing world is too set in its ways, most individual

producers lack the complex talents necessary for multimedia publishing and audiences

are underwhelmed by the offerings that have been thus far presented.

But it doesn’t need to be this way.

Five Centuries of Silence

To this point, my argument has been that manuscripts were a form of oral

communication and printed books are a form of literal communication. We looked at

but didn’t necessarily read the manuscript; with the book, we read the text but the

pages lack the power and beauty of “illumination.” Mine is not an easy argument to

make. Because of the wholesale domination of literacy, it’s hard to argue that we need

to return to, or at the very least restore, the idea of oral communication. (I’m headed in

that direction but I’m not there yet.) Searching for an analogy, one might offer the

human relationship with a wedding cake: Initially, we perceive the beauty of the cake,

its design, its flourishes and we might take a picture of it. This approach is how we

consumed manuscripts. We looked at them, but we didn’t necessarily ingest them. Our

relationship with the printed book is the opposite: we don’t linger long over the quality

of the print or the beauty of the cover; instead we consume the content of the book,

like cake, and when we’re done, nothing remains. The Incunabula period was about

saving while ingesting, but the limits of print technology made mass-

produced illumination prohibitively expensive.

Yielding to that barrier, printers dwelled on a text-heavy model. Books may

have had light woodblock illustrations in them, but the content, for the most part, was

centered on the written information. We consumed books for their internal information

not for their external beauty.

The age of printing certainly changed Europe. One of the more important

changes was the movement away from monasteries; monastic scribes in distant idyllic

rural settings crafted manuscripts; now books were being published by commercial

enterprises located in business centers. Theology was supplanted by commerce. This

movement away from the monasteries also created a demand for libraries and other

institutions.

The number of universities also exploded. During the second half of the 15th

century, no fewer than 25 universities opened across central and southern Europe.163

This does not mean that, by the end of the fifteenth century, the European
university network was virtually complete. After all, new universities were still
being founded in the sixteenth century. However, a number of people began to
grow anxious, in the course of the fifteenth century, about the proliferation of
new foundations. The ancient universities tried, with varying degrees of
success, to preserve their monopoly.164

163 Rüegg, Walter, and Hilde De Ridder-Symoens. A History of the University in Europe.
Cambridge: Cambridge University Press, 1992. 55-62.
164 Ibid., 58.

The increase in universities reflected the rising demand for information and

instruction. This also created opportunities for a new professional class, which would

be populated by bankers, doctors, engineers, lawyers, scientists and so forth. Of

course, the growth of these professions necessitated a cyclical growth in books for

these respective disciplines. Books begot universities, which begot students, who

became professionals, who demanded books, which facilitated the growth of

universities. Publishers thrived, libraries grew, universities swelled. Monasteries,

however, were marginalized and culturally shoved aside.

Marshall McLuhan argues that during the Middle Ages, reading was done

aloud and that this was a holdover from the monastic culture.

But for centuries to come “reading” meant reading aloud. In fact, it’s only
today that the decree nisi has been handed down by the speed-reading institutes
to divorce eye and speech in the act of reading. The recognition that in reading
from left to right we make incipient word formations with our throat muscles
was discovered to be the principal cause of “slow” reading. But the hushing up
of the reader has been a gradual process, and even the printed word did not
succeed in silencing all readers. But we have tended to associate lip
movements and mutterings from a reader with semi-literacy, a fact which has
contributed to the American stress on a merely visual approach to reading in
elementary learning.165

McLuhan goes on to wonder if we have lost something with the idea of reading aloud.

Later in his book The Gutenberg Galaxy, McLuhan suggests that medieval monks

must have sat in their reading carrels, reading aloud, and he likened them to “singing

booths.”166 In the modern library, silence is the rule and patrons are considered

‘antisocial’ if they speak aloud in pairs or as they sit by themselves.

165 McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto:
University of Toronto Press, 1962. 83.
166 Ibid., 292.

For the same reason that the reading-room of the British Museum is not
divided into sound-proof compartments. The habit of silent reading has made
such an arrangement unnecessary; but fill the reading-room with medieval
readers and the buzz of whispering and muttering would be intolerable.

These facts deserve greater attention from the editors of medieval texts. When
the eye of the modern copyist leaves the manuscript before him in order to
write, he carries in his mind a visual reminiscence of what he has seen. What
the medieval scribe carried was an auditory memory, and probably in many
cases, a memory of one word at a time.167

So the practice of reading has moved into the silent realm of the internal monologue.

We sit alone, we hold the book, we scan the page, and we sound out the words in our

minds. Orality has been virtually banned in this environment and the fallout from this

transition in culture has had a sweeping effect.

Just look at what’s become of poetry. Poetry was created initially as a spoken

form of communication. The poet employs mnemonic devices—rhyme, rhythm,

meter—to remind himself of the next in a series of lines. This practice was created so

the poet could recite, from memory, hundreds of lines of verse, in an effort to share a

story. The blind poet Homer remains the model for this genre: he spent his life

travelling around Greece reciting the epics of the Iliad and the Odyssey from

memory.168 In the process of doing so, his recitations weren’t just stories, they were

also performances, and poetry, at its base level, was an oral medium. Poetry was born

directly from the oral tradition. Today, however, with the internalization of the written

word, poetry (as with all the other story genres) has been forced into a literal

container, a process that has repressed it, confining it to the realm of the internal

monologue.

167 Ibid., 92.
168 Hadas, Moses. Ancilla to Classical Reading. Pleasantville, NY: Akadine Press, 1999. 138-
144.

In the 1960s, with poets like E.E. Cummings, poetry appeared to grow comfortable

with this confinement. Walter Ong writes about this issue:

Cummings’s untitled Poem No. 276 (1968) about the grasshopper disintegrates
the words of its text and scatters them unevenly about the page until at last
letters come together in the final word ‘grasshopper’—all this suggests the
erratic and optically dizzying flight of a grasshopper until he finally
reassembles himself straightforwardly on the blade of grass before us. White
space is so integral to Cummings’s poem that it is utterly impossible to read
the poem aloud. The sounds cued in by the letters have to be present in the
imagination but their presence is not simply auditory: it interacts with the
visual and kinesthetically perceived space around them.169

To hear Ong write about it, one cannot help but believe that Cummings’s poem is

really a mutation of the original form. Poetry as spoken verse wasn’t created for a

written form; Cummings, however, is embracing the literal by transforming his poetry

verse into an animation that can only exist on a literal plane where the typography

becomes the poetry. But was this the initial intent of poetry? And with this mutation,

is Cummings’s work poetry at all or just an artful form of typography? For now,

Cummings’s approach to the form has gone dormant, awaiting another artist’s

interpretation.

As for the state of poetry today, there seems to be some seepage. Poetry has

begun escaping this captivity with the invention of the “slam poetry” movement, but

most poetry still dwells not in the living word, but rather in the literal pages of the

book.

169 Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen,
1982. 126.

Summary of Literary Media

The invention of the printing press moved the European community away from

manuscript through the hybrid age of the Incunabula and then finally into printed

books. In the process of doing so, the human relationship with the book shifted from

the oral observation—one where the user admired the artfulness of the Illuminated

Manuscript—towards the literal, or one where the user admires the language of

the text. In doing so, the value of media shifted away from “hot media”—or media that

was observed naturally—and towards “cool media,” which requires training and

concentration to master. That shift also isolated the reader from the tribe, killing our

sense of performance, as Marshall McLuhan observed. Now, instead of participating

in the oral traditions and the public discourses, the act of reading silenced the

discussion, creating an internal monologue in the minds of the reading public. This

move towards silence was slow but uniform and ultimately realized in the 1960s, at

least in the United States. Silence in the reading areas of the local public library

became the standard for public reading. The oral culture was “shhh’d” nearly into

extinction by the sound-obsessed reference librarian.

Along the way, something changed in the process of writing. During its

evolution, fiction writing, with the birth of narrative structure, transformed into an art

form. It took some time, but nonfiction storytelling finally made this transition too.

With nonfiction, it is my belief that writing experiments by authors including Tom

Wolfe and others are elevating journalistic writing to an art form as well. This is being

done by adapting narrative forms traditionally designated for fiction and applying

them to nonfiction stories. The aesthetics of nonfiction are still evolving but some

clear models including the “Wall Street Journal Style” have emerged and the

artfulness of the book has moved away from the physical designs and flourishes of the

manuscript and into the language of the written word. We no longer look at the book

to see its beauty; we must now read the book to discover its artfulness.

While this was going on, new technologies introduced new oral media

platforms and our sense of oral culture was reborn. Light was captured, sound was

recorded, the moving image was manufactured and a matrix of regional broadcasting

systems was erected. All these tools began bringing the global clan back together and

the conversation—centuries delayed—slowly reemerged. In time, “hot media” began

flowing over us again in the form of… photography, radio, the phonograph, the

motion picture, television… until, finally, along a winding beach road, it became

possible to sprint along, top down, Wayfarers on, listening to the electromagnetic

musings of the Beach Boys harmonizing their affections for “Barbara Ann” as the

chorus blasted forth from the AM speakers of your drop-top 1960 Ford Thunderbird;

suddenly, we were not alone; we were singing collectively to the dulcet tones set forth

dancing freely over the airwaves from a distant radio transmitter. We were tribal—or

nearly so—once again.

Part II: Return to Orality

Chapter 2

The Return of ‘Hot’ Media

Roughly 100 years after we shifted from an oral culture to a written one, we

began discovering new ways to revisit oral—or ‘hot’—media through scientific and

technological discovery. But there was something different in this return to oral media.

Because we had been introduced to literal culture and there was a subsequent upswing

in global literacy, our relationship with the new oral culture was filtered through the

lens of literal media. Communication theorist Walter Ong called this renaissance of

oral media the “secondary orality” and believed that these new oral media have a close

association with their literal counterparts.1

The secondary orality, however, was vastly different from the first. In man’s

organic state, we were oral creatures by nature and our oral culture included belonging

to groups; after all, you need at least two people to have a conversation. Ong describes

our oral language as something that begins within us and, when uttered, exists in the

moment. He calls this the “primary orality” and he describes it as man’s natural state.

Oral utterance thus encourages a sense of continuity with life, a sense of
participation, because it is itself participatory. Writing and print, despite their
intrinsic value, have obscured the nature of the word and of thought itself, for
they have sequestered the essentially participatory word—fruitfully enough,
beyond a doubt—from its natural habitat, sound, and assimilated it to a mark
on a surface, where a real word cannot exist at all.2

1 Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen,
1982. 133-164.
2 Ong, Walter J. Interfaces of the Word: Studies in the Evolution of Consciousness and
Culture. Ithaca: Cornell University Press, 1977. 21.

To understand the primary orality, Ong explains that we call our first language our

“mother tongue” and with good reason. Like the relationship with one’s mother, our

first language is personal; Ong writes that “our first language claims us...” in a way that makes

us a part of its world and it is from this vantage point that we build our understanding

of the universe.3 Literacy exhumes us from this universe, placing us in a new reality.

It seems offensively banal to note that written or printed “words” are only
codes to enable properly informed and skilled persons to reconstruct real words
in externalized sound or in their auditory imaginations. However, many if not
most persons in technological cultures are strongly conditioned to think
unreflectively just the opposite, to assume that the printed word is the real
word, and that the spoken word is inconsequential.4

During our literal phase, our thinking patterns were broken into letter blocks defined by

our alphabets and our consumption of literal media was done internally and often

alone. With the secondary orality, we approached new orally based media with a

literal understanding. Also, and more complex still, we found ourselves standing all

alone in crowds consuming oral media… which was both new and strange to us as

theorist Sarah Bonciarelli points out:

The new method for the acquisition and consumption of information represents
a form of “secondary orality”. According to Ong, second orality represents a
return to orality filtered through the written language, to which it is strictly
related. Ong observes that the individual, when reading written or printed texts,
falls back on himself. The second orality, however, “generates a sense for
group incommensurably larger than those of a primary oral culture”. Before
the rise of reading, the oral man had a very strong sense of belonging to a
group, simply because there were no alternatives. In our era characterized by
secondary orality, this sense is conscious and planned: the individual knows
that he has to pay attention to social connections.5

3 Ong, Walter J. Interfaces of the Word: Studies in the Evolution of Consciousness and
Culture. Ithaca: Cornell University Press, 1977. 21.
4 Ibid., 21.
5 Cavagna, Mattia, and Costantino Maeder. Philology and Performing Arts: A Challenge.
Louvain-La-Neuve: Presses Universitaires De Louvain, 2014. 31.

So writing separated human thought from the thinker; printing allowed human thought

to be mass-produced and portable; and now a series of inventions were about to

separate oral or ‘hot’ media from their respective sources, arming them with the

ability to be mass-produced as well. Trouble was, after generations of consuming

literal media, we had forgotten our natural oral/aural state and began consuming these

new oral media through the filter of a literal mind; reading English trained our eyes to

scan from left to right; reading created a temporal and physical distance from the

storyteller, which also isolated the reader from other members of the audience; finally,

the literal story—the book—made the story tangible, assigning a “thingness” to the

medium. With the advent of new oral technologies, we began applying this literal

understanding, this literal filter to the new orality.

One of the first hot media innovations involved light.

Photochemical

The first chemical concepts leading to the birth of photography date back to

1619. Inside his laboratory, chemist Angelo Sala left silver nitrate powder drying on a

bench—quite by accident—and when he returned, he was surprised to discover that

the light shining through a window had turned most of the powder black; only a line,

shaded by a pull on a window valance, protected the powder; beneath the shadow, the

powder remained white. Sala never saw the value of the discovery, but he did publish

a paper on the phenomenon and in it, he revealed that he was unsure if it was the sun’s

light or heat that caused the transformation. In 1839, a French researcher named

Louis-Jacques-Mandé Daguerre advanced the science enough to create a process

called the Daguerreotype, which captured a photographic image on a thin copper plate;

mercury vapor developed the image, which was then “fixed” in a salt solution and, with that, the process of

photography was born.6

From there, Daguerre experimented with long exposures, taking pictures from

his window of the Paris cityscape. His most famous image is the “Boulevard du

Temple,” which was one in a series taken in the spring of 1838 and is considered to be

the first image of a person. In the photograph, you can see the boulevard lined with

trees and buildings; in the foreground at a bend in the road, a man is standing with one

foot raised and placed on a shoeshine rack; because the man stood motionless as his

boots were polished, his image prevailed through the minutes-long exposure.7 The

success here is the fact that Daguerre ‘seized the light’ in the moment and, in doing so,

documented the realness of the Paris street scene for the first time in human history.

This is an astounding technological moment.

Author Jai McKenzie made these observations:

…like Boulevard du Temple, physically demonstrates that photography, from
its earliest beginnings, binds light with time and space, communicating new
properties of each. At this fundamental stage, the novel characteristics of these
light-space and light-time structures are that light-space transposes and transports space
while light-time slowly impresses itself as a two-dimensional image (which
also travels through time).8

Photography certainly evolved from there, as chemical advances allowed for

shorter exposures and improved optic technology inspired lenses that could take

images both very distant and very minute; the technology also evolved so that
6 Friedman, Avner, and David S. Ross. Mathematical Models in Photographic Science. Berlin:
Springer, 2003. 4.
7 McKenzie, Jai. Light and Photomedia: A New History and Future of the Photographic
Image. London: I.B. Tauris, 2013.
8 Ibid.

photographs could be taken in very low-light conditions or during very high-intensity

lighting situations. As a technology, the craft of photography continues to evolve and

the science has moved away from chemical-light capture to digital-light capture or the

process of capturing light with light sensors.

What has been lacking is a conversation over the impact that photography has

on the human animal and its ability to perceive light, space, time… and, ultimately,

memory.

Marshall McLuhan disliked the impact of photography. He worried that

photographs transformed people into possessions, which could be mass-produced and

circulated, like prostitutes; photography turns people into commodities. I’m not sure I

agree, but I certainly understand his concern. Betty Grable, for example, was an iconic

“pin-up girl” and the image of her in a swimsuit looking suggestively over her

shoulder became a GI favorite during World War II. Under that relationship, Grable

was a possession, a keepsake, and an object for adoring fans.

In a chapter entitled “The Photograph: The Brothel – Without – Walls,”

McLuhan addresses his concerns:

Both monocle and camera tend to turn people into things, and the photograph
extends and multiplies the human image to the proportions of mass-produced
merchandise. The movie stars and matinee idols are put in the public domain
by photography. They become dreams that money can buy. They can be
bought and hugged and thumbed more easily than public prostitutes. Mass-
produced merchandise has always made some people uneasy in its prostitute
aspect.9

The interpretations of the potency of photography vary. Take for example Daguerre’s

photograph of the man getting his shoes shined. Clearly, the audience cannot make out

who the man is; his identity is lost forever, and yet we have this idea of him. Is this a
9 McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964. 189.

bad thing? McLuhan might argue that Daguerre appropriated this man’s image and

today the idea of this man belongs more to the audience looking at the photograph

than it did to the man who cast it. Oddly, this argument is similar to the concerns

Eagleton made about the written word. Once the idea is written down, it develops a

life of its own. Such was the case with Daguerre’s photograph. When he closed the

shutter, he captured the light of the cityscape, which included the image of the man in

the boots. Today, that image lives on beyond the reach of the man, the photographer

and even the generations of people who have gazed upon the photograph over the last

180 years. The photograph became a commodity.

All that aside, this photograph does something more. With the death of the

manuscript in 1500, oral communication died, replaced by the printed word or literal

communication. Oral communication is ‘hot’ media; written communication is ‘cool.’

Cool media had dominated the globe for three centuries, and while this photograph did

little to dispel the powerful reach of literal communication, it does stand as one of the

first serious challenges to the influences of cool media. Photography, given its instant

visual impact, is considered a ‘hot’ medium and it would be the first of its kind to be

mass produced. Others would soon follow.

The Moving Image

The transition from still photograph to moving image is an interesting one. It

began with a question over a horse race. The question, simply, was over whether a

trotting horse ever has all four legs off the ground at the same time. In the latter half of

1872, a California businessman paid photographer Eadweard Muybridge to

94
conduct a series of investigations to prove the theory right or wrong. To do so,

Muybridge set up 12 cameras in a series, which were triggered by cables as the horse

ran past them.10 Muybridge concluded that the horse did leave the ground entirely, but

that finding is secondary to the experience. What he also did was create a 12-image filmstrip

that demonstrated locomotion and the idea of the motion picture was born.
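The mechanics of the rig reward a moment of arithmetic. The following is a hedged sketch of my own (the spacing and speed figures are assumptions for illustration, not drawn from Muybridge’s records), showing how a row of tripwire cameras amounts to an early frame rate:

```python
# If the cameras sat roughly half a meter apart and a racing trotter
# crossed them at about 10 meters per second (both figures assumed),
# the twelve exposures approximate an effective frame rate.

CAMERA_SPACING_M = 0.5    # assumed distance between trip lines
HORSE_SPEED_MPS = 10.0    # assumed speed of the trotting horse

seconds_between_frames = CAMERA_SPACING_M / HORSE_SPEED_MPS
frames_per_second = 1 / seconds_between_frames

print(seconds_between_frames)  # 0.05 seconds between exposures
print(frames_per_second)       # 20.0 effective frames per second,
                               # not far from cinema's later 24 fps
```

Under those assumptions, Muybridge’s battery of still cameras was already sampling motion at a cadence close to what the film projector would later standardize, which helps explain why his filmstrip reads so naturally as a moving image.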

Marshall McLuhan argues that the moving image, or film image, derives its

value from the printed word. He says that literate man can understand and comprehend

the action taking place in a film sequence, while the non-literate or oral man does not.

McLuhan says that literate man is trained to read the lines of a story from left to right

and, when he reaches the end of the typographical line, the eye resets to the beginning

of the next line. This linear eye training allows for a visual understanding that makes

reading and film viewing similar forms of interpretation.11

The close relation, then, between the reel world of film and the private fantasy
experience of the printed word is indispensable to our Western acceptance of
the film form. Even the film industry regards all of its greatest achievements as
derived from novels, nor is this unreasonable. Film, both in its real form and in
its scenario or scripted form, is completely involved with book culture.12

And while film and books are certainly linked, McLuhan argues that film is a “hot

medium” or one that washes over the viewer. Even in its transitions from black-and-

white to color, from silent to talking film, and into 3-D, film remains a hot medium, or one that

is easily digestible, as opposed to the written word. McLuhan takes his argument a

step further and argues that film, in its purest form, is a form of dreaming and that the

10 Braun, Marta. Eadweard Muybridge. London: Reaktion Books, 2012. 133-158.
11 McLuhan, Marshall. Understanding Media: The Extensions of Man. 286.
12 Ibid., 286.

visual narrative has the ability to transport its audience into a world beyond their own

literal selves.

The movie is not only a supreme expression of mechanism, but paradoxically it
offers as product the most magical of consumer commodities, namely dreams.
It is, therefore, not accidental that the movie has excelled as a medium that
offers poor people roles of riches and power beyond the dreams of avarice.13

After a century of development, the film industry remains one of the more potent

media forms in the world. McLuhan argues that film actually isn’t just one medium

but rather the orchestration of a series of media including color, lighting, sound and

performance. In its final form, McLuhan says the complexity of film can only really

be compared to the symphony orchestra. As with each, we sit in a theater, looking

forward, waiting for the media to flow over us. With the orchestra, it is the work of

Gustav Mahler or Joseph Haydn transformed from the written symbols into sounds

that dance over the room thrilling us with melody, tempo and form. With film, it is the

work of Terrence Malick or Stanley Kubrick flowing over us like a river, bathing us in

light and image and beauty. In either case, it is the media flowing forth and all we can

do is witness and consume… awash in the beauty.

Author Virginia Woolf, however, was not fooled by the tricks of the light. She

emerged from a movie theater unimpressed with the craft of cinema and, in 1926, wrote

a scathing review of the new medium in an essay simply entitled “The Cinema.” In it,

she compares film making to a new orchestra, fully formed but completely unschooled

in the art of making music.14

13 Ibid., 291.
14 Woolf, Virginia. "The Cinema." Full Text | Woolf Online. Accessed June 03, 2017.
http://www.woolfonline.com/timepasses/?q=essays%2Fcinema%2Ffull.

It is as if the savage tribe, instead of finding two bars of iron to play with, had
found scattering the seashore fiddles, flutes, saxophones, trumpets, grand
pianos by Erard and Bechstein, and had begun with incredible energy, but
without knowing a note of music, to hammer and thump upon them all at the
same time.15

Instead, she describes the cinema as visual noise that fools the mind into believing that

life itself is unimportant. She goes on to say that the cinema is a parasite of literature

and many of its stories are appropriated from the more mature medium.

The cinema fell upon its prey with immense rapacity, and to the moment
largely subsists upon the body of its unfortunate victim. But the results are
disastrous to both. The alliance is unnatural. Eye and brain are torn asunder
ruthlessly as they try vainly to work in couples.16

Writing, she contends, is more nuanced and complex.

So we lurch and lumber through the most famous novels of the world. So we
spell them out in words of one syllable, written, too, in the scrawl of an
illiterate schoolboy. A kiss is love. A broken cup is jealousy. A grin is
happiness. Death is a hearse. None of these things has the least connection with
the novel that Tolstoy wrote, and it is only when we give up trying to connect
the pictures with the book that we guess from some accidental scene—like the
gardener mowing the lawn—what the cinema might do if left to its own
devices.17

She certainly raises some important concerns. One might argue that cinema, by its

very design, lacks the capacity for the complexity and nuance often found in literature. Or,

in its development, it hasn’t matured yet to a point where audiences can truly

appreciate the visual expression of story. Further, it’s worth noting that many films

lift their stories from the printed word—novels, biographies, graphic novels, comic

books—and are merely repackaged in a cinema format.

15 Ibid.
16 Ibid.
17 Ibid.

Reflecting again upon Walter Ong’s idea about “secondary orality,” it was

clear that film was indeed oral media, which had been passed through the literal media

filter. As with most works of written fiction, films follow a linear story arc, which

peaks early introducing conflicts, which the protagonist fights to correct. In most

cases, there is a chronology: a beginning, a middle and an end. However, there have

also been some advances in nonlinear storytelling, which obviously breaks from the

chronicle form, arriving at a more complex narrative form. Narrative, as theorist

Hayden White suggests, is an “emplotment” of story, packaged and neatly designed

with an eye towards the aesthetic.

Searching for modern examples, the ‘surprise’ in Quentin Tarantino’s 1994

classic Pulp Fiction comes when we learn near the end of the film that various

episodes attached to the sub-plots in the movie are presented out of chronological

order, which creates a different but interesting narrative form. This realization comes

when hitman Vincent Vega (played by John Travolta), one of the key characters, is

killed halfway through the film only to return in later scenes to help fill in the blanks

in the overall narrative. This temporal break elevates the aesthetic of the film

medium—because it departs so aggressively from the chronicle form—and the

artfulness arises from the fact that Tarantino does not confuse his audience’s

understanding of the meaning. The work also inspired a generation of directors across

Hollywood to search for new ways to repackage this emerging nonlinear story form.

Author Vincent LoBrutto wrote about Pulp Fiction:

The paradigm for contemporary nonlinear storytelling is exemplified in
Quentin Tarantino’s Pulp Fiction, which led the way for other 1990s and early
twenty-first-century filmmakers. Examples include The Usual Suspects and
Memento, which unfold in a backward chronology driven by the mental state

of the main character. Tarantino arrived at the structure of Pulp Fiction by
weaving together characters and story lines he had created working in the
tradition of Mario Bava’s Black Sabbath and literary icon J.D. Salinger’s
handling of the Glass family in his stories and novels. The original manuscript
for the Pulp Fiction screenplay ran 500 pages as Tarantino incessantly
comingled characters and their stories as new narrative inventions inspired him
during the process. Tarantino evolved a circular narrative structure that brought
characters back to a central episode.18

Again, the surprise for the audience here was the simple break in the chronology and

yet it was perceived—rightly so—as a big moment in film storytelling. The wonder in

this is born of the fact that audiences have gotten used to seeing film presented in a

linear form, which is a form that emerged out of the literary tradition.

Of course, there are many other films—including Citizen Kane (1941) and

Memento (2000)—which have drifted away from the chronicle or chronological form

but it is important to note that as film developed its own story design, it evolved—as

writing did—away from chronicle and towards the more complex narrative form and

this, in turn, created new opportunities for the film aesthetic with regard to the pattern

of story.

Capturing Sound

During the 19th century, scientists began learning how to capture sound. In the

United States, inventor Thomas Edison perfected an audio recording device he called

the “phonograph,” which he perceived as auditory writing or a talking machine.19

Companies like the Victor Company perceived the phonograph as a medium for

speaking books.

18 LoBrutto, Vincent. Becoming Film Literate: The Art and Craft of Motion Pictures. Westport,
Conn.: Praeger, 2005. 3.
19 McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964. 290-300.

The Victor Company, which under different names acquired a hold over the
American gramophone industry for more than half a century, followed an
approach that Michael Chanan has called ‘a consumption model’: the record
was being dealt with like a book and not like a photograph. Successful
performers were to make larger amounts of money out of their recordings,
however, than most authors made out of their books. Thus, [Enrico] Caruso,
who made his first quality recording in 1901 and his first million-selling record
in 1904, went on to earn two million dollars from his recordings by the time of
his death.20

Like the published book, the gramophone and the phonograph made it possible to

mass produce audio recordings and disperse them globally. And like the printing

press, the phonograph was envisioned as a platform for words—as it happens, the

spoken word—but it wasn’t seen as anything other than an extension of the telephone.

It wasn’t until the early 1900s that the phonograph and its younger cousin, the gramophone,

became tools for playing music.

As media historians Asa Briggs and Peter Burke point out, the moment Italian

tenor Enrico Caruso began recording his voice, he stopped being just a performer and

became a “recording artist.”21

Today, those terms are commonly exchanged but there is a distinction here. A

“performer” must be seen in the public forum; a “recording artist” and their sound can

be mechanically reproduced, distributed far and wide, and listened to in a variety of

private settings… and at a substantial cost benefit to the consumer. This

transformation in the consumption model changed how and why music was

produced.

20 Briggs, Asa, and Peter Burke. A Social History of the Media: From Gutenberg to the
Internet. Cambridge, UK: Polity, 2014. 159.
21 Ibid., 150-160.

Before recorded sound, music was reproduced through sheet music and the

chore of the orchestra and the maestro was one of fidelity. The measure of a

symphony orchestra was how close the music sounded to the intention of the

composer. If Ludwig van Beethoven’s notations were accurate, the maestro had some

room for interpretation with regard to tempo and timbre but still had to respect the

composer’s initial designs.22

Recording music had the opposite effect. Once the sound is recorded, that

version of the performance is preserved in perpetuity; to hear it, one need only replay

the recording. So, it is not necessary to record the same performance over and over

again if the orchestra’s key mission is fidelity. Instead, variations of a piece of music

were encouraged and recordings of other orchestras only fortified an interest in

different sounds and events. The philharmonic orchestras in Boston and Chicago and

Seattle all had variables related to talent, timbre, tempo and tone making variations of

the same piece of music collectable to a growing phonographic audience. One can

have a dozen different recordings of Beethoven’s 9th symphony and each should be

phonically distinct from the others, as author Lawrence Lessig explains:

With these new technologies, and for the first time in history, a musical
composition could be turned into a form that a machine could play—the player
piano, for example, or the phonograph. Once encoded, copies of this new
musical work could be duplicated at a very low cost. A new industry of
“mechanical music” thus began to spread across the country. For the first time
in human history, with a player piano or a phonograph, ordinary citizens could
access a wide range of music on demand. This was a power only kings had had
before. Now everyone with an Edison or an Aeolian was a king.23

22. Peyser, Joan. The Orchestra: A Collection of 23 Essays on Its Origins and Transformations. Milwaukee, WI: Hal Leonard, 2006.
23. Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York, NY: Penguin Books, 2009. 24.

Conductor and composer John Philip Sousa was appalled. Lessig writes that Sousa

feared that “amateur” culture would suffer because people would be more inclined to

listen to pre-recordings than to produce music themselves and that disparity would

place the influences of culture in the hands of the elite. Apparently, Sousa was correct: in 1899, 151,000 phonographs were manufactured in the United States; in 1914, manufacturers produced 500,000 and sold 27 million records that same year.24

The movement also defined a producer/audience relationship Lessig describes

as a “read only” environment. This, he explains, is where the professionals can record

and transmit data but the audience can only receive it. This dynamic—the death of the

amateur performer and the emergence of an elite media class—would repeat itself for

the next hundred years and, with the assistance of the corporations, and the U.S.

Congress, a copyright culture would thrive that worked to suppress the adaptation of

previous works to new incarnations. Lessig describes this second group, this creative

class of people who take existing works and revisit them as the “read-write” culture.25

It’s hard to imagine today, but Enrico Caruso was a “pop artist,” simply because he may have been the first to allow this new medium to mass-produce his sound; after all, mass production is the heart and soul of popular culture. A series of other musicians would follow, including James Brown, Ray Charles, Patsy Cline, Bob Dylan, Billy Joel, George Michael, Roy Orbison, Elvis Presley, Jimmie Rodgers, Frank Sinatra, Britney Spears, and Hank Williams, among many others. With each, popularity hinged upon a mixture of unique-but-accessible music presented at a time when audiences were prepared for increasingly more sophisticated music. As with all media, the medium must mature at a pace equal to (or slightly ahead of) the sophistication of the consuming audience. So, without Jimmie Rodgers, listeners may not have been prepared for Hank Williams… and without Hank Williams and Chuck Berry, listeners may not have been prepared for Elvis.

24. Ibid., 23-33.
25. Ibid., 23-33.

In the 1960s, the sophistication of album rock hit a high point when the Beatles

and other bands began exploring the idea of an audio narrative designed for the long-

playing album. In 1966, the Beatles decided they could no longer perform before live audiences simply because of the screaming brought about by Beatlemania, and they voluntarily severed their performance relationship with their audience.

By the end of 1966, the Beatles began to see their relationship to the
crowd as an antagonistic one.

Sgt. Pepper finds Lennon, McCartney, Harrison, and Starr retreating from the
public that had so harassed them with Beatlemania and Beatle bashing.
Ironically, part of that desire to abandon live performance was prompted by the
repressive South, from which their chief influences—rock, blues, country, and
jazz—had come and against whose repression these forms had sprung. Such
tensions presented the group with a crisis of identity, which the Beatles tried to
resolve on Sgt. Pepper—through new “readings” of their musical influences,
newly developed philosophical ideals, the developing drug culture, and the
world they wanted to change.26

Absent the ability to perform live, the Beatles looked inward and began producing more aesthetically complex albums that, almost overnight, evolved to a point where they were too complicated for live performance. In 1967, with the release of Sgt. Pepper’s Lonely Hearts Club Band, the Beatles marked a substantial turn towards the aesthetic.

In contrast, George Martin defined the band’s new career phase by saying that Sgt. Pepper was a ‘watershed which changed the recording art from something that merely made amusing sounds into something which will stand the test of time as a valid art form: sculpture in music, if you like’ (Frith 1982, 4). His claim is important because it suggests that before Sgt. Pepper, the Beatles were making nothing of any lasting worth. Suddenly, the band found a way to give their music credibility by emphasizing its relation to the fine arts.27

26. Womack, Kenneth, and Todd F. Davis. Reading the Beatles: Cultural Studies, Literary Criticism, and the Fab Four. Albany: State University of New York Press, 2006. 131.

When Sgt. Pepper’s hit the stores, it lingered as the top-selling record for weeks, debuting at number one and ultimately selling 30 million copies worldwide.28 Other examples that followed include the Beatles’ White Album, Abbey Road, and Let It Be; The Who’s Tommy and Quadrophenia; the Rolling Stones’ Exile on Main St.; Pink Floyd’s Dark Side of the Moon, Wish You Were Here, Animals and The Wall… among many others.29

The strongest circles of influence involve the artists of the 1960s, and
competition, interaction and influence between Bob Dylan, the Beatles, the
Rolling Stones and the Beach Boys are central to histories of rock. Tales
abound of how Dylan (in)famously influenced the Beatles, and, in the words of
John Harris, ‘goaded them towards a new maturity like a frazzled Moses’.
Their meeting has now become a crucial moment in rock history….30

Overall, this period of pop culture represented a higher respect for the aesthetics of

popular music, ultimately defining album rock as a serious, but under-appreciated, art

form. The key component here was the fact that these albums, taken together, told

stories and, because the same people recorded these sounds at the same time under the same conditions, the songs for each of the albums had a shared audio and temporal relationship, forming an audible tapestry of sorts.

27. Halligan, Benjamin. Arena Concert: Music, Media and Mass Entertainment. Bloomsbury Publishing Plc, 2016. 39.
28. McIntyre, Hugh. "50 Years Later, The Beatles Are Back At No. 1 With 'Sgt. Pepper's Lonely Hearts Club Band'." Forbes. June 03, 2017. Accessed June 03, 2017. https://www.forbes.com/sites/hughmcintyre/2017/06/03/50-years-later-the-beatles-are-back-at-no-1-with-sgt-peppers-lonely-hearts-club-band/#4043903c7029.
29. Jones, Carys Wyn. The Rock Canon: Canonical Values in the Reception of Rock Albums. Aldershot: Ashgate, 2009. 53-76.
30. Ibid., 56.

In the case of Sgt. Pepper’s Lonely Hearts Club Band, the album begins and

ends in the same place: we meet the fictional orchestra as it performs before an

audience and, at the end of the record, the band returns to say goodbye; in each case,

the producer adds the sounds of a cheering audience and other symphonic sounds to

create the illusion of a live music experience. This connectivity packages the album

neatly, forming an auditory or oral-aural form of storytelling not unlike literature; in

fact, one might say the narrative design of Sgt. Pepper’s is similar to the “Wall Street

Journal Style” form. And, of course, the Beatles’ work certainly defined a shift in the

artistry of pop music.

At times this underlying narrative of evolution is not simply implied, it is


explicitly stated. An account of Marvin Gaye’s career maintains that ‘unlike
most soul greats, he maintained an artistic evolution (albeit sometimes
erratically) over the course of three decades… he was one of the few soul
pioneers to craft lyrically ambitious, album-length singer-songwriter
statements.’ The speed with which the Beatles progressed in artistic
experimentation is often recounted with wonder, as The Mojo Collection notes
with near disbelief that ‘the progression from the zesty “yeah, yeah, yeah” or
She Loves You to the mesmeric, acid-spiked Tomorrow Never Knows took
four Liverpool kids just 33 months.’31

When Sgt. Pepper’s debuted, it was so distant from everything that had come before that it was almost as if another band had formed and assumed the name “Beatles,”

replacing the pop quartet with more sophisticated musicians. That wasn’t the case, and

the lasting influence of Sgt. Pepper’s continues to linger today.

One must also observe the fact that this form of storytelling draws its roots—

given its rhyme, rhythm and meter—from the poetic forms. In fact, the very act of recitation is a throwback to the days of Homer except, now, the poetry is recorded, assigned to a medium and packaged in a process that disconnects the performer from the audience. Again, we see the influences of Walter Ong’s idea of oral media passing through a literal filter; just as the book removed the poet from the performance, the record album removed the recording artist from the performance… driving a wedge between the artist and the audience. One could conclude that recorded music in its packaged form is very similar to the book in that sense… that all of the potential for improvisation is lost and that the audience, often reduced to individuals listening alone, is isolated in its appreciation of the art form.

31. Jones, Carys Wyn. The Rock Canon: Canonical Values in the Reception of Rock Albums. Aldershot: Ashgate, 2009. 59.

Social theorists Nicholas Abercrombie and Brian Longhurst wrote about the

“ritual” of performance:

All performances involve a degree of ceremony and ritual (and ceremonies


require performance, as we indicated above). Going to the theatre is a
ceremonial event. The audience may dress relatively formally, the play is
received in silence, the rules of behaviour are fairly circumscribed. It would be
tempting to see performances in the mass media as somehow lacking this ritual
quality. However, as we shall argue later, even the act of watching television
or listening to a record at home can have elements of ceremony. In turn, all
performances, though to very different degrees, will be invested with a sense
of the sacred and the extraordinary. Religious worship is an obvious example,
though perhaps atypical. Many political meetings are imbued with the sense
that something out of the ordinary is going on, something that transcends
everyday life and is not part of it. We have to stress again that qualities of
sacredness are attributed to performances in very different ways and to very
different degrees. The act of listening to recorded music while washing up is
clearly less extraordinary and more profane than attending a concert of the
very same music.32

As for the storytelling aspects of the record album, it’s my belief that the record album, for a time, was really just another story form crafted for a ‘hot’ media experience, and this work found an audience that appreciated the aesthetic of the audio story. Listen to The Who’s Tommy and Quadrophenia—which Pete Townshend called “rock operas”—and it’s clear that the band perceived the concept for each record as a complete and whole story. And these are just two of the many “concept albums” to emerge from the period.

32. Abercrombie, Nicholas, and Brian Longhurst. Audiences: A Sociological Theory of Performance and Imagination. London: Sage Publ., 2003. 41.

In the 1970s, if you were young and artistically ambitious, it seemed as if


writing the “Great American Novel” was no longer a worthy pursuit; instead,
chances are you were a rock musician trying to record the ultimate concept
album. The concept album—a collection of songs (or sometimes a single
album-length song) arranged around a single subject (“concept”) or narrative
structure, became the rock version of the nineteenth-century song cycle. It
became the vehicle of choice for exploring “deep” philosophical ideas in a
sustained manner, an invitation for the listener to join in the artist’s
contemplation and enter their artistic universe for a while. Music critic Jon
Landau once asserted, “The criterion of art in rock is the capacity of the
musician to create a personal, almost private, universe and to express it fully”.
If Landau is correct, the concept album was one of the most prominent and
distinctive manifestations of rock’s “art” impulse in the late 1960s and early
1970s.33

From the mid-1960s to the early 1990s, dozens of concept albums were produced, and these and other works encouraged listeners to go out and purchase much more sophisticated sound equipment; soon, records and record players became staples of most American homes and dorm rooms in the 1970s, 1980s and 1990s. Concurrently, the pop music movement exploded in popularity during these same decades, and the rapid growth of radio had some influence over the popularity of pop music as well.

Radio Fills the Air

On October 30, 1938, WABC radio in New York City began reporting that a

meteorite had slammed into the Earth, crashing on farmland in New Jersey, and that aliens had emerged from the object and were now doing great harm to the people in and around Newark, N.J.

33. Holm-Hudson, Kevin. Genesis and The Lamb Lies Down on Broadway. London: Routledge, 2016. 8.

As the story moved forward, broadcasters relayed greater and greater details

about the alien invasion, reporting that Martian forces were moving towards New

York City.

The story continued, reporting that the U.S. military was unable to stop the invasion and that life in the United States and around the world was doomed. WABC wasn’t

the only radio station reporting the story and, in fact, the report was reaching millions

of Americans across the country who took the broadcast literally. After all, the

information—incredible as it seemed—was presented like news, it sounded like a

traditional newscast, and it was airing over a (new but accepted) medium known as a

resource for news: Why wouldn’t the invasion story be true? As you can imagine,

some people panicked.34

Long before the broadcast had ended, people all over the United States were
praying, crying, fleeing frantically to escape death from the Martians. Some
ran to rescue loved ones. Others telephoned farewells or warnings, hurried to
inform neighbors, sought information from newspapers or radio stations,
summoned ambulances and police cars. At least a million of them were
frightened or disturbed.35

Of course, the broadcast was a hoax, a performance, an entertainment. It was the brainchild of producer and actor Orson Welles, who adapted the book The War of the Worlds by H.G. Wells for the broadcast; the program aired the night before Halloween. Their purpose, of course, was to have some fun, offering a little frightful entertainment hours before the Halloween celebration, but in their effort to authentically duplicate radio news, they may have gone too far.

34. Schwartz, A. Brad. Broadcast Hysteria: Orson Welles's War of the Worlds and the Art of Fake News. Place of Publication Not Identified: Hill & Wang, 2016. 14.
35. Cantril, Hadley. The Invasion from Mars; a Study in the Psychology of Panic. New York: Harper and Row. 47.

Americans had grown accustomed, in recent months, to hearing radio


programs regularly interrupted by distressing news from Europe as Nazi
Germany pushed the world closer and closer to war. Welles and the Mercury
Theatre copied the style of those bulletins as closely as they could, giving their
invasion from Mars a terrifying immediacy.36

To their credit, the broadcast—as impossible as it seemed—certainly sounded like

news and the story they related caused millions of people to react hysterically. Such

was the potency of radio broadcasting. At the time, radio was live, immediate,

personalized and injected directly into the homes of millions of Americans. Today, we

continue to underestimate the potency of this medium and it continues to hold sway

over the listening public.

So how did we get here?

Radio is an offshoot of the telegraph. The idea was to create a wireless version

of the telegraph and there are many who lay claim to its invention. Experiments in

radio date back to the middle of the 19th century and many famous and not-so-famous

people dabbled in it. Today, the common belief is that an Italian named Guglielmo

Marconi was the pioneering engineer who created a viable wireless radio system in the

1890s.37

The system was initially used much like a telegraph but, over time, broader

uses were found. In the years after World War I, experiments in one-way transmission

inspired inventors to develop a commercially viable receiver. Soon, a symbiotic relationship had formed where broadcasters began developing networks and curious listeners began buying radio receivers. By 1920, radio had reached a critical mass and Americans began buying radio receivers by the millions.38

36. Schwartz, A. Brad. Broadcast Hysteria: Orson Welles's War of the Worlds and the Art of Fake News. Place of Publication Not Identified: Hill & Wang, 2016. 7.
37. Hong, Sungook. "Wireless: From Marconi's Black-Box to the Audion." Science, Technology, & Human Values 28, no. 1 (January 2003): 176-80.

As signal towers sprung up across the globe, radio stations acquired the ability

to take pre-recorded sound and project it over a region saturated with audio receivers

and the age of radio was born. This process of transmitting sound over long distances

to thousands of users is another form of Walter Benjamin’s idea about “The Work of

Art in the Age of Mechanical Reproduction,” but with an interesting twist. Only one

sound is being transmitted from the radio booth and it is the radio receiver that does

the duplicating; basically, the duplication isn’t taking place at the origin of the

broadcast but rather at the point of reception. Marshall McLuhan argues that this

creates a sense of intimacy for the listener:

Radio affects most people intimately, person-to-person, offering a world of


unspoken communication between writer-speaker and listener. That is the
immediate aspect of radio. A private experience. The subliminal depths of
radio are charged with the resonating echoes of tribal horns and antique drums.
This is inherent in the very nature of this medium, with its power to turn the
psyche and society into a single echo chamber. The resonating dimension of
radio is unheeded by the script writers, with few exceptions.39

Reading also has an intimacy to it: The writer writes, the reader reads; but there is a

stark difference here. With radio, there is the immediateness of the temporal broadcast.

Radio is an “appointment medium,” which means that to participate in the shared experience, one had to have the radio on and be listening at the precise moment the message was being imparted; doing so also joined the listener to a shared experience with other listeners. With books, there isn’t a temporal component, which means the writer can write and the reader can encounter the information years later. Radio demands a precise audience and this precision is unifying. So, while reading defines the individual, radio forms communities.

38. Douglas, George H. The Early Days of Radio Broadcasting. Jefferson, NC: McFarland, 2001.
39. McLuhan, Marshall. Understanding Media: The Extensions of Man. 261.

Since literacy had fostered an extreme of individualism, and radio had done
just the opposite in reviving the ancient experience of kinship webs of deep
tribal involvement, the literate West tried to find some sort of compromise in a
larger sense of collective responsibility. The sudden impulse to this end was
just as subliminal and obscure as the earlier literary pressure toward individual
isolation and irresponsibility; therefore, nobody was happy about any of the
positions arrived at. The Gutenberg technology had produced a new kind of
visual, national entity in the sixteenth century that was gradually meshed with
industrial production and expansion. Telegraph and radio neutralized
nationalism but evoked archaic tribal ghosts of the most vigorous brand. This
is exactly the meeting of eye and ear, of explosion and implosion….40

And radio came at us like nothing before.

Starting in 1920, American corporations began experimenting with radio

transmitters. On Election Day that year, KDKA began broadcasting results from the

Presidential election to its audience in Pittsburgh, Pennsylvania. Warren G. Harding

won the election and KDKA went down in history as the first licensed radio station in

the United States. The station was owned by Westinghouse Electric and was launched

to encourage sales of its radio receivers. By the end of the year, there would be 11

radio stations; in 1921, 15 more stations opened up; and in 1922, the numbers

exploded, adding 150 more.41

“After the KDKA election broadcast, radio swiftly captured the imagination of Americans and became a craze,” writes Tom Lewis in his essay “‘A Godlike Presence’: The Impact of Radio on the 1920s and 1930s.” “By the end of 1923, 556 stations dotted the nation’s map in large cities and places…and an estimated 400,000 households had a radio.”42

40. Ibid., 301.
41. Douglas, George H. The Early Days of Radio Broadcasting. Jefferson, NC: McFarland, 2001.

The American public responded, of course, purchasing millions of radios

defining what would become the first major adoption of a consumer electronic device.

At RCA, David Sarnoff predicted with surprising accuracy that the public would purchase 1 million radio music boxes, earning the company upwards of $80 million over a three-year period; and his was just one company. By the end of the decade, these companies would sell over $3.2 billion worth of consumer radio equipment.43

The way America (and the world) responded was overwhelming. With the

demand for radios climbing, the companies began investing in radio programming and

several content streams were formed. Music dominated the airwaves but it was soon

supplemented with weather reports and public readings; then narrative storylines were

introduced along with news and sports programming.44

By 1934, the radio industry had grown so fast that corporations began fighting for transmission rights among other things, and turf battles formed. So much so that the U.S. Congress crafted the Communications Act of 1934, which laid out the rules for public broadcasting, setting standards for conduct on the airwaves and permissions for transmission signals; the act also formed the Federal Communications Commission, which would police and manage the public airwaves.45

42. Lewis, T. "'A Godlike Presence': The Impact of Radio on the 1920s and 1930s." OAH Magazine of History 6, no. 4 (1992): 26-33. doi:10.1093/maghis/6.4.26.
43. Ibid.
44. Ibid.

Orson Welles’s “War of the Worlds” broadcast continues to be a sticking point in the history of public broadcasting. Because Americans had allowed

radios to permeate the sanctity of their homes, radio was now thriving inside their

living rooms and homeowners were unaware of the value or the invasive potential of

the medium. Theorists Theodor Adorno and Max Horkheimer take it a step further

comparing the power of media with the injective potency of a hypodermic needle,

believing that popular ideas can be inserted directly into the minds of the listening

public.46 Theorist Pieter Fourie made these observations:

The theory equates the media with an intravenous injection: certain values,
ideas and attitudes are injected into the individual media user, resulting in
particular behavior. The recipient is seen as a passive and helpless victim of
media impact.47

The key was finding a mind willing and eager to accept the information as fact. Given

the authentic presentation of Welles’ “War of the Worlds” mockumentary, it was clear

that millions of Americans were susceptible to what really amounted to little more

than early-20th century “fake news.” All this happened as radio was just getting started, and a few years later, radio was used for something other than entertainment.

The famous Orson Welles broadcast about the invasion from Mars was a
simple demonstration of the all-inclusive, completely involving scope of the
auditory image of radio. It was Hitler who gave radio Orson Welles treatment
for real.48

45. Goodman, David. Radio's Civic Ambition: American Broadcasting and Democracy in the 1930s. New York: Oxford University Press, 2011. 288.
46. Edwards, Mike. Key Ideas in Media. Cheltenham: Nelson Thornes (Publishers), 2003. 159.
47. Fourie, Pieter J. Media Studies: Media History, Media and Society. 294.
48. McLuhan, Marshall. Understanding Media: The Extensions of Man. 401.

We must also reflect upon Nazi-era Germany and the impact radio had in aiding Adolf

Hitler’s rise to power. Radio has the effect of disassociation; it separates the performer

from the audience; it also separates the audience from each other. Although radio

forms us into clans, it separates clan members into isolated individuals who don’t have

the ability to observe how their peers are reacting to the radio message, which exists in

ubiquity. This is the essence of the Secondary Orality.49 In this isolation, radical ideas

have the ability to thrive. Because of these powerful properties of radio, Hitler had an

unequalled influence over the German people during his ascension. It was clear that

Hitler or his propaganda expert Joseph Goebbels saw an opportunity to employ radio

to bring the German people to accept their radical political agenda and the Nazis did

so with stunning impact.

In their book Introduction to Psychology: Gateways to Mind and Behavior,

authors Dennis Coon and John O. Mitterer address the potency of isolation in their

section on “Brainwashing”:

Brainwashing typically begins by making the target person feel completely


helpless. Physical and psychological abuse, lack of sleep, humiliation, and
isolation serve to unfreeze, or loosen former beliefs. When exhaustion,
pressure, and fear become unbearable, change occurs as the person begins to
abandon former beliefs.50

Isolation eliminates the ability for competing messages to challenge the primary

message and, in a radio environment, that one message becomes definitive. Given the

dire economic troubles—famine, hyperinflation, poverty, unemployment—during the Weimar Republic, Germans were vulnerable to the Nazi propaganda message, and radio injected that message right into their living rooms.

49. McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press, 1962.
50. Coon, Dennis, and John O. Mitterer. Introduction to Psychology: Gateways to Mind and Behavior. Boston, MA: Cengage Learning, 2016. 567.

The powerful reach of radio does have the potential for positive impact.

President Franklin Delano Roosevelt was one of the first world leaders to appreciate

the power and reach of radio. Starting just days after his inauguration in 1933—at

the height of the Great Depression—FDR began hosting the first of his 31 so-called

“fireside chats” where he used radio to explain the details of his New Deal

legislation.51 Because FDR was using public radio to make his broadcasts, he found a

way to circumvent corporate media filters and reach directly into the living rooms

of the American public.

Through radio and television, for the first time, voters did not actually have to
be in the room to hear audio of a live debate or conversation, bringing them
closer to the process. Using radio as the primary medium, FDR famously held
his regular “fireside chats” with the American public. Many scholars credit the
success of the New Deal, one of the most ambitious presidential agendas in
history, with these fireside chats.52

Again, you can begin to see the point that Horkheimer and Adorno were making about

the power of the hypodermic needle theory. FDR’s “fireside chats” were not

discussions at all; instead, they were monologues, disguised as discourse, designed to

explain a political agenda to a listening public.53 The success of the New Deal clearly

rested upon the Roosevelt Administration’s appreciation of radio’s potency.

The history of media research begins in the period leading up to World War II,
when radio was beginning to make an impact on the cultural landscape on both
sides of the Atlantic. Two important early works were Cantril and Allport’s

The Psychology of Radio (1935), a look at the possible psychological effects of
mass communication, and Cantril, Gaudet, and Herzog’s The Invasion From
Mars: A Study in the Psychology of Panic (1940), which examined the
implications of the outbreak of hysteria induced when a dramatized version of
H.G. Wells’ War of the Worlds, complete with realistic news bulletins, was
broadcast on American radio one evening in 1938. These authors were
disturbed by the prospect of radio as a potential vehicle for propaganda during
a politically unstable period. A popular metaphor in this period for the
psychological effects of media was the “hypodermic needle” (Lasswell, 1935),
which likened the effects of propaganda to an “injection” of ideological bias
that contaminated radio listeners, rather in the manner of “brain-washing,” an
expression still widely used today to refer to the apparently hypnotic effect of
media.54

51. Schill, Dan, Rita Kirk, and Amy E. Jasperson. Political Communication in Real Time: Theoretical and Applied Research Approaches. New York, NY: Routledge, 2017. 72.
52. Ibid., 72.
53. Ibid., 72.

Marshall McLuhan compared Hitler to FDR and Benito Mussolini, calling them “tribal

emperors on a scale theretofore unknown in the world, because they all mastered their

media.”55 Later, McLuhan takes it a step further saying that if the dominant medium

during Hitler’s age were television and not radio, the broadcast images of the

dictator’s speeches would have made him look cartoonish and unbelievable.

Hitler’s screaming tirades are perfect for hot radio, but the Hitler phenomenon
would have looked ridiculous on television. In this respect, heat and cold seem
to be properties inscribed in the nature of specific media themselves.56

Clearly, given the influences the Nazis had over the German people, radio proved that

it had the ability to inject poison into the minds of an anxious listening audience.

British sociologist Stuart Hall believed that readers would shop for meaning

and would embrace media in one of three ways, which he described as “preferred,”

“negotiated” and “oppositional.”

The preferred reading is the one that the makers of texts have built into them and which they hope the audiences will take from them. The negotiated reading is the one that results when audiences partly agree with or respond to the meaning built into texts. An oppositional reading is one that is in opposition to what the maker of the text intended. A simple way to understand the difference between the three types is to consider a comedian who has just told a joke onstage. If the audience laughs wholeheartedly, then the joke has produced the preferred reading. If only some of the audience laughs wholeheartedly, while others chuckle or sneer, then the joke has brought about a negotiated reading. Finally, if the audience reacts negatively to the joke, with resentment, then it has produced an oppositional reading.57

54. Giles, David. Media Psychology. New York: Routledge, 2009. 14.
55. McLuhan, Eric, Marshall McLuhan, and Frank Zingrone. Essential McLuhan. London: Routledge, 2006. 247.
56. Copjec, Joan, and Joel Goldbach. Umbr(a): A Journal of the Unconscious 2012: Technology. Buffalo: State University of New York at Buffalo, 2012.

Applying Hall’s ideas to the use of radio in the 1930s, one can see that both

Hitler and Roosevelt were gaining ground with their audiences, winning a “preferred”

standing for their respective ideas.

Pocket Radio

New technologies helped radio to evolve and grow in influence. Over the next

several decades, radio moved out of our living rooms and into our cars before finally

arriving in the form of portable transistor radios. The invention of the transistor radio

transformed youth culture, further personalizing the radio experience. Now, it was

possible to bring music anywhere: school, the park, the beach, a party… you get the

idea.

In 1957, Sony introduced the TR-63, a small pocket-sized transistor radio for

$39.95. It was small, it was portable, it was cheap, it was fashionable, it was stylish, it

was easy to operate, and it worked with an existing media (radio broadcast) network.

Whether they knew it or not, the engineers at the fledgling Sony Corporation redefined

consumer electronics with this device, setting a standard for public consumption that remains today.58 Given all of its virtues, the radio became overwhelmingly popular in the United States and around the world.

57. Danesi, Marcel. Popular Culture: Introductory Perspectives. Lanham: Rowman & Littlefield, 2015. 49-50.

By 1965, radio had evolved to a point where it was as universal as publishing

and other media. In fact, radio—given the ease of its consumption—may have been

more popular than print. The music industry, during this same period, blossomed and

exploded, transforming the pop culture scene forever.

Clearly, during the sixties, the portable radio found a place in nearly all
American homes. As the price of portables continued to fall—a shirt-pocket set
could be bought for one or two hours wages—people who previously knew
nothing of portables began to buy them and soon found them indispensable.
Portables were no longer confined to the trend-setters and immediate trend
followers, or to young adults and teenagers. Instead, Americans of all ages, in
all walks of life, and in all communities could be found enjoying various
activities with their portable radio companion, much as Zenith had foreseen
decades earlier.59

Looking to the theory, Marshall McLuhan defines radio as a ‘hot’ medium or

one that takes very little training to appreciate. In fact, given the ease with which we

consume it, radio may be the hottest of all the emerging electronic media. Now,

McLuhan explains that radio is both tribal and intimate and the portable radio only

further defined that intimacy, especially when earplugs were added.

Small transistor portables, equipped with ear plugs, elegantly solved the rock
and roll problem. By bestowing these radios as gifts, parents could wall off the
offending music, insulating themselves from its erotic drives. Youth, of course,
was unencumbered by the negative meaning that tiny radios once had for their
parents and grandparents. Teenagers soon discovered that transistor portables,
especially the shirt-pocket variety, gave them and their music unprecedented
mobility. Eventually teenagers came to believe that they were screening off the
rest of the world and crafting their own.60
58. Headrick, Daniel R. Technology: A World History. La Vergne, TN: Oxford University Press, 2010. 130-149.
59. Schiffer, Michael B. The Portable Radio in American Life. Tucson: University of Arizona Press, 1991. 223.
60. Ibid., 181.

The new devices moved the broadcasts out of the public sphere and, because of the ear

plugs, into the heads of listeners. Nothing could be more intimate or personal. In many

ways, radio consumption mirrored book consumption, given the placement of the

media inside the quietness of the human mind; however, there was one major

difference: radio’s temporal aspects, given the commonness and timing of the

broadcasts, brought the listening public together into a uniform place. Radio signals

became media markets and these markets were quickly defined by the sounds emitted

from the radio stations in those markets.

Radio is provided with its cloak of invisibility, like any other medium. It
comes to us ostensibly with person-to-person directness that is private and
intimate, while in more urgent fact, it is really a subliminal echo chamber of
magical power to touch remote and forgotten chords. All technological
extensions of ourselves must be numb and subliminal, else we could not
endure the leverage exerted upon us by such extension. Even more than
telephone or telegraph, radio is that extension of the central nervous system
that is matched only by human speech itself.61

61. McLuhan, Marshall. Understanding Media: The Extensions of Man. 398-410.

And while radio certainly made its own way into the world, radio also blazed the trail

for television. In fact, many of the stations offering radio programming simply amended their signals, adding video content, and then broadcast that content over the same media matrix. So, what radio did to the United States during the first half of the

20th century, television would repeat during the second half, changing Western culture

forever.

Television Takes Over

In July 1969, I was four years old and living in Valdosta, Georgia with my

parents in a small brick ranch-style house not very distant from Moody Air Force Base, where my father was stationed. At four, most human animals are at the early edge of memory, and one of my first memories occurred in the kitchen of this small home. I remember my mother picking me up and carrying me towards a corner of the kitchen where she sat

me down in front of a small, beige, plastic black-and-white television set, which was

flickering a grainy, gray signal.

“Watch this,” she said as she sat me down.

Through the analog haze of the ancient television signal, I saw a gray figure

moving against a black and white field and could hear the man—Astronaut Neil

Armstrong—speaking as he held onto the leg of the ladder leading away from the

Lunar Module. As a four-year-old, I barely understood the meaning of what I was

looking at and I’m sure my mother attempted to explain that a man was walking on the

moon, but given my age and my intellectual maturity, the power of the event didn’t

register with me for some years.

Reflecting upon it now—phew!—television has an awesome power.

Consider the impact of that moment and the fact that millions across the planet had

access to that television signal. This was a shared achievement, a moment equal to

when Columbus reached the Americas, but with a twist: The world was watching (or

listening) as the explorer stepped out onto the surface of this undiscovered terrain. One

of those in the audience was a 62-year-old semi-retired inventor living in Salt Lake

City, Utah.

Farnsworth’s Gift

Television came to us, delayed and unsteady, during a 20-year period. A young

inventor named Philo Farnsworth labored away in a small, private laboratory in San

Francisco, experimenting with broadcasting tools and reception devices; he was often

uncertain how it would be used, but he persevered, earning patents as he moved

through the process. It was on September 7, 1927, that he successfully

transmitted an image—a picture of a triangle—across the width of his laboratory. The

original image was fuzzy and out of focus, but still, he proved that it was possible to

translate light into an electronic signal and transmit it to a receiver that would

reconstruct the image on the other end. It would be at least another decade before a

commercial product was made available to the American public… but the Great

Depression and World War II stalled television’s commercial success until the late

1940s.62

Between 1947 and 1950, television took off. It started with the 1947 World

Series matchup between the Brooklyn Dodgers and the New York Yankees. Across

the Northeast and the Midwest, many bar owners realized that if they brought

televisions into their pubs, the fans would forgo the stadiums, settling instead for a

place where they could watch the game and drink beer.63

A month later, the New York Yankees and Brooklyn Dodgers played an epic
World Series that ran the full seven games. It attracted an estimated three
million viewers from New York to Washington, DC, mostly in bars, to the
sixth game. Ford Motor Company and Gillette Razor Company shared the sponsorship for $65,000 after the baseball commissioner rejected Ford’s offer to buy the next ten years for a million dollars.64

62. Schwartz, Evan I. The Last Lone Inventor: A Tale of Genius, Deceit, and the Birth of Television. New York: Perennial, 2003.
63. Magoun, Alexander B. Television: The Life Story of a Technology. Baltimore: Johns Hopkins Univ. Press, 2009. 95-100.

Television also found its first major hit when Milton Berle premiered on “The Texaco

Star Theatre,” an hour-long variety show that featured Vaudeville-like routines,

singing acts and so forth. The show aired from 1948 to 1956 and dominated Tuesday

nights, airing on NBC. It also became so popular that Americans ran out to purchase

black-and-white television sets just to watch this and other programming. In reaction,

CBS launched the Ed Sullivan Show, which also became an American staple. By the

end of Berle’s run, the number of television sets in the United States swelled from just

under 500,000 to well over three million.65

For the next 50 years, television evolved into the dominant medium in the

United States. At TV’s peak, American viewers watched an average of eight hours of

daily television. This came as networking systems and programming improved. Terrestrial television systems were replaced by cable and satellite systems, which also offered

improved and ultimately digitized signals; home recording devices also made it

possible for viewers to record television programming, which they’d watch later at

their convenience. But the power of television is slowly fading; older viewers are

dying and younger viewers are finding other electronic distractions. Those distractions

include video games, social media, streaming online media and so forth.66

64. Ibid., 96.
65. Ibid., 98.
66. MarketingCharts Staff. "The State of Traditional TV: Updated With Q3 2016 Data." MarketingCharts. January 11, 2017. Accessed February 25, 2017. http://www.marketingcharts.com/television/are-young-people-watching-less-tv-24817/.

Television is a fascinating medium. Marshall McLuhan described it as a ‘cool’

medium because of the resolution troubles that plagued the first few decades, but

innovations in digital technology and the advent of high-definition resolution have

repaired that problem. For me, television is a ‘hot’ medium, much like radio, but it

does take some training to understand. McLuhan says that television is

similar to reading because the human eye must learn how to scan content first as a

reader and then later as a television viewer. He also suggests that television, unlike

film, transmits its signal directly onto the retina—the receiver for the optic nerve and

the pathway directly into the human mind—of the viewer.67

The mode of the TV image has nothing in common with film or photo, except
that it offers also a nonverbal gestalt or posture of forms. With TV, the viewer
is the screen. He is bombarded with light impulses that James Joyce called the
“Charge of the Light Brigade” that imbues his “soulskin with sobconscious
inklings.”68

Communication theorist Neil Postman takes an even harsher view of

television. He warns us that Western Civilization has been so overwhelmed by the

television and its presence that we are incapable of knowing or seeing a world absent

its influence. In fact, in his rhetorical investigations of television, his own arguments

evolve from the idea that “television is bad” towards the idea that “the influences

of television are inescapable.”

The problem, in any case, does not reside in what people watch. The problem
is in that we watch. The solution must be found in how we watch. For I believe
it may fairly be said that we have yet to learn what television is. And the
reason is that there has been no worthwhile discussion, let alone widespread
public understanding, of what information is and how it gives direction to a
culture.69

67. McLuhan, Marshall. Understanding Media: The Extensions of Man. 37-50.
68. Ibid., 418.

Ultimately, Postman suggests that we’d be better served if television got worse, not

better. He argues that shows like The A-Team and Cheers are no threat to the public’s

understanding of things; instead, he suggests that the real danger comes when

television takes on serious subjects including politics, news, business and the law.70

Postman believed that because television was so damaging to the public’s

understanding of important matters, TV programming should stick to lighter topics

like game shows and situation comedies; he believed that when it took up more

serious matters like drug addiction or nuclear war, TV had the unfortunate ability to

make these issues appear cartoonish.

Postman wasn’t alone. In 1961, during the National Association of

Broadcasters Convention, FCC Chairman Newton Minow described the current state

of television as a “vast wasteland”:

When television is good, nothing—not the theater, not the magazines or


newspapers—nothing is better.

But when television is bad, nothing is worse. I invite each of you to sit down in
front of your own television set when your station goes on the air and stay
there, for a day, without a book, without a magazine, without a newspaper,
without a profit and loss sheet or a rating book to distract you. Keep your eyes
glued to that set until the station signs off. I can assure you that what you will
observe is a vast wasteland.

You will see a procession of game shows, formula comedies about totally
unbelievable families, blood and thunder, mayhem, violence, sadism, murder,
western bad men, western good men, private eyes, gangsters, more violence,
and cartoons. And endlessly commercials—many screaming, cajoling, and
offending. And most of all, boredom. True, you’ll see a few things you will enjoy. But they will be very, very few. And if you think I exaggerate, I only ask you to try it.71

69. Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Viking, 1985. 160.
70. Ibid., 160.

Brave stuff, especially when you’re addressing a room teeming with

broadcasting professionals. Unlike Postman, however, Minow is suggesting that TV

should be more serious. The difference in their thinking amounts to an understanding

of the medium: Postman believes that TV has the effect of a funhouse mirror, warping

everything it touches, while Minow still has hope, believing that producers are just too

lazy to take up serious issues.

So, how did the TV production community respond to Minow’s comments?

Let me clue you in on an insider’s joke. Next time you look at Gilligan’s Island, a

1960s situation comedy about a group of Americans shipwrecked on a Pacific island,

notice the name of the ship. In the opening credits, the video tells the story, introduces

the characters and shows the wreckage of the boat. When producers introduce the

“Skipper,” notice the name written on the ship’s wheel he’s leaning on: It reads the

“S.S. Minnow,” an obvious wink from the show’s producers at the FCC chairman; and

judging from the Skipper’s reaction, it looks like actor Alan Hale Jr. was in on the

joke too.72

But the age of television wasn’t necessarily all bad.

In his book, Everything Bad is Good for You, author Steven Johnson argues

that decades of watching television made the viewing audience more sophisticated

when watching video narratives. He writes that the breakout program that changed everything was Hill Street Blues, a gritty cop drama that premiered in 1981, which followed on the heels of shows like Starsky and Hutch and Dragnet.73

71. Eidenmuller, Michael E. "American Rhetoric: Newton Minow -- Address to the National Association of Broadcasters (Television and the Public Interest)." Accessed March 14, 2017. http://www.americanrhetoric.com/speeches/newtonminow.htm.
72. Edgerton, Gary R. The Columbia History of American Television. New York, NY: Columbia Univ. Press, 2009.

With the earlier shows, the audience only had to follow the structural narrative

of the two lead characters as they progressed through a natural timeline of events. The

story had a beginning, a middle, and a conclusion; whatever crisis that had been

started in the opening sequences would be resolved by the end of the program. A week

later, the producers would repeat the formula. By comparison, Hill Street Blues was a

complex fabric of storylines that often featured ten or more characters moving through

a series of incidences that went largely unresolved for episodes at a time.74 Johnson

offers this explanation:

A Hill Street Blues episode complicates the picture in a number of profound


ways. The narrative weaves together a collection of distinct strands—
sometimes as many as ten, though at least half of the threads involved only a
few quick scenes scattered through the episode. The number of primary
characters—and not just bit parts—swells dramatically.75

The end result, Johnson says, is the creation of an audience that can handle

sophisticated and complex storylines. We have certainly come a long way from The

Honeymooners. Looking forward, Johnson says that Hill Street Blues prepared the

viewing audience for much more complicated super-plot structures, which were

realized with programs including The Sopranos, 24 and Game of Thrones. He calls

this effect the “Sleeper Curve,” which he defines as an evolution of complexity in

mass media:

So this is the landscape of the Sleeper Curve. Games that force us to probe and telescope. Television shows that require the mind to fill in the blanks, or exercise its emotional intelligence. Software that makes us sit forward, not lean back. But if the long-term trend in pop culture is toward increased complexity, is there any evidence that our brains are reflecting that change? If mass media is supplying an increasingly rigorous mental workout, is there any empirical data that shows our cognitive muscles growing in response?

In a word: yes.76

73. Johnson, Steven. Everything Bad Is Good for You: How Popular Culture Is Making Us Smarter. London: Penguin Books, 2006. 66-72.
74. Ibid., 66-72.
75. Ibid., 66-72.

This may seem like a small thing, but years of watching television have been

preparing audiences for something more complex and more sophisticated. Basically,

generations of television watching have trained us to understand more complex forms

of story.

Over the last half century of television's dominance over mass culture,
programming on TV has steadily increased the demands it places on precisely
these mental faculties. The nature of the medium is such that television will
never improve its viewers' skills at translating letters into meaning, and it may
not activate the imagination in the same way that a purely textual form does.
But for all the other modes of mental exercise associated with reading,
television is growing increasingly rigorous. And the pace is accelerating-
thanks to changes in the economics of the television business, and to changes
in the technology we rely on to watch.77

Johnson goes on to explain that Newton Minow’s troubles with television still exist;

there are still many shows that have very little substance or social value, but Johnson

also believes that the audience’s intellectual development has allowed for much more

complicated television narratives to emerge and that many television dramas have

reached a level of narrative complexity equal to those found in books. On this point, it

bears repeating that programming including Game of Thrones, The Newsroom, and

True Detective have offered very complex narrative formulas.

76. Ibid., 136.
77. Ibid., 172.

All that aside, it’s worth noting that although many historians credit Philo

Farnsworth as the scientist who invented television—his estate holds many of the

initial patents for television—he was ultimately cheated out of his invention by

industrialists with more money and more political clout: after years of violating his

patents, “RCA finally settled a decade-long court battle by agreeing to pay Farnsworth

a $1 million licensing fee.”78 And, like Neil Postman and Newton Minow, Farnsworth

was not impressed with the programming that appeared on television during his

lifetime, but he might have changed his mind had he seen what came later.

Farnsworth went on to work on a variety of other technologies—including


radar, incubators, medical imaging devices—and held more than three hundred
U.S. and foreign patents by the time he died. Asked, in a 1957 interview, about
his current projects, he described flat-screen and high-definition television
systems, which would become standard a half-century later. Farnsworth
remained famously unimpressed by the quality of the programs transmitted
using his invention, changing his mind only after watching a live broadcast of
the first moonwalk in 1969. “This,” he said to his wife at the time, “has made it
all worthwhile.”79

Narrative in Oral Media

Like print media, the oral media went through an evolution that began in the chronicle form and moved towards the more complex narrative form. This evolution followed the audience’s growing ability to track, or follow along with, longer and more intricate story formulas.

78. Schwartz, Evan I. The Last Lone Inventor: A Tale of Genius, Deceit, and the Birth of Television. New York: Perennial, 2003.
79. Bowdoin, Van Riper A. A Biographical Encyclopedia of Scientists and Inventors in American Film and TV since 1930. Lanham, MD: Scarecrow Press, 2011. 254-255.

With photography, for example, one photograph offers an idea of an event, but

a series of three photographs taken under the same conditions of the same subject

matter creates a narrative. As with a writer’s story design, the photographer has the

ability to “emplot” or package a narrative, which he perceives as the essence of the

event. This “emplotment” operates on at least two levels: The first is at the micro level

or when the photographer composes the single photograph; the framing decisions and

uses of light and association of objects within the photograph define the

photographer’s intent. On the second level, the story narrative is defined on the macro

level or by the message imparted by a series of photographs; the first photo may define

the beginning of an action, the second refines changes in perspective, and the third

defines still more movement. Taken together, there is a story, a visual narrative.

In film, narrative form is much more complex. Film narratives are also very

similar in design to written forms. As Virginia Woolf points out, film draws much of

its design from its literal counterpart. In fact, the relationship between film and print

has been long established. Often, popular novels are later adapted for the screen and,

in the process of that adaptation, the narrative formula is also transcribed. Further, like

the written forms, there are also fiction and nonfiction aspects of film. Documentary

films, specifically, have very strong narrative formulas rooted in the literary tradition.

Returning to the “Wall Street Journal Style,” which is a formula that

takes a national issue and assigns a “face” as the subject, this form can be found over

and over again in documentaries. Take for example The Civil War series created by

Ken Burns in 1990. This nine-part documentary series covers the U.S. Civil War

chronologically, but it emplots each phase of the war using aspects of the “Wall Street

Journal Style.” With each chapter, Burns often used individual soldier stories to impart

the larger impacts of the war; to do this, Burns would show images of the soldier, his

biography, and passages from the soldier’s letters home to convey the overarching

message. As with the literary form, there is, of course, a beginning, a middle and an

end and the “face” of each chapter and his story is used to place emotional emphasis

on the events as they unfolded.

Ever since the first moving images were recorded, filmmakers have been
aware of the power of their medium to effect historical meaning; the historical
documentary became one of the first identifiable film genres. The popular
model of this form in America today, most clearly exemplified by Ken Burns’
“The Civil War,” has the familiar Western narrative: each program has a
distinct arc, a beginning, middle and an end. The rhetorical structure—also
familiar and now almost universally expected—invariably involves a crisis
situation, a climax, and a clear resolution. Generally there is one prevailing
narrative, one interpretation of the historical facts presented. Overall the tone
set is one of progress. Usually, the narrative is delivered to the audience by an
unseen, yet obviously white, male narrator. So popular is this model that
networks and cable channels, including the public television networks, rarely
show programs that diverge from it—thus the form has become codified.80

Ken Burns’ story form is definitive. He went on to repeat the form he used in The Civil War series in similar projects about jazz music, the U.S. national parks, and American baseball. As a result, the narrative form for documentary film (at least in the United States) has been well established.

The same is true with regard to recorded music, where narrative forms went through several variations. With opera, for example, the narrative form of the stage performance was replicated and placed on the recorded disc. With pop music, the initial narrative forms were likely rooted in poetry and merged with church gospel forms and so forth.

80. Mateas, Michael, and Phoebe Sengers. Narrative Intelligence. Amsterdam: J. Benjamins Pub., 2003. 157.
With symphony music, theorist Byron Almén suggests that musical narrative has two basic forms: the first is theme, the second is narrative. He draws his inspiration from Hayden White, and one can see the similarities in design: Almén’s use of the word “theme” echoes White’s use of the word “chronicle.”

[Northrop] Frye’s book, an acknowledged masterpiece, is a remarkable


taxonomic rewriting of the principles of literary criticism; its most influential
essay, “Archetypal Criticism,” introduces his four mythoi—romance, tragedy,
irony, and comedy—that represent fundamental, pregeneric patterns of
narrative motion. This formulation has influenced countless scholars in many
fields, most notably Hayden White, who has observed (1973) the tendency of
historians to consciously and unconsciously emplot historical events according
to temporal narrative schema. I had been acquainted with these mythoi since
high school, but my first reading of the essay in 1992 convinced me that they
are eminently applicable to music.81

And while the ideas diverge slightly—“theme” addresses the whole of the symphonic

piece, which he describes as being more like a season (fall, winter, spring, summer)

than a story—Almén suggests that musical narrative is also packaged.

In the evolution of pop music, we saw that progression accelerate during the 1960s with the concept album movement inspired by the release of the Beatles’ Sgt. Pepper’s Lonely Hearts Club Band. In the 1970s, a series of more sophisticated records came along until finally, in 1979, Pink Floyd released the rock opera The Wall. With this album, Pink Floyd tells a complete story of the rise and fall of a semi-fictional rock ‘n’ roll artist named Pink. There is drama, there is betrayal, there is the anxiety of a man trapped within himself fighting to find his way out from behind the wall of his own making.

81. Almén, Byron. A Theory of Musical Narrative. Bloomington: Indiana University Press, 2017. ix.
The Wall is a masterpiece of the pop culture narrative story form. It also came at a point when interest in rock operas was fading, and it may stand as the last of a genre that defined the high-water mark for the aesthetic of pop music.

Radio and television narratives developed along connected paths. What started out on the radio platform migrated to television in the 1950s and became something wholly more complex. One could argue that the methods for television narratives were born in the radio age and were later adapted and enhanced. Radio peaked in the 1930s and 1940s during the era of narrative drama, which drew on many literary sources, including James Fenimore Cooper’s stories.

An example of early dramatic adaptation from the literature of America comes


from 1932. It is a production titled The Leatherstocking Tales. It features two
1932 radio dramas, The Last of the Mohicans and the Deerslayer, which we
examine here. Program producers delivered 13 short episodes with protagonist
Nathaniel “Natty” Bumppo, or “The Deerslayer,” and his Native American
friends, Hawkeye and the Pathfinder. The play is set during the 1740-1745
period in “the settled portions of the colony of New York,” which were
confined to a narrow cross section of the Hudson River. It was a time and place
where conflict between the expansionist colonists, native populations, and the
French was underway. The program, of course, is based on the literary works
of nineteenth-century author James Fenimore Cooper. In the words of the
program’s director, the radio play presents dramatizations faithful to the
original Cooper writings, and that is generally true. Plot and characterization are
delivered via an announcer and through character dialogue. Orchestral music
helps to set the mood. Minimal sound patterns such as drumming at the outset
do help to establish the Native American context, but the scantiness of the
acoustical environment limits the dramatic engagement of the early production.
Bumppo is a part of the Delaware tribe, at least by upbringing, and true to
Cooper’s writings, the narrative suggests he and the Delawares are peacefully
oriented yet strong, thoughtful, and of excellent character. The sudden sound
of a bird punctuates the latter stages of the play. The Deerslayer realizes it is
actually a Huron tribal signal mimicking a bird call. It indicates he and his
companion are surrounded and under observation. They are in grave danger of
attack and need to make a quick escape. It sets up the transition to the next
night’s episode.82

82. Pavlik, John V. Masterful Stories: Lessons from Golden Age Radio. New York, NY: Routledge, 2017.
Again, the literary form is transplanted inside the medium of radio but before it could

mature and develop, it was moved over to television. In television, the story form

started out as chronicle but evolved into more sophisticated narrative forms. And

while television story narrative is primarily fictional, nonfiction story forms also

developed.

In 1968, CBS launched 60 Minutes, which became the nation’s first television news magazine and follows a strict journalistic formula. The weekly broadcast features three or so 12-minute segments about contemporary issues, and its producers use two primary story forms: the first is a feature profile about a specific person in the news; the second is the “Wall Street Journal Style,” but with a twist: often, producers inject the news reporter into the story, making the correspondent the “face.”

It is not much exaggeration to say that the star of each story was its
correspondent. That might not have been strictly true in a personality profile,
but even then, the stars might have been both Lena Horne and Ed Bradley, for
example. Mike Wallace has defended 60 Minutes against the charge that the
program is more an exercise in show business than in journalism. He regards
such comments as “elitist” and asserts that “we are, after all, a magazine
broadcast that is committed to a multi-subject format, and we have never
pretended to be anything else.”83

Regardless, there is a narrative form here, and with complexity comes the opportunity for aesthetic development. Television, like music and film and literature, has the capacity for artful long-form creative nonfiction story forms. The trouble now is how to merge these different art forms into a pleasing and engaging multimedia production.

83. Barkin, Steve Michael. American Television News: The Media Marketplace and the Public Interest. Armonk, NY: M.E. Sharpe, 2003. 55.
Summary of Oral Media

Despite the 500-year dominance of the written word, oral media made a tremendous comeback in a movement that culminated—probably—in 1969, when Neil Armstrong stepped out of the Lunar Module onto the surface of the moon. When he uttered the statement “That’s one small step for man; one giant leap for mankind,” a global audience of over 450 million radio listeners was tuned in.84 But the size of the audience was beside the point; this event marked mankind’s crowning achievement… and, to a lesser degree, demonstrated the power and potency of a radio network that presented this event—nearly live—as roughly one-ninth of the world’s citizens listened, transforming us into a global “tribe” gathered around the same shared experience. Radio (and soon television) had become a very potent medium indeed.

In fact, during these last 200 years, oral culture climbed from the ashes, repairing what was lost when the Incunabula period faded and the Protestant Reformation ignited a century-long burning of illuminated manuscripts. But these new oral media were different: we were now appreciating oral media in a post-literal world… literacy had transformed our understanding of oral communication, and a new form, a “secondary orality,” had risen in place of the old.

Much in the same way the manuscript and the printing press preserved writing, the camera captured light, the phonograph captured sound, the film camera captured motion… and radio and television systems found ways to scatter these ‘hot’ media across the globe. This period marked a catching up of sorts: a confluence of various

84. Heil, Alan L. Voice of America: A History. New York: Columbia University Press, 2003.
media, which matured, forming their own independent relationships with their respective audiences. On this point, the narrative structures of the film and the concept rock album were formed, but these forms were ultimately defined by the structure of literary storytelling. And yet, writing as a storytelling form remained fairly isolated from the ‘hot’ media forms that echoed its literal structure.

The written word, or the experience of the written word, for the most part

remained separated from these media (except for photography). Film and television

and radio operated in separate arenas, forming their own protocols, transmitted over

different devices, alien to each other as well as to the printed book and magazine…

and newspaper. For a time in the United States, it looked like television was going to

completely replace the written word but a new technology—the Internet—arrived and

television has been brought to heel as audiences are finally beginning to look away.

Since 1996, the Internet has been in an experimental phase. As users, we’ve gone through the various incarnations of Web 1.0 and Web 2.0 and Web 3.0… and while the use of the tool has evolved, standards for publishing have remained hazy and unsettled. Newspapers and television stations have dabbled with the Internet, but there is no clear distinction or design here; everything on the Internet is tenuous. Radio and film and television and photography and text all dwell here, but nothing appears definitive. Instead, these various media exist in independent spheres, housed in different orbits awaiting some synergistic fusion. Should this happen, possibly, we will realize a reawakening of the “Illuminated” manuscript in a more modern form. For now, we dwell inside an incubation period, an electronic evolution: a Digital Incunabula.

Chapter 3

The Digital Revolution

We are currently in the middle of a digital revolution. The media world around us is being transformed from an analog system into a digital one. The process is called digitalization and it’s been underway for some decades, but now, with the advent of personal computers and the Internet, digitalization has accelerated exponentially. It seems like a harmless thing: the idea that documents, film, images, objects and sound are being converted into strings of digital coding, which can then be loaded into a computer. To do this, computer scientists created something called binary code, which—in its simplest explanation—means that elements of the item are converted into binary symbols represented by the numbers 0 and 1; given the complexity of the original medium, the number of binary digits, or bits (grouped eight at a time into bytes), needed to replicate the medium can amass into the millions, which are then assembled to recreate that medium on a computer platform.85 For example, the phrase “Mary had a little lamb” can be captured as an analog audio file and then translated into a digital format—a popular one is called MP3—which is then stored in an electronic database as digital code, or as an accumulation of bytes of data, which can be shifted around from database to database globally by waves of electricity.
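To make the idea concrete, here is a minimal sketch in Python of the simplest case of digitization, encoding that same phrase as text rather than as sound; an audio format like MP3 layers sampling, quantization and compression on top of this basic principle:

    # A minimal sketch of digitization: turning the phrase
    # "Mary had a little lamb" into the 0s and 1s a computer stores.
    # (Text encoding is the simplest case; audio formats such as MP3
    # add sampling, quantization and compression on top of this.)

    phrase = "Mary had a little lamb"

    # Under ASCII/UTF-8, each character becomes one byte (8 binary digits).
    bits = " ".join(format(byte, "08b") for byte in phrase.encode("utf-8"))

    print(bits)                                  # 01001101 01100001 01110010 ...
    print(len(phrase.encode("utf-8")), "bytes")  # 22 bytes for 22 characters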

Since the 1940s, with the invention of the transistor, the process of

digitalization has been underway; and following the inventions of the silicon chip, the

personal computer and the Internet, we have been digitizing everything aggressively.86

85. Riordan, Michael, and Lillian Hoddeson. Crystal Fire: Invention of the Transistor and the Birth of the Information Age. W.W. Norton, 1999. 195-224.
86. Ibid., 195-224.
Today, with half of the world’s population connected to the Internet in some way,87 we are creating volumes of digital information on a massive scale. Theorist Bob Merritt sums it up this way, writing that “…EMC and other companies have estimated that the world created approximately 1.8 zettabytes (or 10²¹ bytes) of data in 2011, and that by 2020 the world will generate 50 times that amount of information.”88

Clearly, everything is going digital… but what is that doing to society?

Reflecting upon the ideas of Walter Ong, Marshall McLuhan and Terry

Eagleton, I’d like to suggest that we are at the beginning of a great paradigm shift. So

far, I’ve written about the Primary Orality, Literacy and the Secondary Orality… this

move towards the digital is a new epoch in media communication. Just as the first

three periods altered human cognitive and communication practice, this digital age has

spawned a new transition in communication, which I submit is happening in two

phases: the first is called the “tertiary” or “digital orality” and the second is a more

complicated age of “infusive” communication.

In his book, Understanding New Media: Extending Marshall McLuhan,

theorist Robert Logan says:

Mimetic or gestural orality is nonverbal and unspoken. Primary orality is


spoken, in which the semantics and syntax are characteristic of oral culture.
Secondary orality is also spoken, but the semantics and syntax are
characteristic of literate culture. And finally, tertiary or digital orality is
written, in which the semantics and syntax are characteristic of digital
culture.89

87. "World Internet Users Statistics and 2017 World Population Stats." Internet World Stats. Accessed May 24, 2017. http://www.internetworldstats.com/stats.htm.
88. Merritt, Bob. "The Digital Revolution." Synthesis Lectures on Emerging Engineering Technologies 2, no. 4 (2016): 1-109. doi:10.2200/s00697ed1v01y201601eet005. 16.
89. Logan, Robert K. Understanding New Media: Extending Marshall McLuhan. New York: Peter Lang, 2016. 104.
Logan takes us to the threshold of the new age but I don’t think he goes far enough.

With the advancement of augmented reality and virtual reality technologies and the

very real prospect of Artificial Intelligence, we are really at the dawn of a new digital

consciousness and this process of change, this Digital Incunabula, will alter the way

we share information, exchange ideas and tell stories in new and exciting ways.

Before we get to that, we should consider the history.

The Computer + the Internet

Given the complexity and the sophistication of the modern personal computer, it’s hard to imagine that computing has been around, in some form, for centuries. At its most basic, the purpose of the computer was to conduct repetitive tasks, replacing the need for humans to do this work; as the computer evolved, its ability to do these tasks escalated exponentially, which allowed for greater sophistication of the work. With the advent of software programming, we had technological languages that trained the computer’s working patterns. Software was the machine’s intellect, and as software engineering evolved, so did the computer’s ability to do more. The evolution of thinking machines began long before we domesticated electricity.

In 1801, French industrialist Joseph-Marie Jacquard developed a loom that

used punch cards to craft elaborate patterns in textiles. He did this by “positioning a

series of holes punched into cardboard cards in such a way that each hole

corresponded to a particular thread to be woven,” wrote historian Daniel Headrick.90

90. Headrick, Daniel R. Technology: A World History. La Vergne, TN: Oxford University Press, 2010. 95.
In his book The Code Economy, author Philip E. Auerswald wrote about Jacquard’s

machine with greater detail:

Jacquard’s innovation was to combine the automatic function of hardwired


drawlooms with a mechanism for controlling the loom that could be
reprogrammed with relative ease using wooden cards perforated with precisely
punched holes—a technology that came to be known as punched cards. The
principle was simple. At every stage in the weaving process, an array of rods
with hooks on their tips would descend to the punch card specified for that
step. Where a hole was present, the rod passed through the card and lifted the
specified thread, which allowed the weft to slide through perpendicularly.
Where no hole was present, the rod encountered resistance and did not descend
further.91

Although the system was steam powered and lacked any sophisticated digital

programming, the punch cards acted like a computer program, guiding the machine

through a series of tasks independent of human interaction. This is the soul of

computing. Over the succeeding years, more sophisticated computing machines were

crafted.
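The principle is simple enough to simulate in a few lines of Python; the card patterns below are invented purely for illustration:

    # A toy simulation of Jacquard's punch-card principle: each card is a
    # row of holes, and a hole lets a rod pass through and lift its thread.
    # The card layouts here are invented for illustration.

    cards = [
        [1, 0, 1, 0, 1, 0, 1, 0],  # card for weaving step 1
        [0, 1, 0, 1, 0, 1, 0, 1],  # card for weaving step 2
        [1, 1, 0, 0, 1, 1, 0, 0],  # card for weaving step 3
    ]

    for step, card in enumerate(cards, start=1):
        lifted = [i for i, hole in enumerate(card) if hole]  # rods that pass through
        print(f"Step {step}: lift threads {lifted}")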

In 1822, English inventor Charles Babbage (who is known as the “father of

computing”) began experimenting with steam-powered machines that could calculate

a series of math problems systematically. One of his inventions—called the Analytical

Engine—employed Jacquard’s punch card system, which allowed the machine to be

programmed for extended periods of time. Babbage experimented with the device for

the next 50 years.92

Babbage’s vision of the Analytical Engine was inspired by none other than
French digital computing pioneer Joseph Marie Jacquard. Babbage eventually
removed the Difference Engine from his hallway and replaced it with a portrait
of Jacquard, then challenged his guests to guess the medium used to create the
portrait. Most guests—including the Duke of Wellington—guessed that it was

91. Auerswald, Philip E. The Code Economy: A Forty-thousand-year History. New York, NY: Oxford University Press, 2017.
92. Ceruzzi, Paul E. Computing: A Concise History. Cambridge (Mass.): MIT Press, 2012. 1-22.
an engraving. Babbage was delighted by such incorrect guesses, for the portrait
was in fact a finely woven tapestry produced using an automated Jacquard
loom. For Babbage, the ability for a computer to produce a product
indistinguishable from that of a human clearly suggested the potential power of
the Analytical Engine he envisioned.93

Babbage’s success here was his machine’s ability to duplicate human work in a way that made it nearly indistinguishable from handcrafted materials. Things would only advance from here.

Clearly, the harnessing of electricity was a key component of the computer’s development, and throughout the 20th century we saw the computer mature into being. In the 1930s and 1940s, as part of the war effort, British philosopher and mathematician Alan Turing experimented with ideas that would ultimately form the basis for modern computer science. His work, of course, centered around code breaking, and his research led him to develop a series of computing machines that defined the science.94 But one of the byproducts of his research was his theory—known as the “Turing test”—which frames machine intelligence this way: a machine is intelligent if it can exhibit behavior in a form that is indistinguishable from human achievement.95

Alan Turing, when he formulated his test, was confronted with people who
believed AI was impossible, and he wanted to prove the existence of an
intelligence test for computer programs. He wanted to make the point that
intelligence is defined by behavior rather than by mystical qualities, so that if a

93. Auerswald, Philip E. The Code Economy: A Forty-thousand-year History. New York, NY: Oxford University Press, 2017. 47.
94. Clark, Liat, and Ian Steadman. "Turing's Achievements: Codebreaking, AI and the Birth of Computer Science." WIRED UK. May 23, 2016. Accessed February 25, 2017. http://www.wired.co.uk/article/turing-contributions.
95. Baum, Eric B., Marcus Hutter, and Emanuel Kitzelmann. Artificial General Intelligence: Proceedings of the Third Conference on Artificial General Intelligence, AGI 2010, Lugano, Switzerland, March 5-8, 2010. Amsterdam: Atlantis Press, 2010.
program could act like a human it should be considered as intelligent as a
human. This was a bold conceptual leap for the 1950’s.96

Turing wasn’t the only one working on the problem. Concurrently, German engineer Konrad Zuse was also working to build a computer and, in 1941, completed the first programmable computer, which he named the Z3.97

Zuse’s machines, however, embody many of the concepts of today’s


computers and seem more modern than their American counterparts—an
astonishing achievement for someone working in relative isolation, and who
was inventing and reinventing everything he needed on the way to what he
called his life’s achievement: the computer.98

Scientists in the United Kingdom and the United States were also working on

their own systems and—by 1951—the first mass produced computer—the UNIVAC

I—was in production and being sold at $1 million each to corporations around the

United States.99

The UNIVAC system was integral because it was the first computer system that could handle numbers and letters in its computations. Accelerating its advance into the market was the fact that the U.S. Census Bureau ordered a total of 46 UNIVAC computers during the 1950s. For a time, UNIVAC was so popular that the name “‘UNIVAC’ came very close to becoming a generic word for digital computer in the way ‘Kleenex’ and ‘Xerox’ are so often used to refer to competitive products.”100 By

96. Ibid.
97. Rojas, Raul, and Ulf Hashagen. The First Computers: History and Architectures. Cambridge, MA: MIT Press, 2002. 261.
98. Ibid., 261.
99. Johnson, L. R. "Coming to Grips with Univac." IEEE Annals of the History of Computing 28, no. 2 (April 2006): 32-42. doi:10.1109/MAHC.2006.27.
100. Reilly, Edwin D. Milestones in Computer Science and Information Technology. Westport, Conn.: Greenwood Press, 2003. 265.
1960, IBM had supplanted UNIVAC’s dominance of the market and became the new

standard for business computing.101

In 1955, companies began replacing vacuum tubes in electronics with a new device called the transistor. The transistor was smaller, more efficient and cheaper to produce.102 Ten years later, an even smaller device called the integrated circuit, or “silicon chip”—which had the capacity to pack many transistors onto a single wafer—replaced the discrete transistor. The integrated circuit evolved from the work of several inventors, but ultimately the patent and the credit for the invention of the silicon chip went to Robert Noyce, an engineer working at Fairchild Semiconductor.103 From here, nearly anything was possible.

As proof, consider this: “By 2014, Intel [the lead chip manufacturer] was

squeezing up to 5.7 billion transistors onto a single chip,” wrote Janet Slingerland on

the topic. She then quotes Intel co-founder Gordon Moore on how quickly the chip

technology advanced: “If the auto industry advanced as rapidly as the semiconductor

industry, a Rolls Royce would get a half a million miles per gallon, and it would be

cheaper to throw it away than to park it.”104

In the 1980s, IBM’s plans for the ‘personal computer’ came to fruition and a

generation of “clone” computers emerged; “clones” were computers manufactured by

other companies using IBM’s manufacturing specifications. By today’s terms, the PC

101. Ibid.
102. Bahl, I. J. Fundamentals of RF and Microwave Transistor Amplifiers. Hoboken, NJ: Wiley, 2009. 61-90.
103. Reilly, Edwin D. Milestones in Computer Science and Information Technology. Westport, Conn.: Greenwood Press, 2003.
104. Slingerland, Janet. Nanotechnology. Minneapolis, MN: Essential Library, an Imprint of Abdo Publishing, 2016. 60.
was really just a “word processor,” or a typewriter with sophisticated storage and editing features. Over time, improvements in the technology allowed us to calculate spreadsheets and other office-related materials and to play video games, but it wasn’t until the computer was hardwired into the Internet that its potential as a communication tool truly evolved.

Computers were also beginning to develop a sophistication of their own.

As it stands now, computer scientists are exploring something called “Artificial Intelligence,” or the ability to create a machine that can not only respond to commands but actually learn to improve its own performance; basically, the computer learns from its own mistakes. Ultimately, the computer will evolve through trial-and-error to

a point where it will develop sophisticated problem-solving skills that include

repetitive exploration, adaptive learned histories, and, some believe, the ability to

think creatively.105

And there have been many experiments in AI: “Over the last twenty-five years,

machines have beaten the best humans at checkers, chess, Othello and Jeopardy!,”

wrote Cade Metz for Wired magazine in 2016. “But this is the first time a machine has

topped the very best at Go—a 2,500-year-old game that’s exponentially more complex

than chess and requires, at least among humans, an added degree of intuition.” The

name of the computer is AlphaGo and it’s a Google project designed to enhance AI.106

During the five-game tournament, AlphaGo won the first three games but lost after it

105. Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press, 1996. 2-14.
106. Metz, Cade. "Google's AI Wins Fifth And Final Game Against Go Genius Lee Sedol." Wired. March 15, 2016. Accessed March 14, 2017. https://www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/.
became confused by its opponent’s aggressive play in the fourth game. By game 5, the

computer had fine-tuned its gaming strategies and beat the Korean grandmaster.107

The Google machine repeatedly made rather unorthodox moves that the
commentators couldn’t quite understand. But that too is expected. After
training on real human moves, AlphaGo continues its education by playing
game after game after game against itself. It learns from the vast trove of
moves that it generates on its own—not just from human moves. That means it
is sometimes making moves no human would. This is what allows it to beat a
top human like Lee Sedol. But over the course of an individual game, it can
also leave humans scratching their heads.108

The article concludes by saying that AI is here but that it hasn’t realized its full potential. But as I’m writing this, computers around the globe are just beginning to learn how to train themselves at the craft of “deep learning,” and if the theorists are correct, computers could one day develop skills organic to the human brain (or so some might describe it).

Telephony

In the early 1800s, developments in our understanding of electricity naturally triggered the ideas that created the telegraph. At its simplest, the telegraph is a bare-bones electrical circuit—complete with a power source, wiring and a switch—that allows the user to open and close the circuit. To send a signal, the user presses down on the switch—known as the telegraph key—which forms a connection linking the power supply to the network; this connection triggers an electrical current to flow down the telegraph line to the receiving station; once there, the electrical current magnetically draws the receiving telegraph key downward, clicking it; the sender has the ability to

107. Ibid.
108. Ibid.
send sharp or long jolts of current, which form dots and dashes on the receiving end; it is through these series of dots and dashes that the message is transcribed.109

In the United States, it was inventor and painter Samuel Morse who perfected the most popular telegraph system. His interest in telegraphy began with a family tragedy.

In 1825, Samuel Morse received word that his wife, Lucretia, had died while he was in Washington, DC. Because Washington was four days’ travel from his home in New Haven, Connecticut, it took that long for the message to reach him. Morse believed that there must be a better way. Years later, he was aboard a ship returning from Europe when he encountered a like-minded inventor, and a conversation on the issue of transmitting electricity over a wired network led Morse to ultimately invent a commercially viable telegraph system. He also developed Morse Code, an electronic alphabet—a system that used short and long electronic signatures (called “dots” and “dashes”) to represent letters—for the transmission matrix.110
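The logic of that alphabet is easy to sketch in code. The following Python fragment, which includes only a handful of letters from the real Morse table, shows how a word becomes a train of dots and dashes:

    # A small sketch of Morse encoding: letters map to patterns of short
    # and long signals ("dots" and "dashes"). Only part of the real Morse
    # table is included here.

    MORSE = {
        "A": ".-", "E": ".", "L": ".-..", "M": "--",
        "O": "---", "R": ".-.", "S": "...", "T": "-",
    }

    def encode(message):
        # Characters missing from this partial table are simply skipped.
        return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

    print(encode("Morse"))  # -- --- .-. ... .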

By the 1840s, Morse was wiring the major cities along the Eastern Seaboard of the United States and charging 25 cents for one 10-word message. Over time, the popularity of the network grew, and Morse expanded telegraph links westward, finally connecting with California.111

The power of the telegraph was apparent. Before the telegraph, one needed a

transportation network to send handwritten or oral messages from one point to

109. Elements of Telegraph Operating; Telegraphy (parts 1-3). Scranton: International Textbook Company, 1906.
110. Standage, Tom. The Victorian Internet. London: Weidenfeld & Nicolson, 1998. 25-30.
111. Ibid.
another; and it took the length of time needed to move between those points to

transmit that message. Communication theorist James W. Carey wrote that the telegraph separated communication from transportation.112

The most important fact about the telegraph is at once the most obvious and
innocent: It permitted for the first time the effective separation of
communication and transportation. This fact was immediately recognized, but
its significance has been rarely investigated. The telegraph not only allowed
messages to be separated from the physical movement of objects; it also
allowed communication to control physical processes actively.113

The telegraphed message was also much faster. In the Washington-Baltimore corridor,

the time it took to transmit a message diminished from a few hours to a few minutes

and that fact joined the cities together. Along the transcontinental route, from St.

Louis, Missouri to Sacramento, California, it took the Pony Express 10 days to deliver

a message; a telegraph could do this in just a few hours.114

James Carey said the telegraph forced Americans (and the world) to better understand their relationship with time and place. Time zones were formed; communities were clustered around telegraph systems; and the United States formed a more unified sense of itself. We were no longer a collection of individual states; we were a united nation.115

Searching for proof, all one needs to do is consider President Abraham

Lincoln’s obsession with the telegraph; he spent countless hours at the telegraph

office, making daily visits there, waiting for messages from the front:

112. King, Elliot. Key Readings in Journalism. New York, London: Routledge, 2012. 40-54.
113. Ibid., 41.
114. Ibid., 40-54.
115. King, Elliot. Key Readings in Journalism. New York, London: Routledge, 2012. 40-51.
The American military even set up a parallel telegraph system in an effort to
secure its communications, and telegrams became such an integral part of the
conduct of the war that President Lincoln often spent hours or even days in the
Army Telegraph Office in Washington. Lincoln’s visits and complete
fascination with the telegraph were later chronicled by the office’s manager.
David Homer Bates, in a lengthy book, Lincoln in the Telegraph Office.116

The telegraph was certainly an important part of the war effort. It also allowed citizens

in the North to follow, nearly daily, the progress and problems of the Union Army

during the war.

Marshall McLuhan took his observations further, calling the telegraph “the

social hormone,” suggesting that it actually acts as an extension of the human nervous

system. One can think something in New York and transmit it to Baltimore nearly

immediately. But the telegraph was a transmission system that worked between

agencies: one had to go to a telegraph office to send the message to another telegraph

office. The telephone made the process more personal.

The Telephone

Although various forms of voice transmission were in development, Alexander

Graham Bell is credited with having created the first commercially viable telephone

and was awarded his first patent in 1876. Bell’s invention was actually a pairing of

two inventions: He first created a transmitter, which used a metal diaphragm to

translate language into electronic signals; and then he created a receiver, which used a

metal diaphragm to translate those electronic signals into language. He ultimately

paired these two inventions together into a horn-shaped instrument that allowed the

user to apply the receiver to their ear and to place the transmitter in front of the

116. Lane, Frederick S. American Privacy: The 400-year History of Our Most Contested Right. Boston: Beacon, 2011.
mouth.117 Clearly, the innovation here was the mechanical translation of human speech from oral to electronic to oral again, a vast improvement over the telegraph formula, which entailed taking a written message, transcribing it into an electronic format, and then transforming it back into a written form. Telephone technology removed the “literacy” component from the network. As the system matured, the telephone allowed people at private residences to communicate independent of an intermediary; it also allowed businesses to do the same. But Marshall McLuhan warns us that the telephone can be rather intrusive:

The telephone is an irresistible intruder in time or place, so that high


executives attain immunity to its call only when dining at head tables. In its
nature the telephone is an intensely personal form that ignores all the claims of
visual privacy prized by literate man. One firm of stockbrokers recently
abolished all private offices for its executives, and settled them around a kind
of seminar table. It was felt that the instant decisions that had to be made based
on the continuous flow of teletype and other electronic media could only
receive group approval fast enough if private space were abolished.118

With the invention of the cellular telephone in the 1980s and 1990s, the distance between people only got smaller and the connection more intimate. Now, instead of calling houses and buildings, when we ring a cellphone, we are calling a person. In many ways, surrendering a telephone number to a person is a form of intimacy; armed with that information, the person has the ability to reach you directly at any point, day or night, and at any place.

117. Cefrey, Holly. The Inventions of Alexander Graham Bell: The Telephone. New York: PowerKids Press, 2003. 10-15.
118. McLuhan, Marshall. Understanding Media: The Extensions of Man. 364.
The Internet

If Marshall McLuhan thought the telegraph was “the social hormone,” one can only imagine what he’d say about the Internet. Looking at the progression of telephony: the telegraph linked cities together, the telephone tied households and businesses together, and then the Internet came along and connected individual people. Further,

while the first two forms of communication were eminently temporal, the Internet

seems to dwell in a place where every time is the same time… and no time: before the

Internet, telephone users had to share a time to have an exchange of ideas; after the

Internet, modern telephony tools including email and text messaging made it entirely

possible for that conversation to occur independent of coordinated timing.

And on the issue of distance, James W. Carey argued that the telegraph made

distance irrelevant; the sender could be miles away and still reach the receiver. Given

the haste of the message, as compared to the hand-delivered note, distance was

subtracted from the process.119 With the Internet, the idea of distance became

exponentially smaller as transmissions—on a global scale—became nearly

instantaneous.

So how did we get here?

The Internet is a byproduct of the Cold War. In the 1960s, the U.S. Department

of Defense worried about the vulnerability of the commercial telephone system.

Basically, it wondered if the AT&T telephone system—which was a monopoly at the

time—could withstand a nuclear attack, believing that absent the ability to

119. King, Elliot. Key Readings in Journalism. New York, London: Routledge, 2012. 40-51.
communicate from Washington, the President would be incapable of launching a

retaliatory strike.120

The solution was to build a proprietary network and the Defense Department

looked to the Defense Advance Research Projects Agency, or DARPA, to create a

solution. Scientist Lawrence Roberts determined that a message could be broken up

into small pieces and each piece would be transmitted over a computer network to a

destination point where it would be assembled again. This process was called “packet

switching” and the network that hosted this technology was called the ARPANET,

which went live with four computers in 1969.121
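The essence of that idea can be modeled in a short Python sketch; this is a toy version only, since real protocols add addressing, checksums and retransmission:

    # A toy model of packet switching: a message is cut into numbered
    # pieces, the pieces may arrive out of order, and the receiver
    # reassembles them by sequence number.

    import random

    def packetize(message, size=4):
        # Pair each chunk of the message with a sequence number.
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    packets = packetize("lo and behold the future")
    random.shuffle(packets)  # packets may take different routes and arrive out of order
    reassembled = "".join(data for _, data in sorted(packets))
    print(reassembled)       # "lo and behold the future"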

The exchange took place when UCLA professor Leonard Kleinrock and his
students and programmer Charley Kline set up a message transmission to go
from the UCLA SDS Sigma 7 Host computer to another programmer, Bill
Duvall, at Stanford Research Institute.

The programmers attempted to type in and transmit the word “login” from
UCLA to SRI, but the system crashed right after they typed in the “o.”

The first message sent over the Internet, 45 years ago today, was: “lo.” The
programmers were able to transmit the entire “login” message about an hour
later.122

Although the first transmission was really just a technical glitch, many see the message “lo” as a fortuitous (but abbreviated) reference to “lo and behold…” as in “lo and behold the future!”

For the next 20 years, the Internet existed as this government-only matrix, and its potency increased as more and more research institutions signed on. In 1989,

120. Headrick, Daniel R. Technology: A World History. La Vergne, TN: Oxford University Press, 2010. 140-150.
121. Ibid.
122. CBS News. "45 Years Ago: First Message Sent over the Internet." October 29, 2014. Accessed December 19, 2017. https://www.cbsnews.com/news/first-message-sent-over-45-years-ago/.
when the Cold War ended, the U.S. Congress began wondering what it should do with its military technologies. It was during this time that Senator Albert Gore (D-Tennessee) proposed the idea of privatizing the Internet for commerce and wrote the bill that ultimately became the law that did that very thing.123 In 1996, President Bill Clinton signed the Telecommunications Act of 1996 into law, and this document included sweeping changes to the communications culture in the United States; among its many provisions, the act authorized the commercialization of the Internet. Similar movements were underway around the world, as historian Daniel Headrick suggests:

In 1989, Tim Berners-Lee of the European Nuclear Research Center in Geneva


created the World Wide Web, allowing anyone with minimal skills to access
not only data in the form of words and numbers but pictures and sound as well.
By 1992, a million computers were connected, and by 1998, 130 million could
access the Web. Half of them were in the United States, but even China had
several million networked computers. By the early twenty-first century, there
were more than a billion computers in the world.124

During those early days, Internet access required the user to connect a personal computer to a data-translation device called a modem and then plug the modem into a traditional telephone jack. Once the hardware was in place, the user had to open a connection with the Internet through a third-party company called an “Internet Service Provider” or ISP; to do this, the user opened a piece of software on their computer and, by clicking on it, the modem was activated and a connection signal was transmitted over the phone line to an Internet server. Once connected, the user had to open a web browser program, which allowed the user to type in data server addresses. Early on, download rates were very slow; often it would take several

123. Elmer, Greg. Critical Perspectives on the Internet. Lanham: Rowman & Littlefield Publishers, 2002. 174.
124. Headrick, Daniel R. Technology: A World History. La Vergne, TN: Oxford University Press, 2009. 143.
seconds or more for a single photograph to download onto the desktop of a user’s

computer.

One of the first companies to streamline and capitalize on the Internet process was America Online, a small gaming company based in Vienna, Virginia. AOL, as it was later called, fused together the ISP and the browsing software into one seamless program that required users to create an email address and to submit credit card information. During the early 1990s, AOL became the primary resource for Internet access, and millions of Americans found their way online via the AOL network.

Business troubles would ultimately undo AOL’s domination of the Internet but other

companies would emerge and share the space.125

Over the next two decades, hardware and software advances improved the

Internet experience. Faster upload and download speeds made it possible to listen to

streaming audio and to watch streaming video. In time, it was absolutely possible to

consume high volumes of data, streaming high-definition Hollywood films on home

devices that could project the signal on a flat panel television set in the living rooms of

millions of paying customers. The Internet has matured to a point of saturation.

In her book Words Onscreen, author Naomi Baron addresses the issue of how

the Internet has changed our reading habits. She warns that given the inherent

multitasking nature of the Internet, we have become slaves to the distractions that a

multimedia device can offer.

The siren call of internet distractions comes with an extra twist. Each time we
hear our mobile phone or email client ping to say we have a message waiting,
our brain delivers a squirt of dopamine, an addictive neurotransmitter and

125. Klein, Alec. Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner. New York: Simon & Schuster Paperbacks, 2004.
keeps us returning for more. Which would you prefer: staying glued to Plato’s
Apology or having a shot of dopamine?126

She’s suggesting that our brain chemistry has adjusted to the narcotic attractions

generated by our cellphones and other electronic devices and that we’ve taken to

skimming instead of intensely reading digital content. She identifies this addiction as

“hyper attention” and warns that hyper attention only facilitates “hyper reading” and

something she calls “power browsing.”127

Using log reports from libraries offering access to online journals, scholars
from UCL and the University of Tennessee examined the viewing habits of
users who downloaded full-text versions of articles. How long did they look at
each article? An average of 106 seconds.128

This realignment in the process of reading is akin to Terry Eagleton’s assertion that

reading is a form of schizophrenia and Walter Ong’s belief that phonetic alphabets

have destroyed our natural oral/aural state. So, the Internet specifically and digitized

content generally have altered the human ability to concentrate on specific tasks at

hand. Baron wonders if this is a negative state or a turn towards evolutionary

improvement:

Can we really answer email, watch Netflix, and draft a complicated report at
the same time? Might it turn out that today’s tech-saturated adolescents
actually can successfully juggle tasks that older generations can’t?129

She suggests that the emerging generation is building a world that will likely

embrace multitasking as a skill and believes that the human brain is highly adaptive

126. Baron, Naomi S. Words Onscreen: The Fate of Reading in a Digital World. New York: Oxford University Press, 2016. 158.
127. Ibid., 165-168.
128. Ibid.
129. Ibid., 158.
and plastic. So, like the printing press, the Internet and digitized content are changing

the human mental condition.

Oddly, despite its sophistication, the Internet is merely a broadcasting system similar to its predecessors, including the telegraph, the telephone, radio and television. Like radio, for example, the Internet has a transmission source and a reception source, and these end points produce similar results: the radio receiver and the video receiver each create unique point-of-reception copies of the original broadcast. This proliferation of media has had a lasting impact on Western Civilization.

Media theorist Paul Virilio attaches the human interest in the Internet to its

insatiable need for speed. In his book Negative Horizon, Virilio argues that the

Internet helps humanity satisfy its need for instantaneous information and that

sprinting media data is just an extension of our outward movement dating back to the

horse and buggy.

A true cultural revolution of the modern West, the transportation revolution


actually introduced the ‘information revolution.’ With the proliferation of the
means of ‘communication’ (train, automobile, plane, radio, telephone,
television) made possible by the industrial revolution, the power of information
increased with the same rhythm as the information of power. Now we are in
the era of ‘press agencies’ but also of the scientific and international
development of the police, that is, of ‘intelligence services’ (both civil and
military).

Today’s computers and data communication systems only serve to complete a


cycle sketched out a century ago with the telegraph and the railway system.130

Basically, our need to strike forth has taken a digital turn. We are, as Virilio warns,

hurtling forth unshackled and unencumbered by the hesitations of what might be lost;

instead, we are flying forward in this frantic search for something other than what is

130. Virilio, Paul, and Michael Degener. Negative Horizon: An Essay in Dromoscopy. London: Continuum, 2008. 154.
behind us… and as our speed increases, distance falls away, the now-ness of here

disappears and time becomes the only metric by which we measure the existence of

life; and our last and only message is the message of movement… of speed.

Aiding us is an improvement in the data receivers. The desktop computer

continues to be a staple but we have found other tools to help us explore or surf the

Internet: the cellphone has become a ready conduit, so too has the tablet computer.

Both are advanced innovations in the idea of consumer electronics and without them,

we would not have the media culture we have come to enjoy.

Web Browsers and Search Engines

As the Internet developed, it became necessary to create a method for reaching out to web servers to find data. To do this, the software community crafted something called a “web browser,” an interface into which users type web addresses to request webpage coding; the browser then captures that coding and displays the results before the user on the computer’s monitor. Over the years, the web browser became much more sophisticated as other media elements were added to webpages, including video, photographs and so forth. The software to manage those elements continues to evolve, and the number of different browsers continues to vary. As I’m writing this, the top web browsing software programs are Google’s “Chrome,” Apple’s “Safari,” Mozilla’s “Firefox,” and—to a lesser degree—Microsoft’s “Internet Explorer.” Each program does basically the same thing—it contacts servers using web addresses and then downloads HTML coding from those sites, which is displayed as webpages on the user’s computer—but variations in the software create slightly

different visual experiences. So, a site downloaded on Chrome will look slightly different than a site downloaded on Safari. Also, as HTML protocols change and browser software evolves, sites that fail to update often display incorrectly in the browser.131
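Stripped to its essence, the first half of a browser’s job, contacting a server at a web address and downloading the HTML coding, can be reproduced with a few lines of Python’s standard library; the hard part, rendering that coding into a visual page, is omitted here:

    # The first half of a browser's job, reduced to its essence: contact
    # a server at a web address and download the page's HTML coding.
    # Rendering that HTML into a visual page is the part left out.

    from urllib.request import urlopen

    with urlopen("https://example.com") as response:
        html = response.read().decode("utf-8")

    print(html[:80])  # the opening of the page's HTML source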

Not long after the invention of the browser, engineers created the “search engine,” which helps users locate information on servers across the Internet. A search engine continuously skims, or “crawls,” the Internet ahead of time, cataloging sites and building an index of the “key words” found on each page; when the user types in key words related to the content they are seeking, the engine consults that index and prioritizes a list of results based in part on the quantity and quality of the traffic and links each web address has received.132
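A toy version of such an index, built in Python over a handful of invented “pages,” shows the basic mechanics:

    # A toy search-engine index: a few invented "pages" stand in for the
    # crawler's output, and the index maps each keyword to the pages that
    # contain it. Ranking here is just a count of matching keywords.

    pages = {
        "page1": "gutenberg printing press movable type",
        "page2": "printing press history and the incunabula",
        "page3": "digital storytelling on the internet",
    }

    index = {}
    for url, text in pages.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)

    def search(query):
        hits = {}
        for word in query.lower().split():
            for url in index.get(word, ()):
                hits[url] = hits.get(url, 0) + 1
        return sorted(hits, key=hits.get, reverse=True)

    print(search("incunabula printing"))  # ['page2', 'page1']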

For a time, Yahoo!, America Online, Excite, AltaVista, and a handful of others were the top search engines on the Internet, but that all changed when a pair of

Stanford University PhD candidates decided they wanted to catalog the entire Internet

and did so by borrowing server space from other grad students. The students, Larry

Page and Sergey Brin, believed they could write a piece of software that could analyze

the relationship between websites and create a better search experience. As their

project developed, they moved much of their research off the campus and into the

131. "How Do Web Browsers Work and How Are Web Pages Displayed?" How Do Web Browsers Work and Display a Web Page? Accessed March 11, 2017. http://www.webdevelopersnotes.com/how-do-web-browser-work.
132. "BBC Bitesize - How Do Search Engines Work?" BBC News. Accessed March 11, 2017. http://www.bbc.co.uk/guides/ztbjq6f.
garage of the house they were renting. It was from that garage that they incorporated the company under the name Google.133
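The idea behind their ranking algorithm, PageRank, can be sketched as a simple iteration in which every page repeatedly shares its score among the pages it links to, so that links from important pages count for more; the four-page web below is invented for illustration:

    # A bare-bones PageRank iteration over an invented four-page web.
    # Each page repeatedly distributes its score among the pages it links
    # to, so a link from an important page is worth more.

    links = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }

    ranks = {page: 1.0 / len(links) for page in links}
    damping = 0.85  # the standard PageRank damping factor

    for _ in range(50):
        new_ranks = {}
        for page in links:
            incoming = sum(ranks[p] / len(links[p])
                           for p in links if page in links[p])
            new_ranks[page] = (1 - damping) / len(links) + damping * incoming
        ranks = new_ranks

    print({page: round(rank, 3) for page, rank in ranks.items()})  # "c" ranks highest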

In 2004, Google went public and the company swiftly became the standard for

search engines. Today, the word “Google” has become a verb as in “Go Google that,”

meaning, “if you want to know more, go find it on the Internet.” Its parent company,

Alphabet, has grown enormously and today is one of the world’s biggest

companies. Beyond the search engine, Alphabet has moved into other software and

hardware programs including projects on Artificial Intelligence and self-driving cars.

Social Media

Right now, one of the most popular websites in the world is Facebook. If you don’t know it, Facebook is a social media site, or a semi-public sphere where Internet users gather and create digital autobiographical web pages. On these pages, users post photos of themselves and friends and family members, write summaries of their lives, and interact with friends or colleagues who are invited to look at their pages. Facebook is a global and historical phenomenon simply because it’s on track to have 2 billion monthly users.134 When, in the history of the world, have 2 billion people ever done the same thing at the same time, together? But Facebook wasn’t the first social media site.

Before there was Facebook, there was America Online.

133. Battelle, John. The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture. London: Brealey, 2008.
134. "Facebook Tops 1.9 Billion Monthly Users." CNNMoney. Accessed May 31, 2017. http://money.cnn.com/2017/05/03/technology/facebook-earnings/.
In the early 1980s, America Online launched (under a different name) as an online computer gaming company, which issued gaming units that connected players over the telephone network. The players would log in, connect the game box to the telephone system, and then a central computerized telephony network matched players. Given the complexity of the system and the public’s limited understanding of online gaming at the time (before commercialized Internet access), America Online went through many changes and corporate shake-ups. In one of those management shifts, a young marketing executive named Steve Case ascended into a management role and ultimately became the president of the company. In that position, he renamed the company—as “America Online”—and changed his title to “founder and CEO.”135

In the transition, America Online created a marketing campaign that saturated the U.S. postal system with free compact discs hosting the AOL software system; all a user had to do was place the CD in the computer’s disc drive, enter a registration name and passcode and a credit card number, and click a few buttons, and the software would activate the computer’s modem, connecting them to the Internet. Because the company

streamlined user access, by 1994 AOL became the dominant conduit to the Internet…

with a catch. For a time, AOL users could only visit the AOL user site and the actual

Internet was out of reach.136

But it was on the AOL site that many interesting things occurred. AOL was primarily a user forum where people gathered in small groups under a given topic and “chatted” by typing out comments to one another; over time, people began employing a

135. Klein, Alec. Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner. New York: Simon & Schuster Paperbacks, 2004. 26.
136. Ibid.
vernacular language for online conversation, which included abbreviations like LOL

(for “laugh out loud”) and BRB (for “be right back”) among many others.

These users also began using an embryonic visual language that anticipated the “emoji.” The earliest form was the typed emoticon, and the first was very simple: a colon with a closed parenthesis, which together form the image of a smiley face: :). Today, it might seem like a slight thing, but the idea has evolved over 20 years into an alphabet (a visually-based, ‘hot’ media language) with thousands of hieroglyphic-like characters.

Emojis are a throwback to the pictographic origins of writing. It was not until
the ancient Greeks that abstract letters were organized into a written language
system. Before that, picture systems such as Egyptian hieroglyphics were used.
Emoji characters were first created in the 1990s by a Japanese mobile-phone
provider. The word itself is a combination of the Japanese words for picture (e)
and the character (moji). As we are so good at recognizing emotional
expressions in faces, emojis are an excellent shortcut to emotional expression,
without the need to hunt for the right word to express the tone or context to our
message. Vyvyan Evans, Professor of Linguistics at Bangor University, has
suggested that emoji characters are now the world’s fastest-growing
language.137

AOL also discovered that it didn’t necessarily have to provide content for its

users… and left it to the users themselves to create content in that space; this was the

beginning of a digital innovation we now recognize as User-Generated Content, or a

semi-public digital sphere where the users themselves create the experience. Clearly,

the idea worked and by the late 1990s, AOL was the dominant social network for a

variety of reasons: first, it helped the world discover access to the Internet; second,

AOL’s software assigned email addresses (typically the first accounts anyone ever

had); third, users discovered the value in chat groups and text messaging and finally,

137 Bridger, Darren. Neuro Design: Neuromarketing Insights to Boost Engagement and Profitability. London: KoganPage, 2017. 114.
the User-Generated Content made these spaces self-enriching simply because, in this

forum, people discovered they could share ideas, information, photos and so forth.

Looking at the theory, communication theorist Robert Logan saw the exchange of text-written messages over the Internet as an advancement beyond Walter Ong's "secondary orality" and called it the birth of the "tertiary orality" or the "digital orality." In Logan's explanation, the "digital orality" arises when we broadcast written text, and he considered blog entries, text messages and instant messages examples of this work.

Tertiary or digital orality is the orality of emails, blog posts, listservs, text
messaging, which are mediated paradoxically by written text transmitted over
the Internet.138

Logan saw this form of communication as interactive and conversational.

Again, this form of communication is the soul of “user-generated content,” and

AOL capitalized on this. In 2001, AOL’s stock market value escalated so high, the

company ultimately purchased Time Warner, the world’s largest content provider;

Time Warner’s holdings included an extensive magazine division, a home cable

operation, a film and cartoon division and several cable channels including HBO and

CNN.139 Ultimately, unforeseen market shifts made the AOL-Time Warner merger

untenable and the companies later split apart; in the process, AOL was vanquished, reduced to a second-rate player in a rapidly changing media market.140

138 Logan, Robert K. Understanding New Media: Extending Marshall McLuhan. New York: Peter Lang, 2016. 13.
139 Klein, Alec. Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner. New York: Simon & Schuster Paperbacks, 2004. 69-106.
140 Munk, Nina. Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner. New York: HarperCollins, 2005. xiii.
In the introduction to her book Fools Rush In, journalist Nina Munk describes the merger as a "train wreck" and a "shipwreck" that cost the combined companies $200 billion in market value in just a few short years.141

But in the process of this undoing, AOL founder and CEO Steve Case had revealed a global appetite for social media and the value of User-Generated Content. In the

years following the AOL-Time Warner demise, several other key media players

emerged including Facebook, Instagram, Snapchat, Twitter, Wikipedia and YouTube,

among many others. And while these various social media platforms do vary widely in

concept, they all share a common ideal: users create the experience, while the

company simply provides them with a space to perform and rules of performance. It’s

the AOL model over and over again.

As I'm writing this, Facebook continues to be the leading site for social interaction; oddly, the Facebook model is similar to AOL's, especially with regard to User-Generated Content, and Facebook also happens to be a space dominated by text and photographs. YouTube is similar, though it is primarily a video-dominated sharing space.

Wikipedia, conversely, is primarily a text-dominated sharing space. Again, these are

semi-public spheres where the users themselves create the experience, adding and evaluating content.

Of all the social media sites, Twitter may be the oddest. This platform allows

users to send messages to a global forum but it limits content to just 140 characters

(now 280 characters). Given this limitation, users must rely on pithy phrases and visual icons to issue statements, which are then broadcast over a network that serves millions.
141 Ibid., xii-xiii.
Now, here is the odd thing. While all of these social media platforms seem to

be teeming with content, users and cultural influence, it is my belief that social media

will likely evolve, leaving behind the current players. Facebook, YouTube and Twitter

are merely societal fads and their users are going to grow bored and move away from

them. User-Generated Content will find new forms and forums. This is just the nature

of things.

Looking to the theory of group dynamics, the progression looks like this: there are five key steps in the development of a group: forming, transitioning, trusting, working and adjourning.142

Twitter are all dwelling in the “working” stage; they’ve each established their

credibility; they’ve built a user base, and the users are now aware of how each social

media platform works. The first two, Facebook and YouTube, have both moved forward into the corporate ranks: Facebook is a publicly traded company, and

YouTube is a division of Alphabet. That leaves Twitter, which completed its initial public offering in 2013, a move that turned the private company into a publicly traded one. Going public makes a company beholden to fluctuations in the stock exchanges, the Securities and Exchange Commission, and shifts in the state of the

global economy. It also makes these companies less nimble and more bureaucratic.

Bureaucracy is a symptom that, inside an organization, the sprightly developmental energy of the entrepreneur has run its course. What remains is the drudgery of maintaining the status quo.

142 Haynes, Norris. Group Dynamics: Basics and Pragmatics for Practitioners. Lanham: University Press of America, 2012. 8-19.
Social media, like most fads, start as a trend that emerges inside a small subgroup of society. In the case of Facebook, the social media site started out in 2004 as a networking site open only to Harvard University students, but it grew from there, ultimately allowing anyone with an email address ending in the suffix ".edu" to join.143 Two years later, the company opened enrollment to everyone.

As it goes with the history of hegemony, the fad starts out in a small sub-class

of society before it is appropriated for a larger group. Ultimately, the majority takes

over and the sub-group is left to find something new. Such was the case with African American music in the United States: Jazz, and later R&B, Rock 'n' Roll and Hip-Hop, started out as entertainment in African American communities before moving to a broader and whiter national audience.144

Today, Facebook, once most popular with college-age students 19 to 22, is attracting a broader, more mature audience of 25- to 34-year-olds.145 Apparently, college-age students are now moving on to Instagram instead.146 Facebook certainly has the numbers to be a player for years to

come, but—as with all cultural fads—it too will fade, as audiences find other ways to

interact.

143 Mezrich, Ben. The Accidental Billionaires: The Founding of Facebook, a Tale of Sex, Money, Genius and Betrayal. Bridgewater, NJ: Distributed by Paw Prints/Baker & Taylor, 2011. 11-91.
144 Paehlke, R. Hegemony and Global Citizenship: Transitional Governance for the 21st Century. S.l.: Palgrave Macmillan, 2016.
145 "Top 20 Facebook Statistics - Updated May 2017." Zephoria Inc. May 08, 2017. Accessed May 31, 2017. https://zephoria.com/top-15-valuable-facebook-statistics/.
146 Duggan, Maeve. "The Demographics of Social Media Users." Pew Research Center: Internet, Science & Tech. August 19, 2015. Accessed May 31, 2017. http://www.pewinternet.org/2015/08/19/the-demographics-of-social-media-users/.
Searching for examples, let me return to America Online. By 1996, AOL was

the top Internet Service Provider in the United States; it was also the top destination

point on the Internet. But AOL was expensive: It charged $21.95 a month for

unlimited access to the network; or you could pay $9.95 a month for the first three

hours of access and an additional $2.95 for every hour consumed thereafter.147 Given

the expense, many users discovered they could find a cheaper Internet Service

Provider or they could merely wait and log on at work. AOL was also perceived as the

training ground for the “real Internet.”
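The arithmetic behind that expense is easy to make concrete. As a minimal sketch in Python (using only the plan prices cited above), the break-even point between the two plans works out to roughly seven hours of use per month:

```python
# Break-even usage between AOL's two mid-1990s pricing plans (figures cited above).
UNLIMITED = 21.95        # dollars per month for unlimited access
METERED_BASE = 9.95      # dollars per month for the first three hours
INCLUDED_HOURS = 3
HOURLY_RATE = 2.95       # dollars for each additional hour

# Hours per month at which the metered plan costs as much as the unlimited plan:
extra_hours = (UNLIMITED - METERED_BASE) / HOURLY_RATE  # about 4.07 hours
print(INCLUDED_HOURS + extra_hours)                     # about 7.1 hours per month
```

Anyone online much more than seven hours a month was better off paying for unlimited access, and users who balked at either figure had every incentive to find a cheaper provider.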

In 2002, journalist Douglas Rushkoff described that training-ground perception this way in The Guardian:

AOL was a training ground: an introduction to the internet for people who
didn’t know how to deal with FTP. None of us thought it could last, because
once the technological barriers to entry for the internet had been lowered, no
one would need AOL's simplistic interface or its child-safe, digital content
wading pools. People would want to get on the “real” internet, using real
browsers and email programs.148

When the public's understanding of the Internet finally matured, users left AOL swiftly. The numbers amounted to this: in 2000, AOL was a $9.5 billion company

with 15,000 employees and 23 million dial-up subscribers; in 2015, AOL was a $2.5

billion company with 4,500 employees and 2.2 million dial-up subscribers.149

Certainly, the American consumer base grew more sophisticated and outgrew AOL, moving on to cheaper and more effective means of Internet access. Sixteen years

147 "AOL Hikes Monthly Fee." CNNMoney. Cable News Network, n.d. Web. 31 May 2017.
148 Rushkoff, Douglas. "Signs of the Times." The Guardian. Guardian News and Media, 25 July 2002. Web. 01 June 2017.
149 Heisler, Yoni. "AOL's Fall from Grace, by the Numbers." BGR. N.p., 13 May 2015. Web. 31 May 2017.
later, AOL is still around, but it has been acquired by Verizon, which plans to merge it with Yahoo!, another 1990s powerhouse website.150

To avoid AOL's fate, Facebook, YouTube, Twitter and others will have to grow with the changes or die. Compounding things further, I believe social media are merely a half-step forward as we move through the Digital Incunabula. If the Internet is truly "disappearing," as Alphabet executive chairman Eric Schmidt suggests, it is going to take some of these social media tools with it. Facebook, YouTube and Twitter all have their own software apps, but even these appear somewhat tenuous as our approach to the Internet and digital media becomes more sophisticated.

The Rise of Amateur Video

On April 23, 2005, a video archiving platform named YouTube went live with its first video, entitled "Me at the zoo," which features one of the site's cofounders, Jawed Karim, standing inside the San Diego Zoo talking about the elephants. The 19-second video is underexposed, the camera shakes, the audio is weak, and Karim doesn't present any clear message. In short, the

video is an absolute disaster… and yet, it may be the most important video on the

Internet. As I’m writing this, the video has 36.7 million views.

The reason the video is so important is that it transformed the culture

of video production and consumption. Before YouTube, someone wanting to publish

video content had to approach a television station or a movie studio or shoot the

150 Hartmans, Avery, and Julie Bort. "AOL and Yahoo Plan to Call Themselves by a New Name after the Verizon Deal Closes: Oath." Business Insider. Business Insider, 03 Apr. 2017. Web. 31 May 2017.
footage themselves and present it as a ‘home movie.’ Now, YouTube allows anyone

with Internet access to produce video and find an audience. Harvard Law professor Lawrence Lessig describes this as "read-write" culture, or the ability for someone to read content and then respond by crafting a reaction of one's own. Within two years of the site's launch, viewers were watching 100 million video clips per day.151

Today, there are several other video hosting sites. YouTube is certainly the

leader, but another key site is Vimeo, which tends to attract ‘art school’ video

producers who are focused on the aesthetic of video. And while these hosting sites are

integral to the video experience, consumer video production wouldn't be possible without advances in video camera technologies.

During the last decade, the top consumer electronics companies have worked together to create a video format called "Advanced Video Coding High Definition," or AVCHD, which allows producers to record digitized video directly onto the camera's internal storage. In 2007, the format's developers, Sony and Panasonic, released AVCHD hard-drive cameras into the marketplace, and video culture transformed overnight.152

In 2009, Canon released the Canon EOS 7D camera, which is a DSLR, or

digital single-lens reflex camera, capable of shooting still photographs and video. The

camera looks very much like a traditional still-photography camera but it does have

the added ability to shoot high-definition video, which is recorded onto storage

151 Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York, NY: Penguin Books, 2009. 51-83.
152 "AVCHD Information Web Site." Accessed March 08, 2017. http://www.avchd-info.org/.
media inside the unit. When it was released, the Canon 7D was groundbreaking.153

Given its relatively low price tag of $1,800 (compared to the standard TV news field

camera, the Sony EX3, which sold for $10,000), the camera was affordable; it was

also capable of producing high-resolution video images, which could be used

professionally. The Canon 7D also set the standard for DSLR cameras produced by

the other top consumer electronics companies and a whole host of copycats followed.

What these cameras did was convert light into digital data, which could be edited in non-linear software programs (including Final Cut Pro and Adobe Premiere) and published online. Improvements in the technologies over the last decade have only further enhanced production quality and convenience. It is now entirely possible for a novice photographer to shoot, edit and publish a 90-minute documentary using nothing more than a Canon 7D camera and a laptop computer. The introduction of the DSLR camera has transformed the presence of video globally.

Concurrently, the hardware companies for the cellphone industry have added

video cameras to their handsets and improvements have been made steadily over the

last decade. Today, the top smart phones are capable of shooting HD-quality video

and, because of the telephony network, the user can publish that video online in a matter of seconds; it is for this exact reason that the YouTube website has

cataloged more content during its short life than all of the legacy television production

houses have created since the inception of the TV industry in the 1930s.154

153 Imaging Resource Team. "Canon 7D Review: Full Review." Imaging Resource. December 16, 2016. Accessed March 08, 2017. http://www.imaging-resource.com/PRODS/E7D/E7DA.HTM.
154 Dijck, Jose van. The Culture of Connectivity: A Critical History of Social Media. Oxford: Oxford University Press, 2013.
This is a wonderful example of the power of User-Generated Content: what it

lacks in quality it makes up for in quantity.

Digital Content

On the Internet, the main currency is digitized information. If we hadn't found a way to convert various media into a digital form, we would not have the ability to shift text, photos, sound or video so freely.

of writing—which separated the thought from the thinker—digitized content is truly

the disembodiment of the medium from its ‘analog self;’ when we digitized sound, for

example, we removed it from its analog host forms, which included vinyl records and

magnetic tape among others. Before digitization, sound was preserved in a

tangible, physical form—we had the LP record and the cassette tape to cling to—

which gave the user a sense of possession; cassette tapes could be bought and sold

because there was a physical transaction taking place: cash for the plastic cassette and

its packaging. But when we began digitizing these things—text, photos, audio,

video—we converted the physical form into electricity, thus making these media

ethereal and absent the sense of tangibility.

Modern photographers musing over the artistic value of digital photographs

have seized upon the phrase “aura of thingness,” which is a tribute to Walter

Benjamin’s idea about mechanical reproduction. Author Jonas Larsen, in his research

on tourists' photography, noted that many people will freely delete unwanted digital

photographs without care or concern for the lost content.

While their afterlife is uncertain, many tourist photographs are visible, mobile
and tied up with everyday socializing upon various networked screens. And, we may add, disposable. Lack of an 'aura of thingness' partly explains why so
many digital photographs are short-lived, but also why they are valued as a fast
mobile form of communication. Digital photographs are a crucial component
of mobile-networked societies of distanciated ties and screens sociality. While
many digital images exist virtually, digital photography is not without a
material substance, and some digital images do materialize as objects with an
‘aura of thingness.’155

Larsen does suggest that it is possible for digital photographs to have an “aura” but he

doesn’t quite explain how that is. Still, it’s worth noting that digital content certainly

dwells in this state of incomplete appreciation and that incompleteness has diminished

its value. Maybe tangibility is a byproduct of the pre-digital age, an anachronism, but

tangibility certainly is a catalyst affecting our appreciation for digital content.

In the world of commerce in the days before the Internet, tangibility was a key

component of the process of capital exchange; even when we purchased something

like life insurance, we were given a physical product, a document, demonstrating the

existence of the purchase and the value of the product. But when the world became

aware of digital media content, we moved aggressively into the age of electronic commerce, or e-commerce, where we traded credit card payments for digitized media files that were delivered electronically in an instant and that today exist only as lighted symbols on a computer screen.

In their book Virtual Economies: Design and Analysis, authors Vili

Lehdonvirta and Edward Castronova address this form of digital commerce by

identifying these products as “virtual goods,” or goods that exist only in a digital

world.

The instrumental usefulness of an object, stripped of intangible social and hedonic considerations, is usually understood to be a measure of its ability to

155 Urry, John, and Jonas Larsen. The Tourist Gaze 3.0. London: SAGE Publications, 2011. 186.
cater to basic human needs. There are many theories of human needs, but the
bottom line in all is that humans have certain basic physiological requirements,
such as the need for energy and oxygen. Food is useful because it provides
energy, while a fishing pole is useful because it overcomes the problem of
obtaining more food. Virtual goods do, of course, also have usefulness of a
sort. They are useful in overcoming problems presented by the game world, in
fulfilling the needs of game characters, and as materials in users’ virtual
crafting projects. But these are artificial problems created by a designer, not
real problems. Virtual goods cannot fulfill real needs; only material goods can.
Right?156

In this case, Lehdonvirta and Castronova are addressing the exchange of real money

for digital properties in an online gaming environment. They write that people will pay

for digital items to solve problems inside a digital realm; one example they offer is the

purchase of a faster digital racecar for a player-versus-player digitized racing game.157

They also address the issue of “collecting” items online.

Collecting is a popular pursuit in virtual environments that feature a large variety of goods. Perhaps the most extreme example of a virtual collector is a
player known as Entity, who has set himself the task of collecting one of each
virtual item that exists in the sci-fi universe of EVE Online. After steadily
accumulating his collection for over nine years, he is now in possession of over
9,000 different items. Needless to say, Entity has become a bit of a virtual
celebrity figure. Even though starting a collection is usually motivated by
nothing more than personal interest, it often takes on aspects of social status
competition among accomplished collectors.158

So, clearly, a portion of the public is learning to become comfortable with electronic commerce and paying real money for virtual goods and services. But consider the

oddness of this transformation: In digital realms including World of Warcraft and Second Life, people are purchasing digital items they intend to keep permanently. Specifically, in Second Life, an online world populated by user-created avatars, players are buying

156 Lehdonvirta, Vili, and Edward Castronova. Virtual Economies: Design and Analysis. Cambridge, MA: MIT Press, 2014. 52.
157 Ibid., 41-56.
158 Ibid.
plots of land and then selling subdivided plots to other users; this relatively mature

society also has a stock exchange called SLCapex:

…in Second Life, participants were able to set up an entire virtual stock
exchange called SLCapex, complete with a regulatory commission and self-licensed
stockbrokers. Despite its almost complete lack of legal protections against
fraud or insider trading, entrepreneurs using SLCapex succeeded in raising the
equivalent of approximately $145,000 from investors. This was possible
because of exceptionally strong trust developed within Second Life’s relatively
mature community. However, it turned out that at least some of this trust was
misplaced. The market value of the investments grew to $900,000 before
eventually plummeting, while at least some entrepreneurs shirked their duties.
SLCapex continues to exist and operate to this day, however.159

While the Second Life stock exchange did implode, the experiment worked to build confidence in digital commerce, which in turn helped the users in that community develop confidence in virtual goods. In fact, a news posting on the Second Life website reports that in 2016 the company processed credit card payments converting $60 million into "Lindens," the Second Life currency.160

This is in addition to an active digital society—or country—with upwards of 50

million registered members and a global economy valued at over $500 million.161

Clearly, people are willing to pay money for virtual items, or ones they will never use

beyond the boundaries of the digital universe.

Please note that I am identifying these products as "virtual goods," a designated subset of "digital goods." A "virtual good" exists only inside a digital landscape, while a "digital good" includes digitized media like digital books, film and music.

159 Ibid., 162.
160 "Updates to LindeX and Credit Processing Fees." SecondLife Community. June 13, 2017. Accessed June 26, 2017. https://community.secondlife.com/blogs/entry/2187-updates-to-lindex-and-credit-processing-fees/.
161 "The Second Life Economy in Q3 2010." SecondLife Community. October 28, 2010. Accessed June 26, 2017. https://community.secondlife.com/blogs/entry/46-the-second-life-economy-in-q3-2010/.
But, again, confidence in one fortifies confidence in the other; all of this may one day reach a critical mass that affects the way people engage digitized products, and that engagement could translate into a reaffirmation of "aura" for digital artifacts including sound, photographs and so forth.

The trouble is, we rushed to this place of e-commerce very quickly, and in our haste we may have left many people behind: unaware, ill-equipped and certainly unprepared. This raises the question: Were we ready?

Technology has always driven the advances of mankind, but often these technologies arrive with such energy that we have very little time to measure or comprehend the value of the advancement and weigh it against the cultural troubles that soon follow.

Are We Ready?

Thinking machines, self-driving cars, ethereal media… ours is an exciting time to be alive. But are we really ready for the digital revolution? Digital content is

easy and cheap to store, it can be moved rapidly around the planet and it can be nearly

ubiquitous. But the steps leading us here have been complicated. Consumers had to

learn to understand how computers work, how the Internet functions and—in the case

of e-commerce—we had to learn to trust the digital realm as a productive and safe

place to conduct commerce. Before we knew it, we were producing digital images of

ourselves and our friends and family and we were posting them in a public space

where they could be viewed by friends and strangers; corporations emerged with ideas

about social media and social contact, and before we understood the implications,

some of us were already exposed and easy victims for people who better understood

the darker aspects of electronic communication.

This age of digitization has changed everything. The Internet has altered entire

industries including academia, banking, film, gaming, photography, real estate,

television, telephony, tourism… to name a few.

But again I ask: Were we ready? Theorist Bob Merritt addresses the issue:

The worldwide growth in the ability to generate, transport, and store that much
data leads directly to the third challenge—which remains the most critical
impact of all. The worldwide structure supporting digital forms of information
makes it much more cost-effective to distribute that information to the broader
general population. The growing issue is that the rate at which new information
is becoming available creates conflicts between those who have embraced
certain new concepts versus those who fear that this new information and
knowledge will cause cultural conflicts. The issue isn’t necessarily related to
the technology itself, but results from the concern that the rate of cultural
change and social impact is too fast to be comfortably absorbed into
communities.162

He goes on to offer examples of people driving and texting and the dangers these

behaviors have created; government, he continues, passes laws to prevent people from

harming others (and themselves) but it’s clear that the technology has arrived in the

hands of people who aren’t quite prepared for the power they possess.

In his book The Whale and the Reactor, author Langdon Winner takes an even

harsher view. He suggests that like most engineers, computer engineers haven’t for a

moment reflected upon their actions, and that lack of self-reflection has consequences:

In the busy world of computer science, computer engineering, and computer marketing such questions seldom come up. Those actively engaged in
promoting the transformation—hardware and software engineers, managers of
microelectronics firms, computer salesmen, and the like—are busy pursuing
their own ends: profits, market share, handsome salaries, the intrinsic joy of
invention, the intellectual rewards of programming, and the pleasures of

162 Merritt, Bob. "The Digital Revolution." Synthesis Lectures on Emerging Engineering Technologies 2, no. 4 (2016): 1-109. doi:10.2200/s00697ed1v01y201601eet005.
owning and using powerful machines. But the sheer dynamism of technical
and economic activity in the computer industry evidently leaves its members
little time to ponder the historical significance of their own activity. They must
struggle to keep current, to be on the crest of the next wave as it breaks. As one
member of Data General’s Eagle computer project describes it, the prevailing
spirit resembles a game of pinball. “You win one game, you get to play
another. You win with this machine, you get to build the next.” The process
has its own inertia.163

Winner concluded his thought this way:

Hence, one looks in vain to the movers and shakers in computer fields for the
qualities of social and political insight that characterized revolutionaries of the
past. Too busy. Cromwell, Jefferson, Robespierre, Lenin, and Mao were able
to reflect upon the world historical events in which they played a role. Public
pronouncements by the likes of Robert Noyce, Marvin Minsky, Edward
Feigenbaum, and Steven Jobs show no similar wisdom about the
transformations they so actively help to create. By and large the computer
revolution is conspicuously silent about its own ends.164

Certainly, other industries have suffered from our haste to move forward. In the world of publishing, most of the trouble has come from the rapid advance of digitized content. The newspaper community was unprepared for the rapid rise of digital news; so too was the record industry.

But in our haste to appreciate the instantaneous and cost-effective benefits of

digital content, we have placed many of our most precious cultural possessions in

danger: specifically, with the move towards the digital story, we may be poised to

leave behind centuries of tradition and craft.

163 Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press, 1986. 101.
164 Ibid., 102.
The Digital Orality

When I opened this chapter, I suggested that we had moved into a new epoch in human communication, one that dwells in a digital realm. It is no secret that human communication has quickly become dominated by a digital intercourse of email, text messaging, instant messaging, social networking, electronic commerce and so forth.

Theorist Robert Logan suggests that this transition in communication marks a shift in

the way we consume electronic media (specifically, radio and television) and digital

media (email, text messaging, Twitter, video) in that we interact with digital media.

Although the dissemination of digital information parallels in some ways that of electronic information, there are some very important differences. The users
of electronic media are merely passive consumers of information, whereas the
users of digital media can interact actively with information they access. They
can also use these digital media to reorganize and remix information and create
new forms of knowledge. There is a cognitive dimension to the use of
computers and the Internet that is totally missing from mass media and the
telephone.165

Logan calls this the “tertiary” or “digital orality” and it’s an important distinction in

our understanding of cognitive communication practice. But when he wrote this book,

“smart phones” hadn’t quite matured to their current sophistication and the desktop

computer was still very much the conduit to the Internet. For these reasons, I believe

his ideas don’t go quite far enough and that we are still evolving through the digital

age and beyond the “digital orality.”

His ideas seem to dwell on text-based media and don't quite embrace the other media forms, which suggests that we need to further explore the idea of multimedia

storytelling. Given the fact that we have reduced all of the media forms to a common,

165 Logan, Robert K. Understanding New Media: Extending Marshall McLuhan. New York: Peter Lang, 2016. 30.
digital building block, there are wholesale opportunities for enhancement in

multimodal communication.

Narrative in the Digital Age

One of the key problems of storytelling in the digital age is the fact that there

are so many media platforms to choose from. A producer can tell text stories on

Facebook and Twitter; a photographer can post images on Instagram and Snapchat;

and video producers can post on YouTube, Vimeo and, for a time, Vine. In each of

these cases, the audiences and the formats are different. Looking for examples, the dynamic between YouTube and Vine (the now defunct video platform) offers a stark contrast. YouTube is an open platform where users can post videos of varying lengths, including ones that play for an hour or more, while Vine imposed a strict six-second limit. The disparities should be apparent. On YouTube there is room for nuance of expression, but on Vine, if you wished to tell a story, it had to be brief.

This is a clear example, I think, of Marshall McLuhan's idea that "the medium is the message." These platforms define the media form. This is true throughout the

Internet.

In the digital age, there is a clearly defined split in storytelling. Looking to

Hayden White again, we have chronicle and narrative structures. Most social media

platforms are designed to favor chronicle.

Twitter is the clearest example: On this platform, producers can write a total of

280 characters. (The previous sentence is 97 characters. So, one can imagine how brief

Twitter messages must be.) When using Twitter, the writer can include photographs,

video and links to websites, but primarily, producers write brief statements and the

limitations here prevent context. This is the definition of chronicle storytelling. To

understand the producer’s meaning, the audience must be aware of the context of the

message geographically, historically and temporally. A “tweet” about a protest, for

example, might require the audience to understand that there is a “Black Lives Matter”

event taking place in Boston.
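To make the constraint concrete, here is a minimal sketch in Python of the kind of length check the platform enforces (a simplification; Twitter's real counting rules treat some characters and URLs specially):

```python
# Twitter's character limit: originally 140, later raised to 280 (as noted above).
TWEET_LIMIT = 280

def fits_in_a_tweet(message: str) -> bool:
    """Naive length check against the 280-character ceiling."""
    return len(message) <= TWEET_LIMIT

sentence = ("Twitter is the clearest example: On this platform, "
            "producers can write a total of 280 characters.")
print(len(sentence))              # 97, the count given in the text above
print(fits_in_a_tweet(sentence))  # True, but with little room left for context
```

The check is trivial, and that is the point: brevity is enforced mechanically, so any context must travel with the reader rather than inside the message.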

Many of the other social media platforms require this same sense of understanding; for the most part, social media are vehicles for chronicle stories. This is unfortunate because it is inside the narrative form that we find the

artfulness of story. Thankfully, there are advances here as well.

Although The New Yorker magazine has been around for decades, this literary

fiction and nonfiction media group has been making pronounced inroads into

multimodal story forms. Specifically, the magazine has long been a leader in literary

nonfiction storytelling, but with the 2011 release of its iPad version, The New Yorker

has been experimenting with addenda including video, photographs, audio and other

“hot” media. Looking at the current iPad offerings, The New Yorker often comes with

a creative video cover, long-form stories with supplemental photographs and graphics

and audio tracks of poets reading their poetry.

These advances are setting the stage for what may be coming in the not-too-

distant future. All that aside, the clearly delineated break in story form has created a

world filled with the patter of chronicle and the opportunity for creative long-form

narrative. To me, the chronicle content is noise (information pollution), which is

dominating the public’s attention, while the long-form narrative is exploring the

inventiveness of story form designed for a digital world that’s eager for more.

Summary of the Digital Revolution

During the last two decades, most people have celebrated the Internet as the catalyst changing global culture, when in fact I believe a portion of the truth lies elsewhere. The Internet has absolutely shifted global culture: we now have the ability to connect to people around the world, but this is just one aspect of the Digital Age. It is the digitization of media that has really transformed the way we do things. By transforming analog information into bits and bytes, we have altered the structure of commerce and communication.

Let's look at binary code for a moment: As I explained earlier, binary code reduces data to basic digital building blocks of 0s and 1s. So the text phrase "Eat at Joe's" translates into "01000101 01100001 01110100 00100000 01100001 01110100 00100000 01001010 01101111 01100101 00100111 01110011." In this case, each group of numbers represents a letter, a symbol or a space in the phrase "Eat at Joe's". These are the basic building blocks for the Digital Revolution

and, in fact, binary code is merely just another alphabet… like cuneiform, Arabic,

Times New Roman and Morse code. With each advance in literal communication, a

subsequent alphabet was also created: Handwriting begot cuneiform; printing begot

Roman typeface (and later, dot matrix); the telegraph begot Morse code; and now

computing has binary code.
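The translation is mechanical enough to verify in a few lines; here is a minimal sketch in Python, using the standard ASCII encoding assumed in the example above:

```python
# Encode the phrase into the binary "alphabet" described above:
# each character becomes its eight-bit ASCII code.
phrase = "Eat at Joe's"
binary = " ".join(format(byte, "08b") for byte in phrase.encode("ascii"))
print(binary)   # 01000101 01100001 01110100 00100000 ... 01110011

# Decoding reverses the process: the 0s and 1s return to legible text.
decoded = bytes(int(group, 2) for group in binary.split()).decode("ascii")
print(decoded)  # Eat at Joe's
```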

With each advance, the alphabet got more sophisticated and, in the case of binary code, it has the ability to transform not only text but photos, sound and moving

images into a digital form as well… and this is where the storytelling shift truly takes

hold. Because text media and video have both been reduced to the same building

blocks—binary code—we now have the ability to produce, package and deliver these

media together. This is what the Digital Incunabula is all about; we are in the process

now of experimenting and learning how these media can be presented together.

So, taken altogether, the Internet is really just a channel for storage and

delivery; it is the binary code that has actually begun shifting the way we

communicate, tell stories, and share histories. The next step, of course, is how we

receive these media. Until 2007, most digital media were consumed on desktop

computers. After all, the desktop computer was the initial conduit to the Internet, and the

Internet was the communication and delivery device; therefore, it was necessary to

receive data at the desktop computer, or the so-called “terminal workstation.” But in

2007, Apple and other technology companies began introducing smart phones that

shared many of the same abilities of the desktop computer (with the added

convenience of portability) and over the next decade, many users began using their

phones and other portable consumer electronic devices to consume media. This is a vital next phase in the development of the Digital Age.

How exactly should we consume digital media? We need a preeminent digital tool, a landmark consumer electronics device, to be the platform for this new story form; we also need a method, a form of packaging, that will ease consumption by an evolving media-literate audience.

Chapter 4

The Perfect Thing

In 2010, Apple Computer released its fourth-generation iPod Shuffle, an MP3 player; it is smaller than a book of matches and weighs less than half an ounce. The device hosts roughly 2 gigabytes of memory, which means it can store an estimated 500 songs (the equivalent of 50 LP records), and the unit has a 15-hour battery life. Today, the device has exactly four buttons and a headphone jack; it comes in six colors and retails for $49. It does not have a display screen but it does have a

clip that allows the user to clip it like a piece of jewelry to a jacket lapel or shirt

pocket.166 To observe this small piece of electronics, to hold the tiny iPod Shuffle in

the palm of your hand, is something truly to behold. It’s a marvelous, intricate piece of

technology.
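The storage claim is easy to sanity-check with a little arithmetic. As a minimal sketch in Python (the 128 kbps bitrate and four-minute song length are my assumptions, not Apple's figures):

```python
# Rough check of the claim that roughly 2 GB holds an estimated 500 songs.
BITRATE = 128_000          # bits per second; a common MP3 encoding rate (assumed)
SONG_LENGTH = 4 * 60       # seconds in an average four-minute song (assumed)
bytes_per_song = BITRATE * SONG_LENGTH / 8   # about 3.84 MB per song
CAPACITY = 2 * 10**9       # "roughly 2 gigabytes"
print(round(CAPACITY / bytes_per_song))      # 521, just over the estimated 500
```

At those assumptions the device holds just over 500 songs, which squares with Apple's estimate.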

When author Steven Levy decided to write about the iPod back in 2006, he

called his book The Perfect Thing.

It’s worth pausing here to note a couple of things about the device that Apple
wanted to make, and why the elements of success were in place at that very
moment. With the tiny hard drive and Fadell’s compact form factor, the iPod
would be small—easily acing the “in your pocket” standard. With the high-
speed FireWire technology, the device would be fast; it would load songs at
lightning speed, eliminating one big complaint about previous players. With
the scroll wheel—and the inevitable clever software touches that Apple would
add—it would be easy to use. With the iTunes software from the Macintosh
built in—and with the iPod seen as a satellite of that software, instead of a
foreign device that required complicated high-tech handshaking—it would
sync effortlessly with a music library. And if Apple’s industrial design team
performed its usual witchcraft, it would be utterly beautiful. It was a recipe for
something, well, perfect.167

166 "iPod Shuffle." Apple. Accessed May 25, 2017. https://www.apple.com/shop/buy-ipod/ipod-shuffle?afid=p238%7Cs3RnWdLgP-dc_mtid_1870765e38482_pcrid_164139570870_&cid=aos-us-kwgo-ipod--slid--product-.
167 Levy, Steven. The Perfect Thing. London: Ebury, 2006. 39.
Levy does a wonderful job itemizing why this device is a standout in the development

of consumer electronics. It is light, small, fashionable, portable, personal, user-friendly

and it associates nicely with existing networks. These standards were created decades

before by a small Japanese startup and they still hold true today. One of the key

components of media is delivery and innovations in consumer electronic devices have

blazed the trail for wholesale shifts in the global culture. But the iPod, despite Levy’s

assertions, is NOT a perfect device; instead, the iPod is a building block in the

progression of devices that came soon after, and the devices yet to come.

We live in interesting times—the idea that one small device can house the

entire content of a record collection is astounding to me—but we haven’t quite found

the Holy Grail of consumer electronic devices. Instead, we are on a steady path of

exploration and with each advance, we see consumers doing new and exciting things.

Still, it seems important that we reflect on what came before.

Sony TR-63

After World War II, the United States and the Soviet Union engaged in a

technological scramble to see who could invent better weapons. Concurrently, as U.S. forces occupied Japan, the Japanese were forbidden from rebuilding their military. The

result? In the United States, the leading scientists and technologists began developing

tools for the military, while, in Japan, the leading scientists and technologists began

developing tools for commerce.168 It didn’t take long for Japan to become a global

leader in commercial technology.

168 Headrick, Daniel R. Technology: A World History. La Vergne, TN: Oxford University Press, 2010. 138-144.
Other countries have wondered what the secret of Japanese commercial
success is. At first, there was certainly some copying of foreign technology.
But the Japanese people were technologically literate, and after the war, Japan
was forbidden to rebuild its armed forces. As a result, its best engineering
minds went into consumer-oriented businesses rather than into the arms industries as in the United States and the Soviet Union.169

During this period, two unemployed scientists, Masaru Ibuka and Akio Morita, opened a small radio repair shop in Tokyo and began experimenting. When they learned of the invention of the transistor, they approached Western Electric, the U.S. patent holder, and purchased a license to manufacture transistors in Japan. In 1957, they introduced the TR-63, a small, portable, transistor

radio, which retailed for $39.95 (the equivalent of $346 today).170 This company ultimately became Sony Electronics.

Forgive my returning to this, but the TR-63 is a landmark piece of consumer

electronics. Whether by design or by accident, this radio established the model for

modern consumer electronics. It was small, it was affordable, it was stylish, it was

easy to operate, and it made use of an existing (radio broadcast) network. Because it

was portable, it had an intimate relationship with its user. Prior to its invention, the

standard box radio was large and, like a piece of furniture, designed to be placed in a

stationary spot in a corner of the living room and left there. Basically, the traditional

box radio was for family consumption, while the TR-63 could be carried around into

bedrooms, to the beach, into the parks and so forth. This portability made it less

available to others and more available to the owner. One cannot help but notice how

the new modern listening device—the MP3 player—has many of these same

169 Ibid., 139.
170 Ibid., 139.
attributes: it is small, it is light, it is stylish, it is affordable, it is portable, and it is

intimate.

Ibuka wanted something smaller than the TR-55. In March 1957, Sony
released the world’s first “pocketable” transistor radio, which sold 1.5 million
units and established Sony as the market leader. In truth, as the device was
slightly too large to fit inside a normal shirt pocket, Morita ordered custom-
made shirts with outsized pockets for his domestic and U.S. sales forces. By
1957, driven by the radio business, Sony’s revenues had grown to $2.5 million,
and the company employed 1,200 people.171

If we unpack the Sony Electronics story a little bit, we can also find a business model

here. Basically, Ibuka and Morita launched their own technology business through

enterprise and guile. They also built a global brand and they did so from scratch in a

small retail space. In the late 1970s, Steve Jobs and Steve Wozniak launched Apple Computer in much the same fashion. They started with off-the-shelf technologies,

built their own computer architecture and found a market for their device. In most

cases, their products were small, stylish and—relatively speaking—affordable.

Birth of Apple

In the 1970s, Steve Wozniak and Steve Jobs were a pair of technology geeks

living and working in and around San Francisco. From the outset, the pair of “Steves”

had very different skills: Wozniak was a tech wizard and Jobs had a natural skill for

salesmanship. In 1976, Wozniak built a homemade computer from scratch using old

electronics and a computer chip called the MOS Technology 6502, which he

discovered at the Wescon electronics show a year before.172 The guts of his invention

171 Nathan, John. Sony: The Extraordinary Story behind the People and the Products. London: HarperCollinsBusiness, 1999. 35.
were very simple: He used a standard QWERTY keyboard and a box frame, and wired it all to a standard television, which acted as the monitor. When Jobs saw the system, he

approached Wozniak with a plan to produce bare circuit boards, which they would sell

to computer hobbyists for $50 each. Wozniak agreed, and they built a prototype.173

Searching for customers, Jobs took the circuit board prototype to the Byte Shop, which

may have been the first computer retailer in the country, and asked the owner if he

wanted to stock the circuit boards. Seeing the potential, the owner ordered 50 units but asked that Jobs and Wozniak deliver complete computers instead of just circuit boards. Jobs agreed, and a $25,000 order was placed. This was a key moment

in the development of the personal computer and the unofficial birth of Apple

Computer.174

Armed with the success of this first deal, Jobs began scaling up, borrowing

more money and building lines of credit with suppliers; Wozniak worked to improve

upon the computing technology. A year later, the Apple II was created. For software,

Jobs contacted Microsoft and bought the licensing rights to put BASIC in all of the

new Apple machines. Although the machines cost over $1,000 each, they found wild

success in the marketplace.175

By 1983, Apple had matured into a full-on computing company. It had passively allowed the Apple II to die while it placed emphasis on its new Macintosh line. Angered by the transition, Wozniak quit the company and Jobs

172 Linzmayer, Owen W. Apple Confidential 2.0: The Definitive History of the World's Most Colorful Company. San Francisco, CA: No Starch Press, 2008. 4.
173 Ibid., 1-26.
174 Ibid., 7.
175 Ibid., 1-26.
ascended to a leadership role. It was from this position that Jobs set the direction of the Apple product line and, more importantly, determined that the company would actively discourage third-party tech firms from developing software and hardware for Apple computers. It was over this and other troubles with Steve Jobs that the Apple board ultimately removed him from any management position and handed control to an executive from Pepsi-Cola named John Sculley;176 Jobs ultimately quit to form NeXT Inc. That was in 1985.177

By this point, the personal computer era was well on its way. IBM had also

designed the architecture for a personal computer and then released the hardware

schematics for its computer to the public. The idea here was to encourage third-party

tech companies to write companion software for the system.178

What happened next, no one expected. Searching for a core operating system

for its computer, IBM contracted with Bill Gates and his software startup Microsoft

for its BASIC program, and later for its MS-DOS operating system.179 This decision

empowered Microsoft to take the lead as the central software company in the United

States.

Gates also argued to IBM that since Microsoft was at risk for potentially
having its operating system on the PC replaced by a competitor’s, Microsoft
should be free to sell its operating system to other hardware manufacturers.
IBM bought the argument and opened the door for clones. Gates was acutely
aware of the experience of the Altair with its open architecture, which quickly

176 Writer's note: The author is not related to John Sculley.
177 Linzmayer, Owen W. Apple Confidential 2.0: The Definitive History of the World's Most Colorful Company. San Francisco, CA: No Starch Press, 2004. 154.
178 Carlton, Jim. Apple: The Inside Story of Intrigue, Egomania, and Business Blunders. New York: Times Business/Random House, 1997.
179 Ibid.
led to clones. The open architecture of the PC meant that third parties could
also clone the IBM PC’s hardware.180

As a result, because every computer needs a basic operating system, MS-DOS became the platform on which all other software would rise. To gain access to any PC running the MS-DOS operating system, third-party software companies had to pay licensing fees to Microsoft. By 1995, Microsoft was the leading software company in the country and IBM had ceded most of its personal computing market share to a legion of "clone" PC companies; so, if you were using a personal computer in the United States, chances are you were working on MS-DOS.181

One of the first clone makers was also one of the most successful. Compaq
Computers was founded in 1982 and quickly produced a portable computer
that was also an IBM PC clone. When the company began to sell their portable
computers, their first year set a business record when they sold 53,000
computers for $111 million in revenue. Compaq moved on to building desktop
IBM PC clones and continued to set business records. By 1988, Compaq was
selling more than $1 billion of computers a year. The efforts of Compaq, Dell,
Gateway, Toshiba, and other clone makers continually drove down IBM’s
market share during the 1980s and 1990s. The clone makers produced cheaper
microcomputers with more power and features than the less nimble IBM
could.182

As Microsoft began to dominate the software industry, Apple—with its guarded

architecture—failed to compete. CEO John Sculley attempted to open the Apple

platform up to third-party software and hardware companies, but Microsoft’s near-

monopoly over computer operating systems equated to a near-monopoly over the

computer-tech industry and that stalled Apple’s rise. By 1994, things at Apple were

180 Swedin, Eric Gottfrid, and David L. Ferro. Computers: The Life Story of a Technology. Baltimore: Johns Hopkins UP, 2007. 101.
181 Carlton, Jim. Apple: The Inside Story of Intrigue, Egomania, and Business Blunders. New York: Times Business/Random House, 1997.
182 Swedin, Eric Gottfrid, and David L. Ferro. Computers: The Life Story of a Technology. Baltimore: Johns Hopkins UP, 2007. 101.
dire; John Sculley was fired, and the new CEO, Gil Amelio, began laying off employees. In an effort to revitalize the company, Amelio bought Steve Jobs' venture, NeXT, because of its superior operating system, and he hired Jobs to act as a consultant. In 1997, the executive board, angered by years of losses and a record-low stock price, fired Amelio and replaced him with Jobs. Jobs immediately took to reinventing the company.183

During the next five years, Jobs returned Apple to profitability. He elevated the

stock price and he redefined the consumer electronics industry with new gadgets and

services. In 2001, he opened the first Apple retail stores and six months later, unveiled

the first iPod, a portable MP3 player. In 2003, Jobs introduced Apple’s iTunes Store,

an online content retailer that cross-integrated with the iPod, and suddenly, Apple was

a leader again.184

Beyond the Paradigm

Steve Jobs’ success as an entrepreneur and technology innovator seems like a

long shot. Clearly, he wasn’t received very well in Silicon Valley. His background, his

lack of formal education and his personality were factors. Jobs was brash, charismatic,

inventive and brazen; he was also wildly competitive, prone to fits of jealousy and

rage, and capable of acts of pettiness. He also had a natural gift for salesmanship. In short, he was the perfect man to turn the global computing industry

on its head.

183 Linzmayer, Owen W. Apple Confidential 2.0: The Definitive History of the World's Most Colorful Company. San Francisco, CA: No Starch Press, 2004. 300-303.
184 Ibid.
At 20, Jobs was very young when he got involved with Steve Wozniak. At that

time, most of the Silicon Valley industry was focused on defense and corporate

technologies. The nation’s leading companies included firms such as AT&T, Eastman

Kodak, Fairchild Semiconductor, General Electric, Hewlett-Packard, IBM, Motorola,

Polaroid, Rand, RCA, Texas Instruments, XEROX, and a host of others. At IBM, for

example, the culture was so buttoned down that workers were told they could only

wear blue, brown or gray suits; brown or black shoes; and blue, white or yellow shirts.

This was certainly not the kind of environment that would attract a guy like Steve

Jobs.

Basically, Jobs was born to lead a paradigm revolution in personal computing.

As Thomas Kuhn described in his book The Structure of Scientific Revolutions, often

it was the outsider who upset the traditions of a given scientific paradigm. Again, Jobs

was the perfect outsider: he had no college degree, he wasn’t from a distinguished

scientific family and he had no ties to the leaders of the Silicon Valley tech

movement.185 He was like Einstein, a Jew in a Christian world, who worked as a

patent clerk but found time to transform the physical sciences. Jobs did brazenly

contact William Hewlett, but nothing really came of that contact.186 So Jobs really was a true technology innovator, one who demanded simplicity of design and usability in all the Apple products, and his influence endures.

185 Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1970.
186 Linzmayer, Owen W. Apple Confidential 2.0: The Definitive History of the World's Most Colorful Company. San Francisco, CA: No Starch Press, 2004. 1.
iPod Culture

When Apple revealed its first iPod digital music player in 2001, it changed the way people listened to and stored music. At the time, the iPod was little more than a small data storage device that could house hundreds of digital music files, known as MP3 files. The design of the first unit was very simple: it was a sleek stainless steel unit with a small viewing screen and a headphone jack. What made the iPod curious

was the simplified interface: On the front of the unit, there was a single button, which

was surrounded by a circular click wheel. To operate the unit, one pushed the central

button and then dragged their thumb around the touch wheel to scroll through songs,

to raise and lower volume, and to pause or play music.

In his interview with the designer of the iPod, Rob Walker seemed almost

astounded with its simplicity:

The surface of the iPod, white on front and stainless steel behind, is perfectly
seamless. It's close to impenetrable. You hook it up to a computer with iTunes,
and whatever music you have collected there flows (incredibly fast, thanks to
that FireWire cable) into the iPod—again, seamless. Once it's in there, the
surface of the iPod is not likely to cause problems for the user, because there's
almost nothing on it. Just that wheel, one button in the center, and four beneath
the device's LCD screen. (The look, with the big circle at the bottom, is
reminiscent of a tiny stereo speaker.)187

At the time, the product was just two years old but it had already redefined American culture. The iconic white headphone cords, for example, became a symbol of the new cool in America, and everyone was attaching a lowercase letter "i" to any word they wanted to associate with the hipness of the Apple culture.

Like the Sony TR-63 transistor radio, which came 50 years before, the iPod

was small, it was portable, it was sleek, it was easy to use and it was affordable. What

187
Walker, Rob. "The Guts of a New Machine." The New York Times 30 Nov. 2003: n. pag.

it did lack, however, was a seamless infrastructure. The TR-63 had the global AM

radio network; the iPod was really just an external hard drive, which had to be attached to a computer, and the user had to move songs from the computer to the iPod. To get songs, the user had to rip sound files from CDs to the hard drive of the computer and then transfer them to the iPod. Something was missing… and Apple

was working on a solution.

What came next was the iTunes Store, Apple's digital music retailer. To access it, users

had to download free software to their computers and, once activated, the software

networked into the iTunes retail site online. From here, users could download music

files at 99 cents per file.188 Now, it was possible to network into a huge storehouse of music and download at will. The iTunes + iPod matrix would change everything. In his

book Smart Things: Ubiquitous Computing User Experience Design, author Mike

Kuniavsky wrote about Apple:

Two events provided the final pieces to the iPod’s early success: the port of
iTunes to Microsoft Windows and the iTunes Music Store, both in 2003. The
resulting system offered a single user experience of digital music from the
point of music discovery to purchase, organization, management, consumption
and sharing. 189

With iTunes, the Internet shopping experience became seamless, easy and ever

present; and that development transformed the entire retail sector.

Future of Digital Commerce

Retail must have been born about the same time humans began

domesticating livestock and producing crops. With these developments came the

188
Ibid.
189
Kuniavsky, Mike. Smart Things: Ubiquitous Computing User Experience Design.
Amsterdam: Morgan Kaufmann Publisher, 2010. 121.

reality of surplus—or overabundance—which certainly led to sharing or, more importantly, bartering. The need for accounting created the technology of writing, and from there came numbers, words, grammar and bureaucracy. The technology of writing dates back to roughly 3200 BC and began in the Fertile Crescent region of the ancient Near East.190

Retail was a likely offshoot of bartering, or the trading of two things of equal

value. With the invention of money, barter became retail. Because retail was done

face-to-face, retail centers were formed at centralized locations and trading districts

sprang up from there. It wasn't until the invention of a dedicated postal system that

individual commerce could be done over a long distance. The telegraph only further

dissociated the relationship of face-to-face retailing, as did the telephone.

Marshall McLuhan argued that the telegraph made the entire world accessible, meaning that anyone could speak to any other person provided each had access to the

telegraph. Again, McLuhan called the telegraph “the social hormone of the world” and

he warned that the move towards electronic communication threatened to destroy the

technology of the literate world. It also had the added effect of destroying the idea of

space. Now, through the telegraph, two people could communicate with each other

even if they were located hundreds of miles apart.

When a group of Oxford undergraduates heard that Rudyard Kipling received


ten shillings for every word he wrote, they sent him ten shillings by telegram
during their meeting: “Please send us one of your very best words.” Back came
the word a few minutes later: “Thanks.”191

Was this digital commerce? It was certainly an early example of its potential.

190
McClellan, James E., and Harold Dorn. "Pharaohs and Engineers." Science and Technology
in World History: An Introduction. Baltimore, MD: Johns Hopkins UP, 1999.
191
McLuhan, Marshall. Understanding Media: The Extensions of Man. N.p.: n.p., n.d. 340-342.

The commercialization of the Internet escalated globalism even further. In 1996, Bill Clinton signed the Telecommunications Act into law. The law updated the existing communication standards; it also designated which companies

could offer cable television and telephone services; and most importantly, this law

commercialized a government-owned network called the Internet.

McLuhan argued that the telegraph was an extension of the human nervous

system; one might argue that the Internet was the fusion of humanity into one thinking

organism. The Internet certainly took the middleman out of the equation. To send a

telegram, one had to visit a physical office and hand the text of the message over to an

operator who typed it into a system that transmitted it and then transcribed it into text

on paper; with the telephone, the user had to dial into a network and reach the

“receiver” of the call in real time: for telephone conversations to work, both parties

had to be engaged with the process at the same time; and then came the Internet. With

the birth of email, a sender could explain in great detail issues on a specific topic or

topics and then send that message to a receiver. On the receiving end, the receiver did

not have to be present and, in fact, the message would be stored and waiting for the

receiver in a digital mailbox.
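
To make the contrast concrete, the store-and-forward principle behind email can be sketched in a few lines of Swift. This is only my own illustration of the idea, not a model of any real mail protocol:

    import Foundation

    // A minimal sketch of store-and-forward messaging: the sender deposits a
    // message and moves on; the receiver need not be present, because the
    // mailbox holds the message until the receiver checks in.
    struct Message {
        let from: String
        let body: String
    }

    final class Mailbox {
        private var stored: [Message] = []

        // The sender's side: drop off the message and go.
        func deliver(_ message: Message) {
            stored.append(message)
        }

        // The receiver's side: read at leisure, hours or days later.
        func checkMail() -> [Message] {
            defer { stored.removeAll() }
            return stored
        }
    }

    let box = Mailbox()
    box.deliver(Message(from: "sender@example.com", body: "No rush; reply whenever."))
    // ...much later, the receiver logs in:
    for note in box.checkMail() {
        print("\(note.from): \(note.body)")
    }

The telephone, by contrast, has no such mailbox: both parties must hold the line at once.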

The Internet is now in the process of changing everything: the way we

communicate; the way we bank; the way we shop; the way we make reservations; the

way we pay our taxes; the way we pay our bills… and so forth. It’s also becoming a

storehouse of human thought. In his book Glut: Mastering Information Through the

Ages, author Alex Wright argues that the Internet is becoming a storage space for

collective human thought and that this information—while fleeting in the human

experience—is lasting in the realm of the digital. He likened the Internet to a beehive,

where the life of an idea can last much longer—twice the lifespan of a honeybee—and

be sustained in the public consciousness of the hive. The Internet, he argues, is our

collective thought.192

Shopping tools such as iTunes are making our lives much easier.

In 2003, iTunes became a key conduit for dissociated real-time retail. It was possible

for a customer to log on to the site, shop for music and, with a few clicks, commence

downloading media and consuming it. Again, the price was 99 cents per song and

upward of $10 per album.

In his book Everything is Miscellaneous, David Weinberger argued that the

way consumers experience music has changed substantially. When album music—or pre-packaged music—arrived in the retail community in the mid-1960s, consumers

began buying entire collections of songs. Album music included 10 or more songs,

which were loosely associated to one another; but, traditionally, there would be brief

pauses in sound between each track. With iTunes, it was now possible to simply buy

individual songs. So, instead of buying the Beatles' entire "White Album," it was

possible to simply buy one or two tracks from the record.

Bundling songs into long-playing albums lowered the production, marketing


and distribution costs because there were fewer records to make, ship, shelve,
categorize, alphabetize, and inventory. As soon as music went digital, we
learned that the natural unit of music is the track. Thus was iTunes born, a
miscellaneous pile of 3.5 million songs from over a thousand record labels.193

192
Wright, Alex. Glut: Mastering Information through the Ages. Washington, D.C.: Joseph
Henry Press, 2007. 13-14.
193
Weinberger, David. Everything Is Miscellaneous: The Power of the New Digital Disorder.
New York: Times, 2007. 9.

Since the iPod was first revealed in 2001, Apple has sold over 350 million units (as of

September 2012).194 What’s more astounding is the volume of business the iTunes site

did. On February 25, 2010, iTunes sold its 10 billionth song to a senior citizen in

Woodstock, Georgia. The song? “Guess Things Happen that Way,” by Johnny Cash.

Three years later, iTunes logged the sale of its 25 billionth song.195 Today, Apple is

one of the top three Internet retailers globally. In 2012, it had $8.8 billion in retail

sales, making it third behind Staples and Amazon.196

Given the success of the iPod and its iTunes network, Apple moved quickly

into cellular telephones.

The iPhone Revolution

In June 2007, Apple released its first cellular telephone, the iPhone, and the

“smart phone” movement was launched… and it was a long time in the making.

The idea of cellular telephone communications dated back to the late 1960s: In a race with AT&T, Motorola was the first to develop a portable telephone device, back in 1973. The phone was the size of a brick, it weighed 28 ounces, it had a battery life of one hour and it cost $3,995; and it only worked in

Baltimore or Washington DC.197 Its inventor was an engineer named Martin Cooper,

who said his inspiration came from the television series, Star Trek.

194
"Total Number of IPods Sold All-Time." About.com IPhone / IPod. N.p., n.d. Web. 14 Apr.
2014.
195
Owens, Jeremy. "Apple Celebrates 25 Billion ITunes Song Downloads with Gift to
Downloader." MercuryNews.com. N.p., n.d. Web. 14 Apr. 2014.
196
"The Long Haul." Industry Statistics. N.p., n.d. Web. 14 Apr. 2014.

“Suddenly, there was Captain Kirk talking on his communicator. Talking!

With no dialing! That was not a fantasy to us… to me, that was an objective,” said

Martin Cooper.198 Cooper argued that because human beings are inherently mobile,

the idea of creating a mobile communication device seemed natural.

The development of the modern cellphone actually had two major components

to it: first, there needed to be a cellular network; second, there needed to be a viable,

affordable communication device, or phone.

In the 1980s, the Federal Communications Commission began designating

strips of radio spectrum for the cellular device industry. To dole out licenses, it

actually started by issuing lottery tickets; anyone could buy in; and if the FCC pulled

your ticket, you won the option to take a controlling interest in the radio spectrum

inside a designated media market. In doing so, the FCC hoped to allow commercial

interests to build out the cellular telephone infrastructure, and, in an effort to foster

competition, it issued two competing licenses per media market. So, when lottery

tickets were selected for the Baltimore-Washington LATA network, two licenses were

actually granted. These heavily regulated duopolies existed until 1994, when the FCC decided to open all the markets up to third- and fourth-party vendors. Two

years later, the Telecommunications Act of 1996 opened the door for the Regional

Bell Operating Companies, or RBOCs, to enter the market; that law also determined

that all participating cellular companies had to reach 90 percent of their customer base

inside that established market by 2006. After much wrangling, many mergers, LATA

197
Fuller, R. Buckminster. Inventors and Inventions. New York: Marshall Cavendish, 2008.
359.
198
"How William Shatner Changed the World - Martin Cooper, Mobile Phone Inventor."
YouTube. YouTube, 08 Dec. 2009. Web. 14 Apr. 2014.

exchanges and other horse trading, the cellular telephone market boiled down to four

dominant cellular service providers; those include AT&T, Sprint, T-Mobile and

Verizon.199

At the same time, hardware makers began manufacturing cellphones. For the

most part, the first devices were only good for telephone calls. Most were large—

about the size of a traditional house phone—with big clunky buttons, a liquid-crystal screen that showed the phone number and an antenna. Over time, as the hardware

technology—chips and so forth—improved, the phones got smaller and lighter, the

buttons got easier to use, and the liquid-crystal screens became larger. The prices for these

units also lowered as companies began offering more and more services. By the early

2000s, it was possible to purchase a cellphone that could do email, web searches, and

some mapping; by 2005, it was possible to get a cellphone that offered subscription

television and some music.

When a customer purchased a phone, he/she also had to sign a user contract

that dictated a finite amount of talk time and data use and came with two- and three-year term agreements. In exchange, the companies would offer discounts on the

hardware.

In 2007, when Apple launched the iPhone (the iPhone 1), users had to subscribe

with AT&T. At the time, nothing on the market even remotely resembled the Apple

device. This phone was relatively small, it was light and, like the iPod, it was a

seamless piece of stainless steel with a large glass front and it only had one button on

its front surface. To activate the device, the user simply pressed the button, and the

199
International Engineering Consortium, ed. "Carrier IP Telephony 2000." Alibris. Accessed March 04, 2017. http://www.alibris.com/Carrier-IP-Telephony-2000/book/10367423.

screen would light up. From there, the user would just touch the screen, pushing

buttons and sliding digital levers. In addition to telephony, the iPhone also worked as

an MP3 player, a clock, a calendar, a camera, a mapping device and web browser.

Again, nothing on the market was remotely like it. Further, given the touch-screen

surface, there were no other moving parts on the unit; instead, users could type out

emails using a touch-screen digital keyboard. Finally, the unit also had a gravitational sensor, an accelerometer, that allowed it to detect when the phone was being held horizontally or vertically.
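
That orientation sensing can be approximated in a few lines of Swift using Apple's present-day Core Motion framework. The sketch below only illustrates the principle, reading the pull of gravity along the device's axes; it is not a description of the original iPhone's internal code:

    import CoreMotion

    // Infer "horizontal" versus "vertical" from the accelerometer, which
    // reports the pull of gravity along the device's x and y axes.
    let motion = CMMotionManager()
    motion.accelerometerUpdateInterval = 0.1  // sample ten times per second

    motion.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        // Held upright, gravity pulls mostly along the y axis;
        // held on its side, it pulls mostly along the x axis.
        let held = abs(a.y) > abs(a.x) ? "vertically" : "horizontally"
        print("The phone is being held \(held)")
    }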

There was a lot of commercial hype ahead of the iPhone’s release, and many

tech writers took shots at the phone, but David Pogue, the tech writer for The New

York Times, offered this review:

As it turns out, much of the hype and some of the criticisms are justified. The
iPhone is revolutionary; it’s flawed. It’s substance; it’s style. It does things no
phone has ever done before; it lacks features found even on the most basic
phones. Unless you’ve been in a sensory-deprivation tank for six months, you
already know what the iPhone is: a tiny, gorgeous hand-held computer whose
screen is a slab of touch-sensitive glass. The $500 and $600 models have 4 and
8 gigabytes of storage, respectively — room for about 825 or 1,825 songs. (In
each case, 700 megabytes is occupied by the phone’s software.) That’s a lot of
money; then again, the price includes a cellphone, video iPod, e-mail terminal,
Web browser, camera, alarm clock, Palm-type organizer and one heck of a
status symbol. The phone is so sleek and thin, it makes Treos and BlackBerrys
look obese. The glass gets smudgy — a sleeve wipes it clean — but it doesn’t
scratch easily. I’ve walked around with an iPhone in my pocket for two weeks,
naked and unprotected (the iPhone, that is, not me), and there’s not a mark on
it. But the bigger achievement is the software. It’s fast, beautiful, menu-free,
and dead simple to operate. You can’t get lost, because the solitary physical
button below the screen always opens the Home page, arrayed with icons for
the iPhone’s 16 functions.200

Pogue does point out that there are flaws in the initial design and warns that the

biggest downside was the AT&T cellular network, which was (at the time) anemic
200
Pogue, David. "Matching the Ballyhoo, Mostly." The New York Times. The New York
Times, 26 June 2007. Web. 14 Apr. 2014.

compared to its competitor Verizon. That aside, he certainly loved the device and he

didn’t even touch on the potential growth.

Like the iPod, the iPhone could be networked through iTunes into the store.

Suddenly, the iPhone was part of a larger digital enterprise. And, following Sony’s

TR-63 transistor radio model, the iPhone was small, it was sleek, it was user-friendly

and it associated itself with not one but two established networks: the AT&T cellular

network and, in time, the Internet.

The iPad Arrives

The first iPad was revealed at an Apple conference in January 2010 and

released to the public in April that same year. To many critics, the iPad looked just

like an enlarged iPhone minus the telephony. The hardware was stainless steel with a

glossy glass surface and a single button at the bottom of the front. Dimensionally, it

was roughly half the size of a MacBook Pro and twice the size of an iPhone and it was

this shape and size that made its functionality unique. Because it was bigger than the iPhone, it allowed the user to do more things with it; one example would be word processing. And because it had touch-screen technology, the user worked with it much differently than with the MacBook Pro.

Now, the laptop does offer more versatility: it has a traditional-size keyboard,

more memory, a larger visual working space, and the potential to multitask. But, when

most people merely use their computer to search the Internet, check email and make

online purchases, the iPad begins to make more sense simply because it does all of

these things and the touch-screen technology makes the interface experience more

friendly.

Portability is another benefit. Given its size and weight, the user can place it in

a backpack, briefcase or book bag and tote it around. Laptop computers can be

handled the same way but they tend to be heavier, bulkier and, given transient use, not

entirely necessary in the field. Finally, Apple was working to create a software

solution that would open its software architecture to third-party vendors.

The Third Screen

The cellphone and the tablet computer are known as “third screen”

technologies. The idea runs this way: Television was the first screen, the

computer was the second screen, the cellphone and the tablet computer are the third

screen. Each of these technologies transformed the way we consume media: The

television brought information into our living rooms; the personal computer brought

information into our offices; and the cellphone allowed us to have information wherever we go.201

In his book on the subject, author Chuck Martin tells us that the third screen is

transforming the book experience:

Each new medium is generally adopted in two phases. In the first phase,
traditions and practices of the previous medium are transferred to the new
medium. For example, a local newspaper makes its online pages look the same
as its print edition or a half-hour TV episode is posted online.

201
Martin, Chuck. The Third Screen: The Ultimate Guide to Mobile Marketing. Boston:
Brealey, 2015.

In the second phase of a medium’s transformation, companies experiment with
and exploit the characteristics of the new medium. They reinvent the way
things are done, reinterpreting them for the new medium.202

Martin’s observations affirm that the transition towards a new norm establishes a

baseline and a standard for future forms on this platform. In other words, as typographical protocols are formed, a standard emerges for this new medium. During the

last decade, many protocols have been established for third-screen technologies. In

fact, inside the Apple Application store, one will find very different Apps crafted by

the same companies for the iPhone and the iPad, which suggests that the screen sizes

of these respective consumer devices will dictate different user experiences. That

aside, given the ubiquity of the cellphone, many of our digital processes are moving to

these portable devices and, given the advent of Apple’s iTunes store, many of our

digital needs have been converted into downloadable programs. As of June 2016,

Apple had over 2 million Apps on its iTunes store.203

One of the first breakthrough third-screen technologies to move readers away

from the book and towards the eReader was Amazon’s Kindle. Launched in 2007, the

Kindle was small, portable, affordable, lightweight, and networked to a database that

offered millions of book titles. But the critics lined up as quickly as the Kindle found

its way into the hands of readers. Author Ezra Klein took aim at the Kindle

immediately:

The Kindle, here, is no better than the traditional book, and is in fact a bit
worse. The Kindle’s screen, though a remarkably impressive technology, is a
soft gray, and lacks the contrast of a book’s sharp, white pages. Moreover,
there’s an added risk to using the Kindle: if you drop a book, it doesn’t break.

202
Ibid.
203
"App Stores: Number of Apps in Leading App Stores 2016." Statista. Accessed March 05,
2017. https://www.statista.com/statistics/276623/number-of-apps-available-in-leading-app-stores/.

If you drop your Kindle, your heart catches in your throat till you examine the
damage. If you drop it in the bathtub, you’re out $400. The reading
experience—in this case, enjoyable—would be better served by investing in a
comfortable chair.204

Klein goes on to explain that the Kindle’s failing is the fact that it “tries to compete

too directly with paper. It attempts to electronically mimic the experience of reading a

book.”205 Beyond that, the Kindle does offer some technological nuance including the

ability to change the text size, data search for passages, add digital notes and so forth.

The success of the Kindle is the fact that it is possible to reach the transcendental state of "deep reading" with this device, which is a break from our experience of reading from a computer screen.

Returning again to theorist Robert Logan’s idea about “Digital Orality,” the

Kindle is the perfect incarnation of this idea: Logan believed that the Digital Orality

was about broadcasting the written word, which is the very essence of the Kindle’s

functionality and there were some benefits to be had. The digital book offers a swifter

connection between the author and the audience. One doesn’t need to visit a physical

bookstore to buy a new book; the Kindle is wired to a network that delivers new books

electronically.

And Ezra Klein notes that authors writing in the traditional publishing format

must go through a process that can take months as editors and designers work to finish

the printed version, while a digital version can be published and summarily updated as more facts develop related to the subject matter.206 He also notes that reading,

204
Klein, Ezra. "The Future of Reading." Columbia Journalism Review. 2008. Accessed
March 07, 2017. http://archives.cjr.org/cover_story/the_future_of_reading.php.
205
Ibid.

traditionally, is a solitary matter, while electronic publishing allows for social

interaction between readers through the use of social media and comments sections.

He writes: “How much easier a dense work of philosophy would be if we could

communicate with others struggling through the same chapters, and even be helped

along by the author. Indeed, once we were open to the idea, much of what we do with

books could be dragged into the public sphere.”207

Over the last decade, the Kindle has been improved upon and the device has

transformed from a mere electronic-reading device into more of a tablet-

like computer. It runs on an Amazon operating system and has the ability to host email

software programs and other companion media. But, because Amazon is more of a

retail company than a consumer electronics company, its latest offering—the Kindle

Fire HD—has been described as lackluster, at best.208

In 2010, Apple Computer introduced its first iPad and the world of tablet

computers took a turn. Now, I’ve written about the iPad earlier, but it bears noting that

when Apple released this piece of consumer electronics, it was not the first of its kind

in the market. The Kindle was already three years on the market and Microsoft and

other companies had released their own versions of lightweight, portable mobile tablet

computers. The thing that made the iPad special was the fact that the hardware, which

was just over a centimeter thick and featured a touch-screen glass technology, came

loaded with a user-friendly operating system and the ability to network with the

iTunes media store. At $500, the first iPad was expensive but it performed nearly as

206
Ibid.
207
Ibid.
208
Ibid.

well as the iPhone, and its larger digital surface allowed the user to do more things with it.

From the moment the iPad came out, the critics assailed it, describing it as an

enlarged iPhone or as an unnecessary portable computer without the benefits of a

traditional laptop. PC Magazine’s John Dvorak wrote this:

I was hoping that some new paradigm would arise, so this device doesn’t
become yet another flop (albeit a pretty flop). But it’s not much prettier, and I
cannot see it escaping the tablet computer dead zone any time soon. It’s a black
hole that was created 20 years ago with machines like the Momenta. It was
reignited by Bill Gates some years back, when he predicted that these things
would rule the roost by 2003, or some such thing.209

He went on to say that no one would buy the product.

Since the iPad’s release in 2010, Apple has produced six versions of it and

has sold roughly 300 million units to date and, today, the iPad holds 60 percent of the

tablet computer market.210 The critics continue to predict the demise of the iPad and

use every quarterly earnings report to retread the idea that tablet computers were

merely a fad technology. And yet, the iPad lingers in the marketplace.

The interesting thing about computers generally and the tablet computer

specifically is the fact that these consumer electronics are a conduit for a variety of

purposes. On the iPad, the user can read news, shoot photographs, listen to music,

watch television and films, read books, produce media, write emails, text message,

make voice and video conference calls, manage banking accounts, pay bills, shop

online, conduct browser searches and play video games. Compared to all the consumer

209
Dvorak, John C. "Apple's Good for Nothing IPad." PCMAG. February 02, 2010. Accessed
March 07, 2017. http://www.pcmag.com/article2/0,2817,2358684,00.asp.
210
"Apple: IPad Sales 2010-2017." Statista. Accessed March 07, 2017.
https://www.statista.com/statistics/269915/global-apple-ipad-sales-since-q3-2010/.

electronic devices that came before, the iPad is a fusion product that unites all of the earlier media tools. Basically, the iPad is a multiple-media device,

although it’s just not yet recognized as such.

It’s also a gateway consumer electronic device leading us towards what may

soon be the next phase of media communication and storytelling. Right now, that

budding technology is in its early stages, but augmented reality and virtual reality may be dominant communication forms by the end of the decade.

Multimedia Experiments for the Third Screen

As legacy news organizations including The Washington Post and The New York

Times experimented with online storytelling, other publishing houses have been

exploring the use of application software or apps with varying degrees of success.

In 2010, The New Yorker went live with its own version of an iPad-only publication. To do this, the magazine took the content from its existing print publication and then added digital enhancements. Experiments included having authors and poets read portions of their work and then making those audio tracks available alongside the text.

The New Yorker app also offers video and multimedia slide shows and animated cover

art. Because of the potency of this legacy media group, this app has been widely

successful and The New Yorker developers keep updating and improving the app.

Today, it remains the standard for magazine-style multimedia storytelling.211

The following year, Rupert Murdoch's News Corp. launched an iPad-only

newspaper called The Daily. The publication was one of the first of its kind and

211
The New Yorker. "Jason Schwartzman Introduces The New Yorker IPad App." The New
Yorker. August 13, 2014. Accessed March 08, 2017. http://www.newyorker.com/news/news-
desk/jason-schwartzman-introduces-the-new-yorker-ipad-app.

entirely electronic. To read it, users had to download the app from iTunes and pay the

subscription rate. From there, new editions of the electronic publication would appear

daily. Now, the publication did read like USA Today, which is considered by many professional journalists to be the bottom rung for in-depth reporting. However, the staff

of The Daily did experiment with multimedia stories and would commingle video and

text throughout the publication. It even explored using video as the cover art for some

editions. A year after it was launched, News Corp. flagged the publication as

unprofitable and announced that it was losing $30 million annually. In 2012, News

Corp. killed the experiment.212 213 Odd as this might seem to say, The Daily was a

powerful first digital-only option and it’s a shame that News Corp. decided to abandon

it. The trouble might be that it came too soon in the arc of digital publishing.

In 2016, The Boston Globe, one of the nation’s oldest legacy newspapers,

launched an iPad app of its own. After the user downloads the software, she merely

clicks on it and the software reveals a PDF file, or a picture file, of the current issue of the

newspaper. If you are looking for a wholesale lack of imagination, this approach to

digital storytelling is the epitome of it. When I first encountered the application,

I found it to be akin to asking a friend to lunch and then faxing him a steak sandwich.

Looking to Marshall McLuhan for answers, all one can say is that The Boston Globe

never considered the idea that the content for their print edition might have to take a

new form when transmitted digitally.

212
Webster, Andrew. "The Daily Reportedly Put 'on Watch' as News Corp. Looks to Cut
Costs." The Verge. July 12, 2012. Accessed March 08, 2017.
http://www.theverge.com/2012/7/12/3155678/the-daily-on-watch-cost-cutting.
213
Ingraham, Nathan. "First IPad-only Newspaper 'The Daily' Shutting down on December
15th (update)." The Verge. December 03, 2012. Accessed March 08, 2017.
http://www.theverge.com/2012/12/3/3721544/the-daily-ipad-news-mag-shutdown-december-15th.

iTunes App Store

As I wrote earlier, with the introduction of the iPhone, Apple amended the

iTunes store to include third-party applications or Apps. With this digital tool,

software engineers outside of Apple could write their own code and

publish their software applications on the iTunes site. To facilitate the process, Apple

created something called the 'Software Development Kit,' or SDK, which helps these external engineers meet the standards of Apple's design protocols.214
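
To give a sense of scale, a complete, if trivial, third-party program built with the SDK can run only a few lines. The sketch below uses Apple's current SwiftUI framework (the original 2008 SDK used Objective-C), and the app name and its message are hypothetical placeholders:

    import SwiftUI

    // A minimal third-party app: one screen, one label.
    @main
    struct HelloApp: App {
        var body: some Scene {
            WindowGroup {
                Text("Hello from an outside developer")
                    .padding()
            }
        }
    }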

The interesting thing here is the fact that Apple is a notoriously secretive company and its initial obsession with securing its technology was what ceded market share

to IBM and Microsoft and Intel in the 1980s and 1990s. Back at the beginning of

Apple’s Macintosh evolution, the company decided that it would limit access to its

complex operating-system source code, a decision that made it difficult for anyone other than

Apple engineers to write software for Apple computers. On the other side of the

argument were Bill Gates and Microsoft, who shared their operating system's integration technologies with third-party software companies. As a result, computers

running on the Microsoft operating system—called MS-DOS—had more software

choices; users with these “personal computers” had a whole host of software programs

that could be downloaded and used on the computer. Given that, and the expense of

Apple products, the PC companies began dominating the home computing market. By

1995, PCs controlled over 90 percent of the consumer market.215

214
Inc., Apple. "Resources - IOS - Apple Developer." Resources - IOS - Apple Developer.
Accessed March 07, 2017. https://developer.apple.com/ios/resources/.

On the verge of collapse in 1997, Apple needed a change, and it turned to its

founder, Steve Jobs, who had been ousted years before, and brought him back into the

company. It was under Jobs that Apple made its miraculous turnaround. Jobs was instrumental in the development of the iPod, the iPhone, the iPad and the Apple Watch.

He also orchestrated the creation of iTunes and its cyclical revenue stream. Opening

the iTunes store to third-party applications demonstrated a near reversal in the Apple

software sharing protocol and that decision has translated into a financial boon for the

company.216

As of 2016, Apple’s App Store had over 2.2 million Apps and the store

generated $28 billion in sales during that same year.217 That volume of digital content

also translates into variety. There are apps for all sorts of things ranging from the very useful to the odd and silly. That variety also allows iPad owners to host a complex array of applications on their devices, making each device different from every other.

iBook Author

In 2012, Apple released a new electronic publication editing software program

called iBook Author, which allows anyone to create an interactive book. Apple also

released an updated version of iBooks, which is the company’s digital bookstore and

215
Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass: MIT Press, 2003. 281-
306.
216
Linzmayer, Owen W. Apple Confidential 2.0: The Definitive History of the World's Most
Colorful Company. San Francisco, CA: No Starch Press, 2008. 289-306.
217
Dignan, Larry. " Apple's App Store 2016 Revenue Tops $28 Billion Mark, Developers Net
$20 Billion." ZDNet. January 05, 2017. Accessed March 07, 2017.
http://www.zdnet.com/article/apples-app-store-2016-revenue-tops-28-billion-mark-developers-net-20-
billion/.

library. The two programs are designed to work in tandem: the author creates the book

and publishes it online using iBook Author; and the reader (pays for and) downloads

the book using iBooks; both the editing software and the library/bookstore are free

Mac-friendly downloads. There are a few other Apple programs attached to this digital

matrix, but these two are the guts of the production-packaging-delivery experience.

As with any book project, the process of creating a multimedia book is rather

complex. Before opening the iBook Author software, the producer

(or production team) should assemble all the key components of the book: text,

photographs, audio, video, graphics and so forth. The program does allow producers to

write the text content, but the interface is stiff and imprecise; there are better writing

software programs with more sophisticated editing components. The producer must

also craft the video, photo and audio components before opening the iBook Author

software. So, the first step in the creation process is to assemble all the media.

Writing is the easy part. If the producer has writing skills, he/she should be

able to fashion content using word processing software. Photography is also a

relatively adaptable process too. Audio and video production, however, tend to be

complex and difficult. The process of learning video production, for example, can take

years simply because of the variables related to lighting, exposure, audio and

composition. Like writing, the skills for professional video production are hard earned.

However, once the components are created and assembled, the producer can move to

the iBook Author software.

The genius of this program is the ease of its use. When opened, iBook Author

offers a variety of templates designed for various content projects. Should the

producer decide to use a template, the program opens with dummy type for text and

headlines, placeholders for photographs and video and so forth. The program is a true WYSIWYG—what-you-see-is-what-you-get—drag-and-drop program.218 All the

producer has to do is copy and paste the text into the formatted pages and rewrite

headlines; there are spaces for video and audio components too; and given some

experimentation, a producer can learn the intricacies of the software in a few hours.

Once the book is created, the producer must push a “publish” button and the package

is translated into a publishable bit of programming, which is ultimately placed on

iTunes complete with publishing details and author credits.

Once online, the book can only be downloaded to a Mac-based operating

system, which means it can only be seen on iPads, iPhones and/or Macintosh

computers.219 This fact isolates the content from any PC computer or tablet.

The benefit of the iBook Author software is the fact that it is a simple, user-friendly

program that allows any producer to commingle multimedia in a common space. It is

entirely possible for a digital book to host 100 pages of text, several minutes of video and audio content, and scores of photographs. The cumulative work must contain less

than 2 gigabytes of data.220

The other benefit is packaging. Once the book is assembled and exported for

publication, it is “packaged” as a whole and complete product that cannot be corrupted

by changes in Internet software protocols. Why? Because the data is housed inside

218
Baig, Edward C., and Bob LeVitus. iPad for Dummies. Hoboken, NJ: John Wiley & Sons,
2015.
219
Ibid.
220
Ibid.

Apple’s iTunes store, it doesn’t live in the web-browser-based dominion of HTML

coding. Instead, the Internet merely becomes a place for marketing and a delivery

system for the work; a fact that preserves these projects from the threat of deletion or

digital corruption.

It’s also fairly easy to use. Author and Apple Distinguished Educator Monica

Burns had this to say in her blog:

In iBook Author, even teachers just scratching the surface of this powerful tool
can create an impressive product. This program includes a range of templates
allowing users to drag and drop text, images and videos on the page. As your
comfort level increases, you can record audio or add music, put review
questions at the end of a section, and place Keynote presentations for students
to access.221

Monica Burn’s assessment is fairly accurate.

Now, there are concerns about iBook Author. Specifically, because it’s trapped

inside the iTunes environment, only Mac users can read it. Also, the content itself can

look formulaic and stilted. Additionally, the software tends to favor text media, or

‘cold media,’ over the other ‘hot media.’ Finally, the finished work pairs video with text

but doesn’t conjoin them, which is the true essence of multimedia.

Augmented Reality/Virtual Reality

The concepts of virtual reality and augmented reality are about enhancing or

changing or “tricking” the conditions of human perspective. As human animals, we

are served by our five senses—sight, smell, sound, taste, touch—which are all

offshoots from the organic development of the human being. To measure our

environment, we rely on these senses to determine our proximity to objects, animals,

221
Burns, Monica. "5 Reasons to Try iBooks Author." Edutopia. January 27, 2014. Accessed
March 08, 2017. https://www.edutopia.org/blog/5-reasons-try-ibooks-author-monica-burns.

shelters, the environment and other physical conditions; these senses were certainly

developed for survival purposes but, as the human animal evolved and learned to

control its environment, these senses took on other properties including recreational

uses. To the pre-modern man, the sense of taste, for example, was developed as a

source for self-preservation: poisonous foods have bitter or harsh flavors and textures.

Modern man has learned to enjoy the sense of taste by mixing and matching various

flavors and textures through advanced culinary preparation techniques. Given our

sensory shift from the necessary towards the recreational, a series of technological

advances have been created to bring us pleasure. The same could be argued for sex.

In his book The Singularity is Near, author Ray Kurzweil offered this explanation:

Sex has largely been separated from its biological function. For the most part,
we engage in sexual activity for intimate communication and sensual pleasure,
not reproduction. Conversely, we have devised multiple methods for creating
babies without physical sex, albeit most reproduction does still derive from the
sex act. This disengagement of sex from its biological function is not condoned
by all sectors of society, but it has been readily, even eagerly, adopted by the
mainstream in the developed world.222

So, human advancement from a state of survival to a state of recreation has opened the

door to a variety of sensory experiences that we are just beginning to explore.

Augmented and virtual reality devices are becoming a conduit for that exploration.

Although the ideas of augmented reality and virtual reality have been with us almost

as long as photography, these technologies today are advancing in exciting and new

ways. On a spectrum of perception, starting with the “real environment,” the first step

is “augmented reality” followed by “augmented virtuality” and finally “virtual

reality.” Imagine walking deeper into a darkened room; as one moves further and

222
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. London:
Duckworth, 2016. 301.

further away from the light, the immersive experience becomes greater. So,

“augmented reality” is a layer of information added to the existing “real” landscape;

“augmented virtuality” is about taking sensory recordings from the “real” landscape

and presenting them in a virtual realm; and “virtual reality” is fully immersing the

human in an artificial world.223
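
The spectrum can even be modeled as a simple ordered type. The Swift sketch below uses my own labels for the four stages the text describes and is illustrative only:

    import Foundation

    // The perception spectrum as an ordered type, from the unmediated world
    // to full immersion; the labels follow the darkened-room analogy above.
    enum PerceptionMode: Int, CaseIterable, Comparable {
        case realEnvironment = 0   // the unmediated world
        case augmentedReality      // data layered onto the real landscape
        case augmentedVirtuality   // real recordings presented in a virtual realm
        case virtualReality        // full immersion in an artificial world

        static func < (lhs: Self, rhs: Self) -> Bool {
            lhs.rawValue < rhs.rawValue
        }
    }

    // Walking "deeper into the darkened room" is a move up the spectrum:
    for mode in PerceptionMode.allCases.sorted() {
        print(mode, "- immersion level", mode.rawValue)
    }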

To achieve these experiences, producers must create content that can be

presented first intimately and later exclusively to the user’s senses. The first film and

music recordings, for example, were an “augmented virtuality.” Advances in

technology have finally made the immersive experience that much more dramatic.

Take, for example, the film Avatar. Released in 2009, Avatar was a science-

fiction story about Earthlings who colonize a distant planet to mine for ore and, in the process of doing so, go to war with the indigenous people of that planet.

When it hit the theaters, the film came in traditional and 3-D formats.

3-D movies have been around for several decades, fading into and out of vogue

as a cultural curiosity; when Avatar was released in 3-D, the producers finally created

a technological experience that moved beyond the realm of gimmick and towards a

more immersive “augmented virtuality.” In other words, this version of the 3-D

technology worked; it succeeded at transforming the audience’s perception of self—or

as Martin Heidegger called it: “Dasein”—in the world.

When I saw the film, I found myself sitting and watching the event transform

from a static two-dimensional image into a more complex three-dimensional

realm. I knew that the experience was an illusion and I realized that my own Dasein,

223
Jerald, Jason. The VR Book Human-centered Design for Virtual Reality. San Rafael:
Morgan & Claypool, 2016.

or sensory experience, was a physical experience of sitting in a Providence movie

theater roughly 30 feet from the movie screen. And yet, in one scene, the film showed

a forest fire burning across the face of a fictional planet and as I sat there watching,

with my 3-D glasses on, I actually moved my hand, raising it up before my face to

brush away a glowing ember that had drifted into my line of vision. In that moment,

the “augmented virtuality” of Avatar had truly transported me into the realm of this

fictional world. Suddenly, I was no longer merely a witness to the film; I was a

participant, existing in some free-floating state between what was real and what was

fantasy.

Author Naomi Baron makes a similar observation about the way our brains

react while reading:

Reading isn’t done in a vacuum. Since the neural tools for reading are cobbled
together from structures designed for other purposes, it is not surprising that
reading activates areas related to what the text is about. Say you are reading a
scene in a novel in which the hero is running to escape the villain. As you read,
the motor area of your brain lights up—even though you’re curled up in a
chair, not moving.224

Is she addressing Walter Ong’s idea about “secondary orality” here? Could it be

possible that my experiences reading adventure novels throughout my life prepared me

for the AR/VR experience I discovered while watching Avatar? The human mind is a

marvelous thing.

Also, the technology for augmented experiences continues to advance.

In 2013, Google released a piece of hardware called Google Glass, which was

an “augmented reality” device. The purpose of these glasses was to allow the user to

see and read information, which was presented as an overlay to the real world (text

224
Baron, Naomi S. Words Onscreen: The Fate of Reading in a Digital World. New York:
Oxford University Press, 2016. 160.

messages and small icons appear on the lenses of the glasses). In a video

demonstrating the technology’s potential, the audience sees the world from the

perspective of a Google Glass user moving through his day. We see him eating

breakfast, checking the weather and making plans to meet a friend at a bookstore. As

he moves from the Gramercy Park area of New York City downtown to Strand Books,

we see through his glasses a virtual GPS map and other notations; he also takes

pictures of things he’d like to remember and he sends brief text messages to his

friend.225

Throughout the demonstration, it is clear that the user is living and moving

through the real world and Google Glass uses visual cues to enhance the user’s living

experience. Clearly, the Google Glass technology is “augmented reality” or a layer of

data presented as a supplement to the real world. In this situation, the immersion is

slight and associative and the Google Glass technology only seems to take the data

experience away from the smart phone, placing it in a more passive area, where the

user can see it without making any sweeping gestures. In other words, you don’t have

to retrieve your phone from a pocket or backpack and lift it to eye level to receive the

information; instead, the data is right there at eye level projecting off the eyeglasses.

Innovators are working on a similar technology that could be added to contact lenses,

a fact that takes augmented reality a step closer to the receptors in the retina and the

optic nerve.

How about taking the experience of watching somebody on stage giving a


presentation and trying to make the follow-up actions simpler? Using facial
recognition you could automatically be shown the speakers’ online profiles,
previous work and other similar experts. Another example scenario could be

225
Huzaifahhb74. "Google Glasses Project." YouTube. May 07, 2012. Accessed April 21,
2017. https://www.youtube.com/watch?v=JSnB06um5r4.

that you have gone for a walk in the woods and see a snake. Your augmented
reality contact lenses could identify the snake, take a picture and post it to your
social networks to share your experience, and most importantly, tell you if it is
dangerous or not.226

Virtual reality is a much more complex experience. In the current

technological form, the user must wear virtual reality goggles, which strap onto the

user’s head, placing a video screen over the user’s eyes. Imagine a pair of blackened

scuba diving goggles, and you’ll have a fair understanding of the device. Once the

user has the VR goggles on, she is dominated by visual and audio signals that bombard her senses. The experience is all-consuming and immersive; further, as the user moves her head around, the scenery shifts around her, creating the illusion of

transporting the user into a new virtual realm. The VR experience directs the user’s

cognitive experience inward; while the AR experience directs the user’s cognitive

experience outward.

And while these concepts are diametrically opposed, the AR and VR

experiences do have some key things in common: First, these are “fourth screen”

technologies that have the added dimension of mobility. When using AR and VR, the

ability to move and the technology’s adaptability to this motion creates a

transformative experience that has the potential to move beyond the realm of the

gimmick and towards something more integrated: a human-technological

enhancement of sorts.

In 2016, an augmented reality game called "Pokémon Go" was released to

game enthusiasts. To play, users had to download the software application to their

smart phone and key in some registration data. The game itself was a mixture of the

226
Rowles, Daniel. Building Digital Culture: A Practical Guide to Business Success in a
Constantly Changing World. Place of Publication Not Identified: Kogan Page Stylus, 2017. 13.

real world layered with augmented images. The rules of the game were this: users

were sent out into the real world to hunt imaginary Pokémon creatures. To see the

animals, they had to look on their smart phones for maps and other clues that led them

to targets; in doing so, users had to move around their communities visiting parks,

buildings and other public venues searching for the Pokémon targets. The AR

experience is apparent: to play the game in Washington DC, for example, players

found themselves moving around the city, across the National Mall, and through the

parks surrounding the White House and the Capitol Building looking for the digital

Pokémon targets. The Pokémon Go developers say the program sent millions of people

scrambling around their communities, exercising as they hunted the digital animals.

The success of the program inspired a series of tech companies—Apple,

Facebook and Google among them—to invest heavily in AR.

The interesting thing about AR is the potential it has for technologies outside

the realm of gaming and exhibition. One could, for example, create an AR tour of the

Civil War battlefield at Gettysburg (Penn.), or the Museum of Modern

Art in New York City, or the Appalachian Trail. These, of course, are just sample

ideas.

Virtual reality also has great storytelling potential.

In 2014, the Columbia Journalism Review published an article declaring that

virtual reality journalism could be the next big thing. To make her case, reporter Erin

Polgreen writes about a VR producer who created a series of stories about Syrian refugee children. As part of the VR experience, the user gets to walk through a

Syrian refugee camp and has the ability to look about—up, down, side to side—to see

the conditions of the camp. The project is called “Project Syria” and the producer

Nonny de la Peña calls it “an immersive journalism experience.” The work is called

“empathetic journalism,” or news that acts upon the viewer's sense of empathy.227 And

that’s the second thing AR and VR share in common: they both elevate the sense of

“empathy” between the storyteller and the audience. This may seem like a minor

thing, but as these technologies advance, the relationship between author and audience

will grow closer.

As for the use of “empathy” as a tool for nonfiction storytelling, this is a new

idea and it’s unclear how the journalism purists feel about the loss of objectivity in a

project designed to appeal to the viewer’s emotional senses. In a video about the work,

one of the viewers removes the VR goggles to reveal that she’s been crying.228

On the other hand, one can only imagine what Richard Wagner would think

about an immersive experience of this nature. Virtual reality is transcendent by design and, given some aesthetic appreciation, could become a formidable media experience.

Wagner, however, worked to separate the audience from the experience, while AR and

VR technologies strive to bring the production and the audience closer together.

Summary of Multimedia

As the Internet emerged as a commercial marketplace, many of the tools of

multimedia had already found their way into world culture. Text, as a medium, struck

out on its own as a “disembodiment” of the human idea; writing something down

227
Polgreen, Erin. "Virtual Reality Is Journalism’s next Frontier." Columbia Journalism
Review. November 19, 2014. Accessed April 22, 2017.
https://www.cjr.org/innovations/virtual_reality_journalism.php.
228
Ibid.

extracts the idea from the human mind, transcribing it into something that exists on

its own. The printing press simply amplified the volume and the reach to a reading

audience. Photography was next and it replicated the power of the written word by

capturing the light in a moment, transforming it into a tangible thing; the photograph is

the convergence of time and place in the form of a recorded artifact. The phonograph

does the same thing to sound; ripples of sound waves are transformed into scrapes on a

wax cylinder, which—when rotated and amplified—can replicate the sound. Film is a

repetition of the still image running in series to create the illusion of motion; again, it

is captured and cataloged and designed for repetition. Television and radio are media

without form; radio and television messages are transmissions of radio waves, which are captured by receivers that reconstitute the signals into audible and visual formats. All these media—print, photograph, recording,

film, radio, television—have some passive relations to each other and some share

closer associations with various other forms; however, they each tend to exist apart from the other forms. With the exceptions of text with photography and music with

film, there are slight instances where one form commingles with another. Instead,

these media have dwelled in their own places.

The Internet, binary code and advances in consumer electronics have created

an opportunity to finally commingle or conjoin media forming more complicated story

forms. The problem now is finding a model that works to both inform and entertain

the audience; that model must also package and deliver that story in a complete and

uniform way. During the last two decades, there have been experiments towards this

greater multimedia purpose.

Part III: Multimedia

Chapter 5

‘Total Work of Art’

In Atlanta, Georgia, near the center of Grant Park, there was an odd exhibition,

a piece of Americana called The Cyclorama, which was a performance illustrating the

events of the Civil War “Battle of Atlanta.” The work was located inside the Atlanta

History Center and was a culmination of several sensory experiences designed to

transport visitors to 4:30 p.m. on Friday, July 22, 1864, an important moment in

American history.

The experience was this: Patrons were guided into a darkened theater, which

had a centralized elevated seating area; the audience was seated on the tiered platform

and opposite was the painting depicting a portion of the battle scene from the Civil

War; in addition to the painting, affixed in the foreground were life-sized diorama

characters extending towards the audience and this staging included trees and bushes,

horses and other animals and figurines of soldiers all frozen in conflict. When the

performance began, the lights went down and an audio narrative started, which

included a spoken history, music and sound effects depicting elements of battle; at this

point, the tiered platform began rotating, spinning the audience in a slow, measured

clockwise turn which, as it moved, revealed more and more of the mural and the

diorama; of course, the narration, the music and the sound effects continued to fill out

the experience. As part of the performance, as the painting was revealed, lights and

other effects drew attention to components of the painting; examples included the

illumination of Major General John "Black Jack" Logan charging ahead of his cavalry

unit towards the center of the painting; and the figurine of a dying man modeled to

look like actor Clark Gable.

The Cyclorama was actually a confluence of several media—image, music,

motion, sculpture, sound—which were brought together and assembled at this location

in 1921. The key component of the performance was the mural, a massive—42 feet high and 358 feet long—panoramic painting commissioned in the 1880s to

illustrate the important moments during the 1864 Civil War Battle of Atlanta. In its

original setting, the painting was mounted on a curved wall facing inward and the

audience was directed into the theater by following a pathway beneath the wall, which

led them ultimately to the seated area on a rotating platform located at the center of the

performance venue. To enhance the sensory experience, designers added a diorama—

complete with 128 figurines—to the foreground, which attempted to unify the painting

with a foreground to create a 3-dimensional effect of being on the battlefield at this

moment in history.

In her book Immersive Words: Mass Media, Visuality, and American

Literature, 1839-1893, author Shelly Jarenski describes the Cyclorama this way:

When you walk into the Atlanta Cyclorama (1886) today, you are immediately
surrounded by four-story circular walls covered floor to ceiling with a painted
battle scene. The lights are kept low to intensify your focus on the painting and
to intensify the experience of immersion that its size and content evoke. In the
space between the rails of the auditorium and the beginning of the painting
itself, 128 dioramic figures dramatize the scenes depicted in the painting, often
merging seamlessly with the backdrop. For example, a cloud of smoke rises on
the canvas, indicating a blast, surrounded by figures (some of paint, others of
plaster) cowering and reeling around it.1

1. Jarenski, Shelly. Immersive Words: Mass Media, Visuality, and American Literature, 1839-1893. Tuscaloosa: Univ. of Alabama Press, 2015. 74.

The success of this event lay in a mixture of performance elements, which included

low lighting, the rotating audience, the melding imagery uniting foreground with

background, the oral recitation, the music, the sound effects, the story itself and the

emotional elements of this time period. The aesthetic of the project attempted to

transport the audience to a new and distant experience.

In panoramas, this immersive effect is achieved because the viewer is able to
imagine what she is seeing in both spatial and temporal terms: spatial because
of the use of three-dimensional elements, depth, and, perhaps most important,
movement; temporal because of the use of narrative.2

This, of course, was a dated model for multimedia. The Atlanta Cyclorama was a

fusion of sight, sound and motion, which was designed to immerse the audience in a

multimodal sensuous experience. Did it really work? For something created in 1921,

the Atlanta Cyclorama was a curious spectacle that attempted to communicate with the

audience on many sensory levels. One of the most powerful elements of painting as an artful medium is that it freezes time; in this case, the Atlanta Cyclorama captured the whole of the battle for Atlanta, creating a visual narrative that celebrated the winners and losers of the conflict and memorialized the dead; it also succeeded in recreating a visual or spatial representation of the battlefield, which (unlike the battlefield at Gettysburg) had long since been

destroyed by generations of Atlanta’s urban development. So, yes, given these

aesthetic elements and the fact that this project was created decades before

digitalization, I’d consider the Cyclorama to be a triumph in the idea of multimedia

storytelling and a possible model for what was to come.

2. Ibid., 79.

The aesthetic of the Cyclorama is actually a subset of the 19th-century painting movement toward panoramic landscapes. Works from the movement are typically very large, and their subject matter often features grand vistas of mountain ranges or waterways; the idea was to design a visual event that illustrated a visual story or narrative; when such a work is presented, curators often add other media elements—music, lectures, artifacts—to enhance the experience.

All panoramas included other representational forms in the performance, such
as a narrative pamphlet which served as a type of guidebook to the painting, a
lecture, musical accompaniment, and three-dimensional objects, which turned
these paintings into multimedia sensations.

What is distinctive about the panorama among other forms of nineteenth-century art and spectacular entertainments is its reliance on immersive
aesthetics. The exhibition space of the panorama was designed to isolate
viewers from other sensory perceptions and involve them completely in the
experience of the image. The presence of other representational forms
heightened the effect. These forms included music, three-dimensionality,
motion, and narration.3

In other words, multimedia practice was designed to create a transcendent media

experience, which transported the audience to a time and place far more distant than

the physical and temporal realms defined inside the museum. By assembling a series

of companion media and coordinating their contact, multimedia artists and curators

attempted to elevate the work, crafting something where the final presentation was

much greater than the sum of its parts. To do this, something else must be added: the

passion of the patron, whose emotional engagement only works to enhance the

experience. Clearly, the multimedia aesthetic aspires to more and, for a time,

European artists began reaching for something the Germans defined as “the total work

of art.”

3. Jarenski, Shelly. Immersive Words: Mass Media, Visuality, and American Literature, 1839-1893. Tuscaloosa: Univ. of Alabama Press, 2015. 79.

Gesamtkunstwerk

As it happens, the idea of multimedia storytelling or multimedia performance

has been with us for nearly two centuries. One of the earliest descriptions of

multimedia came from German philosopher Karl Friedrich Trahndorff who, in essays

about aesthetics in 1827, suggested that sound-based arts and visual-based arts must

come together to form a unified or total art form:

He posited that the four main artistic enterprises—word-sound (Wortklang),
music, expressive gesture (Mimik), and dance—might flow together in one
artwork, following a core “aspiration toward a Gesamt-Kunstwerk” that was
common to all arts.4

He called this concept Gesamtkunstwerk, which translates into “total work of art” and

he perceived it as the holy grail of aesthetic expression.

Twenty years later, German composer Wilhelm Richard Wagner (who wrote

“Ride of the Valkyries”) seized upon the concept and made it his life’s work to

achieve a sense of Gesamtkunstwerk, which he described as a harmonious fusion of

dance, music and poetry. Looking for examples, he drew his inspiration from the

ancient Greek tragedies.

According to Wagner, this success of the ancients could act as a model to
modern artists, encouraging them to engage in a Quest to bring about, in a
suitably updated form, a similar process of integration of those major art
forms, music and drama, which were considered to be especially suitable. The
acquisition of separate, clearly defined boundaries between these art forms, it
seemed, had in no way staved off their present-day decline—and the remedy
seemed clear.5

4. Imhoof, David Michael, Anthony J. Steinhoff, and Margaret Eleanor Menninger. The Total Work of Art: Foundations, Articulations, Inspirations. New York: Berghahn, 2016. 186.
5. Brown, Hilda Meldrum. The Quest for the Gesamtkunstwerk and Richard Wagner. Oxford: Oxford University Press, 2016. 2.

Wagner (1813 to 1883) became obsessed with the idea and in his writings equated the

spirit of Gesamtkunstwerk to that of a theological mission. He saw this form of artistic

balance as a tool for building a better future and his fixation with dance, music and

poetry reflected what he saw as the three important aspects of human existence.

In his book The Total Work of Art in European Modernism, author David

Roberts writes at length about Richard Wagner:

Aesthetic redemption in the Gesamtkunstwerk is comprehended as an act of
loving self-sacrifice that mirrors the truth and necessity of the tragic action. In
and through this sacrificial act the arts find their freedom as art in the dramatic
union of the three purely human art forms: dance, music, and poetry—the
language of the body, the language of the heart, and the language of the spirit.
Opera, by contrast, is dismissed by Wagner as nothing but the occasion for
displaying the egoistic rivalry of the three sisters. United however, dance,
music, and poetry draw the other—plastic—arts into their redemptive orbit:
“Not a single richly developed capacity of the individual arts will remain
unused in the Gesamtkunstwerk of the future.” The statue is brought to life in
the dance; the colored shadows of painting, whether of the human figure or of
historical scenes, will give way to the depiction of nature as the setting for
dramatic action; architecture, enriched by sculpture and painting, will attain its
true destiny in building the theatre of art, the temple of the people without class
distinctions.6

Searching for models from history, Wagner considered William Shakespeare

and Ludwig van Beethoven the masters of their respective art forms and saw a fusion

of their works as the ultimate incarnation of his dream of Gesamtkunstwerk. To

Wagner, Shakespeare’s work was the embodiment of “absolute literature” and

Beethoven’s work—especially Symphony No. 9—was the embodiment of “absolute

music.”

…And perhaps the most important dimension of the Wagnerian synthesis, the
introduction of the musical language of Beethoven into the drama through the
orchestra: the living body of harmony, which immerses audience and dramatic
action in the sea of shared feeling. This endless emotional surge finds its

6. Roberts, David. The Total Work of Art in European Modernism. Ithaca, NY: Cornell University Press, 2011. 75.

redemption in the poetic word, just as the poetic intention is simultaneously
extinguished and realized in the living stage of presentation.7

During his career, Wagner never completely fulfilled his vision for

Gesamtkunstwerk, at least not aesthetically. Instead, Roberts describes him as an

average artist with a big vision, but Wagner’s ideas certainly left a lasting legacy at

least in musical theater and performance. It was Wagner who decided to “sink the

orchestra pit” below the audience’s sight line offering them a less obstructed view of

the stage; he also sought to create what he called “…a ‘mystic gulf,’ from which the

music would seem magically to emanate.”8 This move physically elevated the actors

above the orchestra, framing them like a still photograph, a move that makes the players

appear much larger.9 He also plunged the theater into darkness, leaving only sparse

gas lighting; he did this to further advance the distancing effect. Finally, he went as far

as to reshape the theater itself, designing it to have a “precise acoustic shaping” that

worked to bathe the audience in sound.10 Taken together, Wagner’s vision was to lift

the performance, use stage lighting to frame it into a flattened two-dimensional space

and then inject a radiant wave of acoustic energy—music—into the forum. This was

the spectacle of film theater decades before the invention of film (and one can see

elements of the dated Cyclorama model and the modern IMAX movie theater

experience as byproducts of this vision). During the latter half of the 19th century, this

was new—as in, no one had ever done this before—and Wagner’s overarching

7. Ibid., 75.
8. Imhoof, David Michael, Anthony J. Steinhoff, and Margaret Eleanor Menninger. The Total Work of Art: Foundations, Articulations, Inspirations. New York: Berghahn, 2016. 186.
9. Ibid., 56-78.
10. Salter, Chris. Entangled: Technology and the Transformation of Performance. Cambridge, MA: MIT Press, 2010. 3.

purpose was to evoke a transcendent experience, a dream-like state that communicated

sensuously with the audience.

As Gesamtkunstwerk, Wagner’s music drama generated a relationship between
the aural, the visual, and the kinetic that was more important than any
referential relationship that reached beyond the bounds of the work: the
“reciprocal motivation of different sensory fields” was his prime aesthetic
objective, not context-dependent meaning.11

The future of art, to Wagner, was a transcendent multimodal performance that fused

visual, physical and aural media to create a larger, more impactful aesthetic

experience. In other words, art wasn’t just about what you saw, it was about how it

made you feel and this is the vital, final ingredient in the multimedia formula. To

Wagner, the idea of multimedia performance was a meeting of the heart (song), the

mind (poetry), the body (dance) and finally… the transcendent soul. Now, I could

continue on trying to explain this in English, but there is a better word in Spanish that

articulates this idea: Duende.

Duende is the “heightened state of emotion” created by a work of art. Walter

Benjamin touches on this when he describes the “aura” of artwork, but the Spanish

word is more complete: Duende is about the relationship between the object and the

subject. The artwork is defined by the emotional influence of the observer. To

Wagner, this is when the audience begins to participate in the performance and with

Duende the totality of the cumulative media work becomes complete with song,

poetry, dance and emotion. This was key: The audience had to be part of the

performance.

11. Imhoof, David Michael, Anthony J. Steinhoff, and Margaret Eleanor Menninger. The Total Work of Art: Foundations, Articulations, Inspirations. New York: Berghahn, 2016. 188.

This concept was groundbreaking and Wagner’s vision inspired a century of

artists—specifically, the European modernists—who worked towards this greater idea

of the “total work of art.”

…the total work of art seeks to convey a world vision, anticipate a future
utopian or redeemed state of society, and act as the medium of such a
transformation. We find a comparable formulation in Marcella Lista’s
definition of Wagner’s artwork of the future: the totalizing union of the arts, as
the reflection of the deep unity of life, is directed to the goal of making
aesthetic experience the yeast of society to come. It was Wagner above all who
made the idea of synthesis of the arts in the service of social and cultural
regeneration a central focus for aesthetic modernism. But if his works and his
writings provide the dominant reference for subsequent developments, these
appear primarily in the form of a search for alternatives to his own theory and
practice. As Lista puts it, Wagner set in train a new exploration of the stage as
the site of the totalization of aesthetic forms.12

In the modern age, Wagner’s Gesamtkunstwerk is an articulation of multimedia

or the idea that several existing media can be coordinated to create a total artful

experience. Of course, Wagner’s suggestion that artists must find a “suitably updated

form” hints at the idea of Walter Ong’s “secondary orality” in that we are now living

in a post-literate society while the Greeks were crafting their performances in the pre-

literate age. Again, Wagner is credited with having some success at his mission but,

150 years later, things have gotten a whole lot more complicated. The integration of

media has been a long-sought-after but unrealized passion of many modern

storytellers. Theorists Max Horkheimer and Theodor Adorno knew this and wrote

about it:

The alliance of word, image, and music is all the more perfect than in Tristan
because the sensuous elements which all approvingly reflect the surface of
social reality are in principle embodied in the same technical process, the unity
of which becomes its distinctive content. This process integrates all the
elements of the production, from the novel (shaped with an eye to the film) to

12. Roberts, David. The Total Work of Art in European Modernism. Ithaca, NY: Cornell University Press, 2011. 13-98.

the last sound effect. It is the triumph of invested capital, whose title as
absolute master is etched deep into the hearts of the dispossessed in the
employment line; it is the meaningful content of every film, whatever plot the
production team may have selected.13

Given that we finally have a common medium—digitized content—and a series of

digital tools for production, packaging and delivery, we stand ready to realize a

modern or digital Gesamtkunstwerk. We must also address the modern incarnations of

Wagner’s basic media: poetry, or media for the mind, has been replaced by digitized

text; song, or media for the heart, has been replaced by oral/aural media including

music and the spoken word; and dance, or media for the body, has been replaced by

the motion created by video. And for the Wagnerian vision to be complete, the final

multimodal project must inspire the audience to be aware and excited. There must be a

synergy—an elevation of the aesthetic beyond the presentation—for the work to be

successful and that elevation is defined by the audience and its emotional reaction to

the work. This is the modern Gesamtkunstwerk.

Of course, the task now is to create a model that accounts for all those media

factors and there have been many recent attempts at multimedia storytelling. Some are

coming from the professions, but some are also coming from universities; in either

situation, the purpose is clear: to discover a way to make digital storytelling a viable,

relatable form and to present it in a way that it will find an audience and, in doing so,

communicate with them in a multi-sensory way.

13. Adorno, Theodor W., and Max Horkheimer. Dialectic of Enlightenment. London: Verso, 2016.

Experiments with Content

In 2015, three professors from the University of California-Berkeley published

on the university website their collective understanding of what they entitled

“Tutorial: Taxonomy of Digital Story Packages,” and then proceeded to itemize their

favorites.

As part of the introduction, they identified two primary story forms, which

they described as follows: Linear stories, which run with a traditional narrative arc;

and Christmas Tree stories, which have a text story running through the center of the

website, with companion media—video, photographs, audio—added alongside

like ornaments hung on a tree.14

Before we get into the examples, let me talk to you about one of the glaring

mistakes that took place early on. In 2010, Rolling Stone Magazine landed an

interview with General Stanley McChrystal, who was the commander of U.S. and

NATO forces in Afghanistan, and apparently the general was quite forthcoming about

his role in the U.S.-led war on terror. In the analog edition of the magazine, the story

was planned for and ultimately appeared as the cover story, with a long 7,800-word

essay entitled “The Runaway General” published in the “feature well” of the

magazine. As the magazine went to the presses, the editors knew they had a story a lot

of people would want to read and they decided to cross-publish the entire story on the

Rolling Stone website. There are a handful of controversies attached to the content of

the article, but for the purposes of this thesis, I’d like to address the problem Rolling

Stone faced when the editors preempted their own publication by posting the story

14. Grabowicz, Paul, Richard Hernandez, and Jeremy Rue. "Taxonomy of Digital Story Packages." Berkeley Advanced Media Institute. July 27, 2015. Accessed March 10, 2017. https://multimedia.journalism.berkeley.edu/tutorials/taxonomy-digital-story-packages/.

online. Before readers could scramble and purchase the analog version of the

magazine, the content was already available—for free—online, an action effectively

crippling newsstand sales. Beyond that, many of Rolling Stone’s competitors lifted

excerpts from the article and published it in their own publications and on their own

websites; that fact only further exacerbated the troubles. At the time, the editors at

Rolling Stone explained that they didn’t want to be “wedded” or trapped by the

editorial cycle of the analog version and they wanted to get the information out there

quickly.15

However, the mistake they made was anachronistic in nature: instead of

rushing to publish the content designated for the analog version, they should have

crafted something associative, possibly video or audio snippets from the interview,

and published those elements on the website. Instead, they just cut and pasted content

designated for a print publication, and put it online.

At the time, there were no standards and very few models for online journalism

and Rolling Stone should be commended for attempting to do something other than

wait for the publication to hit the streets. It’s just unfortunate that they didn’t

demonstrate a fair sense of respect for the digital audience; repackaging print content

and posting it online doesn’t quite acknowledge the potency and the meaning of

McLuhan’s mantra: “The medium is the message.”

At the heart of the problem—this anachronism—was either a glaring lack of

respect for the online community or a downright ignorance of the needs of this

audience. Had Rolling Stone considered the idea that it was moving content from the

15. "The Secret to Rolling Stone's Success." Columbia Journalism Review. Accessed March 12, 2017. http://archives.cjr.org/behind_the_news/the_secret_to_rolling_stones_s.php.

print medium to the online one, it should have rethought the text design and the use of

companion media. These two media—the print magazine, the browser-based

website—have entirely different appearances, textures and relationships with the

audience.

That fact aside, most legacy news groups are guilty of disrespecting the digital

audience. CNN, Fox News, NPR, The New York Times, The Washington Post and a

host of others, have taken to merely copying print content or cutting video from one

medium and posting it into another. Just look at the dueling CNN and Fox News

online offerings: Their respective web-video stories are often just content culled from

the cable television signal; and that content includes visual icons including the words

“live” and “breaking news,” and/or audio-video cues—culled by production editors—

that include chopping off a reporter’s or pundit’s message midsentence to “package”

the online video artifact.16 17 These facts are embarrassing for two reasons: First, pre-

recorded video posted online can never be “live” and is often hours older than the

“breaking news” moniker suggests, which means these words are inaccurate; second,

serving up this content to the legacy audience and then rehashing the materials for an

online audience demonstrates a glaring lack of forethought and an astonishing display of

disrespect for an audience that will most likely be the core audience in just a few

years.

16. "Mexican Lawmaker Says He Scaled Border Fence - CNN Video." CNN. Accessed March 14, 2017. http://www.cnn.com/videos/world/2017/03/03/mexican-lawmaker-scales-border-fence-sje-orig.cnn/video/playlists/donald-trump-immigration/.
17. "Brady: GOP Plan Doesn't Force Americans to Buy Insurance." Fox News. Accessed March 14, 2017. http://video.foxnews.com/v/5359032041001/#sp=show-clips.

Looking forward, legacy media should rethink their approach

to the online community; the information posted online should be original to the

online audience and, frankly, better planned, better prepared, better executed, better

packaged and better delivered. At this point, legacy media continue this practice but

the seams in the fabric of their presentation are less apparent.

That said, the more successful online multimedia experiments have been

original to that audience. Beyond the Rolling Stone debacle, other news outlets would

look in new and different directions and one of the first to strike out with original

content for the Internet was The Rocky Mountain News.

‘The Crossing’

In 2007, The Rocky Mountain News published on its website the longest

investigative piece in Denver history. The work was called “The Crossing,” and was

about a 1961 school bus accident that killed 20 students. The final work, a long-form

nonfiction feature story, ran 34 sections, which included text, pictures, video, music,

interactive graphics, a narrative voiceover and supplementary materials. And while

there were other digital experiments underway at the time, this work, written and

reported by journalist Kevin Vaughan, was one of the first real definitive works of

multimedia storytelling. Why? At the very least, given the volume of time it took to

create, the editors of The Rocky Mountain News demonstrated, for the first time in the

history of legacy media, a commitment to long-form multimedia storytelling produced

specifically for the Internet. Nothing like it had ever come before but it would become

the model for what would come after it. Looking to the Berkeley list, “The Crossing”

was crafted in the Christmas tree form.

In the opening section, Chapter 1, the webpage is cast against a black

background and a yellow textbox is revealed; the animation on the page lowers the

textbox to reveal a viewing window for video and a red-arrow icon appears,

encouraging the viewer to look at the 100-second video introduction. The video

includes still photographs of news clippings, photographs from the scene of the

accident, audio quotations from eyewitnesses and others, as well as cello and violin

music, which sets a somber tone.18

The most interesting thing about the work—a complete multimedia tour de

force—is how early in the history of electronic publishing it appears. The

Telecommunications Act was signed into law in 1996 and, 11 years later, in 2007 The

Rocky Mountain News presents this multimedia-publishing event, which includes

video and music. While the publication is just a decade old, the Internet has evolved

substantially since then; in fact, average Internet speeds have improved fourfold, from 3.7 megabits per second (Mbps) in 2007 to more than 15.2 Mbps in 2016.19 (At 3.7 Mbps, a 30-megabyte video clip takes more than a minute to download; at 15.2 Mbps, about 16 seconds.) Downloading video in 2007 would have been a long and tedious process, which might discourage an audience from looking at 34 digital chapters of content. Still, The Rocky Mountain

News moved ahead with the project, which was ultimately published in a series

starting in January of that year. On average, each chapter included 1,500 words, two

18. Vaughan, Kevin. "The Crossing Story." The Crossing Story. 2007. Accessed March 04, 2017. http://thecrossingstory.com/chapters/1.html.
19. "Average Internet Connection Speed in the U.S. 2007-2016 | Statistic." Statista. Accessed March 04, 2017. https://www.statista.com/statistics/616210/average-internet-connection-speed-in-the-us/.

minutes of video, a half-dozen photographs, supporting documentation and a social

space for public discussion.20

Now, as a point of criticism, one should note that the various media—text,

photo, video—are all separated, and the user must click on an icon for each medium to view it. In the world of topography, the layout of “The Crossing” is similar to

a traditional newspaper in that regard.

In 2009, The Rocky Mountain News closed its doors for good; the final edition

had the banner “Goodbye, Colorado.” Its parent company, E. W. Scripps Company,

surrendered the archives of the newspaper to the public library system but, in doing so,

did nothing to preserve “The Crossing,” which disappeared from the Internet that same

year. The Internet, it seems, is a tenuous place. In an Atlantic Monthly article on the

fragile nature of the Internet, journalist Adrienne LaFrance wrote:

The life cycle of most web pages runs its course in a matter of months. In
1997, the average lifespan of a web page was 44 days; in 2003, it was 100
days. Links go bad even faster. A 2008 analysis of links in 2,700 digital
resources—the majority of which had no print counterpart—found that about 8
percent of links stopped working after one year. By 2011, when three years had
passed, 30 percent of links in the collection were dead.21

Luckily, Kevin Vaughan, the author of “The Crossing,” had all the contents of the story saved on a DVD and, after he wrangled with the library and E. W. Scripps, they

ultimately allowed him to resurrect the story, which is now again published online. As

20. Vaughan, Kevin. "The Crossing Story." The Crossing Story. 2007. Accessed March 04, 2017. http://thecrossingstory.com/chapters/1.html.
21. LaFrance, Adrienne. "Raiders of the Lost Web." The Atlantic. October 14, 2015. Accessed March 04, 2017. https://www.theatlantic.com/technology/archive/2015/10/raiders-of-the-lost-web/409210/.

of my writing this, Vaughan is paying the maintenance fees to keep the story alive and

online.22

‘Snow Fall’

Five years after “The Crossing,” The New York Times pooled its talent to

create the next great piece of multimedia nonfiction when it published “Snow Fall:

The Avalanche at Tunnel Creek” in 2012. This was a six-part story about an avalanche

in the Cascade Mountains east of Seattle, Washington, that ultimately killed three

professional skiers. Like “The Crossing,” “Snow Fall” was a full-on multimedia

project; but this project was larger in scope and included over 10,000 words of text,

scores of photographs, dozens of video interviews, and a final 20-minute

documentary-like summary of the accident. The author of the text story, John Branch,

won the 2013 Pulitzer Prize for Feature Writing for his work;23 but the remainder of

the production team went largely unrecognized. Looking at the Berkeley list, “Snow

Fall” has the linear form.

With the first page, the cover art—a short three-second video that loops over

and over again, called a GIF—shows snowflakes blowing over an icy terrain. The

headline of the work is there, as is the author’s byline. Above that, just below the

browser control bar is a line of chapters with links to all six portions.

To read the work, all the reader must do is scroll the text upward; as they do

so, the videos appear as still pictures in the margin and transition from black-and-

22. Ibid.
23. The Pulitzer Prizes. Accessed March 04, 2017. http://www.pulitzer.org/prize-winners-by-year/2013.

white to color, symbolically marking their availability; animations also appear to illustrate elements of the text.24 The software interface here is called “Parallax”; it is a browser-based scrolling technique in which page elements move and reveal themselves at different rates as the reader scrolls, creating a linear viewing effect.
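
To make the technique concrete, here is a minimal sketch of a parallax effect, written in TypeScript for illustration only; it is not the Times’ production code, and it assumes a hypothetical page with a background layer carrying the id “background”:

    // A minimal parallax sketch (illustration only, not the Times' actual code).
    // Assumption: the page contains a background element with the id "background".
    const layer = document.getElementById("background");

    window.addEventListener("scroll", () => {
      if (layer !== null) {
        // Move the background at half the reader's scrolling pace; the mismatch
        // in speed between page layers creates the illusion of depth.
        layer.style.transform = `translateY(${window.scrollY * 0.5}px)`;
      }
    });

The design choice matters here: the reader’s own scrolling, not a play button, drives the presentation forward.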

Communication theorists Richard Koci Hernandez and Jeremy Rue wrote about the

impact of “Snow Fall”:

On December 20, 2012, visitors to the New York Times website noticed
something they hadn’t seen before. It was a new type of article, teased on the
front page, titled “Snow Fall: The Avalanche at Tunnel Creek.” The story
instantly became a watershed moment in the online news industry. In an
entirely new way to display a story, the New York Times employed a mix of
technological and design conventions like autoplaying background videos,
embedded photos and videos, graphics that changed as the user scrolled down
and a curtain effect, in which new sections of the story seemed to cover
previous parts while the user scrolled. The package redefined the notions of
how a news article could be presented on the Web and focused attention to
new visual and interactive multimedia embedded throughout the story.25

Initially, the work started out as any other long-form newspaper feature: journalist

John Branch heard about the accident and began researching, reporting and writing the

piece. When the editors saw drafts of the story, they began considering a larger, more

ambitious multimedia project and the inspiration for “Snow Fall” was born.26

Ultimately, they added video, still photographs and motion graphics.

Once it appeared online, “Snow Fall” was a major success. At its peak, “Snow

Fall” hosted 22,000 visitors on the page at a time, who spent an average of 12 minutes

24. Branch, John. Snow Fall: The Avalanche at Tunnel Creek. December 2012. Accessed September 4, 2016. http://www.nytimes.com/projects/2012/snow-fall/#/?part=tunnel-creek.
25. Hernandez, Richard Koci, and Jeremy Rue. The Principles of Multimedia Journalism: Packaging Digital News. New York: Routledge, Taylor & Francis Group, 2016. 83.
26. "Q. and A.: The Avalanche at Tunnel Creek." The New York Times. December 21, 2012. Accessed March 05, 2017. http://www.nytimes.com/2012/12/22/sports/q-a-the-avalanche-at-tunnel-creek.html?_r=0.

looking at the multimedia feature. During the first six days, “Snow Fall” received 3.5

million page views.27

Now, comparing “Snow Fall” to “The Crossing,” the projects do have many

similarities. Specifically, seasoned, established print journalists wrote both stories for

traditional newspaper audiences; in both cases, the stories were “timeless” features

with historic value and strong “tragic” elements. Also, the text of both stories is

written in the “Wall Street Journal Style” narrative form: Each opens with a teaser

intro with allusions to the larger story; the second section reveals details and a thesis;

the third and succeeding sections begin telling a history; the concluding sections

analyze the event, explain the implications and give it a sense of cultural value before

talking about survivors and life after the event. Further, and this bears observation, both stories were published in legacy print media but, with the added digital elements, both were designed specifically with a digital—and therefore online—audience

in mind. This is a clear break in the news tradition. Media theorist Bryan Alexander

had this to say about “Snow Fall”:

The Web page expands this story enormously by adding media. It begins with
a title superimposed over what at first appears to be an image of a snowy
landscape until the reader sees snow blowing, hears wind keening, and realizes
that it is actually a video. Proceeding down Snow Fall’s page, the reader
encounters video and audio interviews with participants, each carefully
situated in logical points within the text. A large animated map of the setting
takes over the browser window, letting the readers/viewer explore the tortuous
mountain topography referenced at the point in the story. Historical
photographs pop up when Branch reaches back in time to give more context to
the present day.28

27. "How We Made Snow Fall." Source: An OpenNews Project. Accessed March 05, 2017. https://source.opennews.org/articles/how-we-made-snow-fall/.
28. Alexander, Bryan. The New Digital Storytelling: Creating Narratives with New Media. Santa Barbara, CA: Praeger, 2011. 33.

And while the story forms are similar, the uses of companion media are different.

“Snow Fall” is a linear progression that directs the reader to merely push the down-

arrow key to scroll through the essay; in many cases the animations simply appear and

reveal themselves as a byproduct of the scrolling; occasionally, the user must use the

mouse to move the cursor to a video or slide show to trigger the media.29

“The Crossing,” on the other hand, is a complex array of media that merely

share the same browser landscape. To view the work, the reader is encouraged to

observe the opening animation atop each of the 34 chapters, but the eye is naturally

drawn to the text of the story. To view the video, photographs and supporting

documentation, the reader must use the mouse to move the cursor around and then

click on each of the media. While this last part might seem trivial, the lack of

cohesion between the various media simply presents them as a series of detached

presentations that share the same browser landscape. Yes, there are multimedia story

components here but, because they are not associated mechanically, the experience is

similar but separate. I call this “media landscaping,” and believe that the interaction

should be more fluid and engaging. Finally, in both stories, although video is a “hot

medium,” the act of dragging a cursor over the video icon and pressing it to launch the

video narrative creates a ‘cold trigger’ for the video. To improve the experience, and

to make it more cohesive, one solution would be to maintain the linear narrative but

remove the ‘cold triggers’ and let the video play; this would flatten the transitions

between the text and the video; it would also sustain the ‘hot’ media components of

the video. To create a ‘true’ media experience, multimedia should be managed by the

29. Branch, John. Snow Fall: The Avalanche at Tunnel Creek. December 2012. Accessed September 4, 2016. http://www.nytimes.com/projects/2012/snow-fall/#/?part=tunnel-creek.

author and producers not by the reader. In their book, The Principles of Multimedia

Journalism authors Richard Hernandez and Jeremy Rue identify news packages

without ‘cold triggers’ as sites with kinetic topography.30 The New York Times’ next

showcase multimedia project would experiment with the transition from text to video.

‘The Jockey’

Eighteen months later, The New York Times published a second multimedia

piece crafted to the same standard as “Snow Fall;” this one was called “The Jockey,”

and was a multimedia feature about Russell Baze, the American horse jockey with the

most professional wins in U.S. history. Like “Snow Fall,” “The Jockey” is a full-on

multimedia offering complete with 10,000 words, scores of photographs, video and

some animation; it was also published in the Parallax format, which requires the

reader to push the down-arrow key to scroll through the essay. Looking at the

Berkeley list, “The Jockey” was published in the linear form.

And while the two essays share a similar feel, there is a major difference with

regard to the video; this package has kinetic topography. In the case of “The Jockey,”

as the reader scrolls downward, the video plays automatically; and while this may

seem a like a minor improvement, it actually works to integrate the video into the flow

of the text storytelling. It works this way: In the opening chapter, three paragraphs of

text and a photograph (of Baze) appear below the bylines of author Barry Bearak and

photographer Chang W. Lee. After the reader scrolls past the third paragraph, the

video begins automatically: The screen turns gray and then black, and in white

30. Hernandez, Richard Koci, and Jeremy Rue. The Principles of Multimedia Journalism: Packaging Digital News. New York: Routledge, Taylor & Francis Group, 2016.

lettering, the text of the previous paragraph appears and the author is heard reading the

text as the video begins showing moving images of a horse track; as the 50-second

video finishes, the title of the piece “The Jockey” appears over video of race horses

breaking from a starting gate.31 Gone is the ‘cold trigger’ that once held the video in

reserve; now, the video is cross-integrated with the text and it plays in pattern with the

pace of the reader’s eye movement through the text. Several other videos replicate the

process throughout the multimedia production.
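
The mechanism can be sketched in a few lines of TypeScript. This is a hypothetical reconstruction, not the Times’ code—the IntersectionObserver API used below did not exist in 2013, when scroll-position listeners would have done the same work—but it shows how a video can play automatically, without a ‘cold trigger,’ the moment it scrolls into view:

    // A sketch of kinetic topography: each video plays as the reader reaches it
    // and pauses when it scrolls away. Illustration only; assumes the page marks
    // its muted <video> elements with the class "auto-video".
    const observer = new IntersectionObserver(
      (entries) => {
        for (const entry of entries) {
          const video = entry.target as HTMLVideoElement;
          if (entry.isIntersecting) {
            void video.play(); // browsers generally allow autoplay only when muted
          } else {
            video.pause();
          }
        }
      },
      { threshold: 0.5 } // fire when half of the video is visible
    );

    document.querySelectorAll<HTMLVideoElement>("video.auto-video")
      .forEach((video) => observer.observe(video));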

Like “Snow Fall,” the design of “The Jockey” is the same: This is a traditional

long-form newspaper feature, which has been converted to a digital platform. The

story design includes an opening teasing at the whole of the story, which is followed

by a thesis statement before falling back into the history of the subject and his

relationship with the sport.

One of the more interesting video presentations looks at Baze’s history with

sport-related injury. The video opens with a black screen with white text, which the

author Barry Bearak reads; the video then transitions into an audio narrative by the

track’s physician Dr. David Seftel; as for the video itself, we see what looks like a still

photograph of Russell Baze; as the doctor itemizes scores of injuries, lines of text

appear next to notations showing where the injuries occurred on Baze’s body; when

the narrative turns to injuries on the side of his body, something interesting happens:

Baze moves, and we realize that we are looking at video, not a still photograph.

“The Jockey” wasn’t nearly as successful as “Snow Fall” and critics believe

that maybe the success of “Snow Fall” could be attributed to the originality of the

31. Bearak, Barry. "The Jockey." The New York Times. August 13, 2013. Accessed March 05, 2017. http://www.nytimes.com/projects/2013/the-jockey/#/?chapt=introduction.

technology and the dynamics of the story’s content, but the tech mystery is gone now,

and maybe “Snow Fall” was just an anomaly.32 I think that “The Jockey” does succeed

in advancing the seamlessness of multimedia, which was something lacking in “Snow

Fall.” But one of the apparent and troubling things about these stories and “The

Crossing” is the volume of text-written content; the “cool” media content is

voluminous, and the problem here is that the audience is forced to consider “deep reading”

or “prowling” the text on the screen.

As you might recall, theorists Maryanne Wolf and Mirit Barzillai suggested

that “deep reading” is where the learning and reasoning come in and this sort of

reading experience can be fulfilling. Screen reading is much different: the glare of the

computer screen is certainly a factor and computers offer many distractions that can

break the concentration needed for “deep reading.” Instead, screen reading is really a

place for “prowling.” That aside, these two New York Times multimedia projects did

inspire a renewed interest in nonfiction story design and several copycat projects did

follow; one of them, produced by The Guardian, does wonders to advance the visual

effect even further. For that story, the reporters looked at a legacy event in the

Southern Ocean.

‘Firestorm’

As The New York Times was pulling “The Jockey” together, The Guardian, a

British daily newspaper, was looking to break into the Australian media market with a

32. Manjoo, Farhad. "“Snow Fall,” “The Jockey,” and the Scourge of Bell-and-Whistle-Laden Storytelling." Slate Magazine. August 15, 2013. Accessed March 05, 2017. http://www.slate.com/articles/technology/technology/2013/08/snow_fall_the_jockey_the_scourge_of_the_new_york_times_bell_and_whistle.html.

great first edition and the editors decided to produce something about the 2013

bushfires that burned across Tasmania. In May, the newsgroup published

“Firestorm,” which was a six-chapter multimedia package that fused text, video and

photographs to tell the story of Tim and Tammy Holmes as they struggled to keep

their family together as brushfires burned their farm. The story started as a ‘selfie,’

taken by the family, as they cowered neck-deep beneath a boat dock hiding from the

intense heat of the fires burning across their property. The photograph appeared on the

cover of The Guardian, and the family became the ‘face’ of a disastrous fire-ridden

summer on the remote Tasmania Island.33

And while the narrative arc of the essay is similar to “The Jockey” and “Snow

Fall,” “Firestorm” is much more video-friendly; it was also produced in the Parallax scrolling style, and the site has nearly seamless kinetic topography.

The essay opens with a look at the original still photograph, which includes the

name of the story and a paragraph of text explaining the inspiration for the story. As

readers look at the photograph, it morphs into a video, which includes sounds and

images of helicopters attempting to put the fire down. Scrolling downward takes you

out of the video, and Chapter 1 begins with more text, which is laid over a video of a

waterway with birds flying overhead.34

Scrolling further down opens the first video with a spoken narrative. As with

“The Jockey,” the video triggers automatically, showing Tammy and Tim Holmes

33. Henley, Jon, Laurence Topham, Guardian Interactive Team, Mustafa Khalili, and Francesca Panetta. "Firestorm: The Story of the Bushfire at Dunalley." The Guardian. May 26, 2013. Accessed March 05, 2017. https://www.theguardian.com/world/interactive/2013/may/26/firestorm-bushfire-dunalley-holmes-family.
34. Ibid.

speaking about their preparation ahead of the brushfires. After the 52-second video

ends, the reader must scroll down into the next block of text, where the story

resumes.35

The multimedia essay continues on that way throughout. At the beginning of

each chapter, a video begins and loops over and over until the reader scrolls further

down. Again, video interviews appear that include comment from the Holmes and

others explaining the entirety of the fire event.

Like “The Jockey,” the video is automated absent a ‘cold trigger,’ which

makes the video more closely tied to the text narrative; because some of the story is

wallpapered over video images, the narrative has more of a documentary film feel to

it.

Review of Experimental Multimedia

There are many successes and failures here in these experimental projects.

Looking at the successes, it is fair to say that each project makes a strong leap into

multimedia storytelling. Looking to Richard Wagner’s model, these stories attempt to

fuse oral and literal storylines together to create a transcendent experience; and with

the video and the graphics, there is a sense of motion… although, video playing on a

laptop screen can appear rather static when compared to a theater experience, for

example.

Another problem with all of these presentations is the fact that they are all

web-based browser presentations. To see them, and read them, one must use a desktop

computer and go to a dedicated webpage to experience the full production
35. Ibid.

aspects of each presentation. And, with the exception of “The Crossing”—which is

ancient by comparison—the other three multimedia stories employ the Parallax scrolling

technique to illustrate their value. As a collection, the works are solid, interesting

experiments in digital storytelling; there is text, there is video, there are photographs;

and the later works attempted to fuse the most dynamic of the media—text and

video—together but there are troubles.

In a column that appeared on the Slate website, tech writer Farhad Manjoo

admits that while The New York Times’ stories were visually interesting, he actually

never got around to reading the text of “Snow Fall” and “The Jockey.”

If I sound a bit clueless about “The Jockey” it’s because I didn’t read it. I
tried—but as soon as I scrolled down after reading the first few paragraphs, my
screen was overtaken by a pointless video introduction, complete with stirring
music, trumpets, and stock horseracing scenes. As the video loaded I clicked
away to something else. I tried to get back into the piece later in the day, but I
was waylaid by another video. And as you’ve probably guessed, I didn’t read
“Snow Fall,” either. I’ve tried to get through it half a dozen times, but every
time I found it just too big—too long, too visually distracting, too overproduced.
But “The Jockey” is even worse. With its constantly intercutting videos, its
unreadability almost feels intentional. Does the Times even care if I read the
story? Maybe you’re just supposed to scroll and watch the videos?36

Manjoo has a point. The length of the text-heavy narrative is daunting, extensive,

and—frankly—an anachronism of centuries of newspaper storytelling. The

Guardian’s approach to “Firestorm” is more refreshing, more video-centric and easier

to consume on the Internet. But desktop computers were never a great place to engage

the written word anyway. Computer screens certainly have never been a comfortable

place for “deep reading,” which is really what the journalists at The New York Times

36. Manjoo, Farhad. "“Snow Fall,” “The Jockey,” and the Scourge of Bell-and-Whistle-Laden Storytelling." Slate Magazine. August 15, 2013. Accessed March 05, 2017. http://www.slate.com/articles/technology/technology/2013/08/snow_fall_the_jockey_the_scourge_of_the_new_york_times_bell_and_whistle.html.

were encouraging when they produced the copy for “Snow Fall” and “The Jockey”

and posted it on a browser-based platform.

Ideally, stories of this nature should be available on tablet computers but when

viewed on devices including the iPad, one has to find the multimedia story using a web browser; the automatic animations are replaced with ‘cold trigger’ videos, and

the mystique of the kinetic topography is lost.

Now, if we read these stories, we might observe that the narrative design of

each story fits the “Wall Street Journal Style.” Specifically, each story starts with a

teaser introduction (much like an overture in a theater production) that hints at the

story to follow. Then each of these pieces flows through the form revealing the story,

its relevance to a larger community, the histories, and so forth. The successes here rely

on the audience’s ability to recognize the pattern as something organically familiar.

We know story form. The anachronism—the thing that makes these pieces experimental—is the attempt to commingle various media to tell these stories. Before

digitalization, stories like “Snow Fall,” would have appeared in newsprint as text-

centric stories with photographs added. Now that these stories can be published

digitally, these newsrooms had an opportunity to experiment with a mixture of text

and video and photos and graphics. The results are mixed and, as Farhad

Manjoo points out, the projects continued to be prohibitively text-centric.

Looking to Richard Wagner’s model for guidance, he believed that multimedia

storytelling should be a fusion of poetry, song, dance and emotion. In an effort to find

modern equivalents, I’d like to suggest that “poetry” or media created for the mind has

transformed into text written storytelling, “song” or media created for the heart has

transformed into oral storytelling including the spoken word and music, and “dance”

or media created for the body has morphed into video, which replicates the motion of

the human body.

Returning to my critique of these projects, I think these text-heavy stories

placed too much emphasis on the “poetry” of story with too little attention to

Wagner’s “song” and “dance” components. In “Snow Fall” specifically, the 10,000-

word essay was conceived of and written by reporter John Branch and the other media

elements were crafted in reaction to his work, making them, in effect, afterthoughts.

The inception of “The Jockey” appears to be better coordinated and the media

components include narration by reporter Barry Bearak, but again, this story is text

heavy. With “Firestorm,” the story design is much more integrated: The presentation

includes video pages with much of the text story laying over it. As the audience reads,

there is slight motion with sound playing which loops over and over again creating a

pleasing but not-too-distracting kinetic symmetry. Did the “Firestorm” producers

create something approaching Richard Wagner’s vision? The project is certainly

aggressive and groundbreaking. It has poetry, dance, song… but does it have

emotional appeal? To the credit of The Guardian producers who created “Firestorm,”

they certainly broke new ground fusing text with video in a way that transports the

audience.

What’s disappointing about these works is the fact that there appears to be a

halt in similar productions. Very few, if any, groundbreaking multimedia stories have emerged lately, and the work that does exist may soon disappear.

The Internet is ‘Disappearing’

The Internet is also a very dangerous place; or, at the very least, it’s not a very

reliable place for archival content. Things that are published on the web today can, and

often do, disappear within a few months. This is counter to the popular understanding

of the Internet: We are told that once something appears on the Internet, it will be

there forever. This is not entirely true. Often, salacious materials—nude photos,

awkward political moments—will live in the ether of the Internet for years, but web

pages, themselves, are more vulnerable. The variable here is the software used to

create the website itself: Most webpages are hosted by individuals and/or small

companies who do not have the resources (or the interest) to maintain the software that

creates these sites. Because the sites are managed by software protocols, they are vulnerable to software upgrades, and those upgrades often degrade the

quality of older websites. Eventually, these sites deteriorate to a point of corruption

and appear in the web browser as “broken links.”

Journalist Adrienne LaFrance wrote about this vulnerability in the Atlantic

Monthly in late 2015 warning us that even great works of journalism like “The

Crossing” can be swept from the digital world without a second thought:

If a sprawling Pulitzer Prize-nominated feature in one of the nation’s oldest
newspapers can disappear from the web, anything can. “There are now no
passive means of preserving digital information,” said Abby Rumsey, a writer
and digital historian. In other words if you want to save something online, you
have to decide to save it. Ephemerality is built into the very architecture of the
web, which was intended to be a message system, not a library.37

37. LaFrance, Adrienne. "Raiders of the Lost Web." The Atlantic. October 14, 2015. Accessed March 04, 2017. https://www.theatlantic.com/technology/archive/2015/10/raiders-of-the-lost-web/409210/.

Of course, “The Crossing” isn’t the only at-risk work of Internet publishing. The

Internet is a conduit that is managed by software and it is a storehouse of information,

which is discovered by software-driven browsers. Every time the software for these

systems is altered, the potential for loss increases.

It amounts to this: In 2005, HTML4 was the premier markup language for

webpage coding; if it was on the Internet, it was probably written in HTML4 code. In

2014, HTML4 was superseded by HTML5 and, when the new standard went live, websites built with HTML4 began degrading; each systemic improvement to the new web coding only advanced the degradation further. When a web

browser lands on an outdated web page, information is dropped. When the HTML

code of a web page falls into complete obsolescence, the web browser shows the web

address as a ‘broken link.’38
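
The decay is easy to observe programmatically. The sketch below—a hypothetical audit script, written in TypeScript and assuming a runtime with the standard fetch API—requests each address in a list and reports which ones still resolve; an address that fails or returns an error status is, for the reader, a ‘broken link’:

    // A sketch of a link-rot audit (illustration only). Each URL is requested;
    // failures and error statuses are the programmatic face of a "broken link."
    async function auditLinks(urls: string[]): Promise<void> {
      for (const url of urls) {
        try {
          const response = await fetch(url, { method: "HEAD" });
          console.log(`${url} -> HTTP ${response.status}`);
        } catch {
          console.log(`${url} -> unreachable`);
        }
      }
    }

    // Example: the resurrected home of "The Crossing," cited earlier.
    void auditLinks(["http://thecrossingstory.com/chapters/1.html"]);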

So, without any updated coding, “The Jockey” and “Snow Fall” and

“Firestorm” could one day disappear from the Internet without a trace. Moving stories

like these and others away from a browser-based system would offer a possible

solution; doing so would require the producers to craft the work for an App-based

environment, a move that would replace the need for HTML coding and web browser

technologies but, again, I suggest that the Internet is not a library.

According to Alphabet’s Executive Chairman Eric Schmidt, the Internet may

be disappearing anyway. It was an odd thing for the executive from the company that

owns Google to say. But he said it during a leadership conference in Davos,

38. Gustafson, Aaron. "Adaptive Web Design: Crafting Rich Experiences with Progressive Enhancement." Accessed March 05, 2017. https://adaptivewebdesign.info/1st-edition/read/.

Switzerland in 2015, and what he meant was that the Internet was becoming more like

a utility than a telephone system.

There will be so many IP addresses… so many devices, sensors, things that
you are wearing, things that you are interacting with that you won’t even sense
it. It will be part of your presence at all time. Imagine you walk into a room,
and the room is dynamic. And with your permission and all of that, you are
interacting with the room.39

What he means is this: Every time you download an App or connect an appliance to a

WiFi signal, you are bypassing web browsers and connecting those items directly to

the Internet. Suddenly, consumer electronics will be self-driving, self-repairing and

reactionary; light bulbs and alarm systems will turn on and off as you move through

and around your home; failing systems will seek repairs; thinking devices will

improve their abilities… and so forth. With all of this, the browser will fall away, and

the Internet will grow in ubiquity but will also fade into the unobvious places or

shadows that house our water and sewer and electrical systems.

And if that’s the case, news organizations including The New York Times, The Guardian and The Washington Post are going to shift away from their primary spots in the Internet dominion and into the more protected environment of application software and a presence on the next generation of consumer electronics: specifically, smartphones and tablet computers and the technologies that follow.

The List

In 2007, just months after I was hired to teach digital journalism, I began

searching for a possible list of standards for multimedia storytelling. At the time, I

39
Smith, Dave. "GOOGLE CHAIRMAN: 'The Internet Will Disappear'." Business Insider.
January 25, 2015. Accessed March 05, 2017. http://www2.businessinsider.com/google-chief-eric-
schmidt-the-internet-will-disappear-2015-1.

could not find one. As I moved through the succeeding semesters, I found myself

returning to the same protocols, which I shared with my students. In time, we had a

working list of 10 principles for digital storytelling, which are as follows:

1. Digital Journalism integrates traditional media to tell one story
2. Technology is not journalism
3. Create digital content explicitly for that audience
4. Research, reporting, writing, editing, thinking: remain paramount
5. Accuracy, accuracy, accuracy
6. Let images be powerful; apply the same standard to writing
7. Be dynamic, be brief
8. Professional work must look professional
9. Understand each medium
10. Communicate with the audience40

When I was finished with the list, I published it online.

The purpose of this list is simple: I wanted to articulate suggested standards for

the emerging new journalism form. The first simply defines multimedia storytelling.

The second attempts to address the misconception that owning a camera makes you a

professional journalist. The third rule addresses the wholesale practice of copying and pasting content from legacy media and publishing it in a digital form without reflection, which cheats the digital audience. Rules four and five are a nod to

the catechism of traditional news reporting. Rules six and seven attempt to convey the

fleeting nature of the digital audience. Rule eight was written in reaction to the poor

production issues prevalent on social media. Rule nine suggests that journalists need

not perfect all forms of media, but they should be aware of each medium and its

dynamic. Rule 10 is about respecting the audience and inviting them to participate.

Now, if I added an 11th idea to this list, it would be to “Archive everything” given

40
Scully, Michael. "10 Rules for Digital Journalism." Scribd. January 26, 2013. Accessed July
13, 2017. https://www.scribd.com/document/147681342/10-Rules-for-Digital-Journalism.

what’s been going on with “The Crossing” and the very real threat to these other

experimental works.

As for its source, this list is a byproduct of my own Gesamtkunstwerk, my

quest for a multimodal form for nonfiction storytelling. As it was for Wagner and

many, many others, my search for a multimedia ideal seems just out of reach. To get there,

I believe we must respect the traditions that have come before, we must embrace and

understand the potency of the various media, and we must drive for standards that will

attract an audience. Given the extreme dynamic between each of the media, it’s hard

to fathom a multimodal typography that will coalesce and congeal these various

communication forms. To Wagner, Gesamtkunstwerk was a 19th century revival of the

Greek tragedy, which mixed dramatic performance with song and dance. Since then,

the various forms of media have grown in volume, application and complexity. In the

modern age, Gesamtkunstwerk might be a cacophony of sight and sound and form

(and the subsequent clashing of divergent energies) that must be pared down and

presented in a way that a willing and practiced audience might have the ability to

embrace and perceive the whole of the performance.

Media professor Chris Salter put it this way:

Based on the Romantic notion of the artist as a conveyer of the sublime,


Wagner’s interest in appealing to the deepest emotions by way of a fusion of
media elements is also surprisingly contemporary. In a strange way, Wagner
already had command over what many contemporary creators are still trying to
sort out: the design of media carefully choreographed within a specifically
defined architected space to create a complete and total immersion of the
spectator’s senses, literally sweeping them into an emotional, hypnotic vertigo;
what Wagner scholar and editor Albert Goldman so aptly called a theater of
narcosis.41

41
Salter, Chris. Entangled: Technology and the Transformation of Performance. Cambridge,
MA: MIT Press, 2010. 2.

In his chapter in Inside the Ring: Essays on Wagner’s Opera Cycle, author Erick Neher offered this explanation of “theater of narcosis”:

Contemporary commentators spoke of a “theater of narcosis” in which the


spectator, induced by the hypnotic music and the dark setting into a sort of
trance, would form a direct connection with the emotional core of the drama,
unhindered by rationalization.42

One cannot help but be inspired by Wagner. Imagine a storytelling experience that

moves the audience beyond “rationalization” and into a transformative state of

appreciation for the experience. Is Gesamtkunstwerk the final destination?

Drawing from Wagner and others, my vision of multimedia storytelling, or

storytelling in the Digital Age, is one that coordinates the various media to tell a

completed story in an artful way; I diverge from Wagner, though, on the sense of

“space,” which he perceives as a physical one—the theater—while I see it as

something more ethereal, more digital. With the Cyclorama, for example, the audience

must visit the performance hall and sit in the theater to experience the event; in a

digital context, the performance can be exhumed from time and space and presented,

instead, as a multimodal performance in augmented reality as people walk the turf of the old Atlanta battlefield, or as a virtual reality experience that completely envelops and immerses the audience in a synthesized regeneration of the actual battle. Consider

the Duende one might experience riding into battle just behind Union General John

“Black Jack” Logan.

As for the content itself, all of the elements I addressed in this chapter suggest

attempts toward a transcendent and artful media experience. “The

42
DiGaetani, John Louis. Inside the Ring: Essays on Wagner's Opera Cycle. Jefferson, NC:
McFarland &, 2006. 174.

Jockey” is a text-based story with interesting cross-integrated video components, while “Firestorm” is a video-based story with interesting cross-integrated text components. If one could add the positive aspects of audience participation through social media and/or augmented or virtual reality, these experiences would be richer. And

then, of course, there needs to be a greater sense of permanence and ubiquity.

Wouldn’t it be great if these digital stories were available beyond the reach of

the Internet-based browser? And beyond the reach of the desktop computer? How

would they appear in a VR or an AR environment? And, finally, what would be

learned if we could visit the physical sites of their origin—the train crossing, the

racetrack, the ski resort—and witness for ourselves many of the details offered in the

digital storyline?

Chapter 6

Media Observations

Media are forms of conveyance, packaging information in user-friendly

formats for various and receptive audiences; but there is a lot more going on here.

Walter Ong tells us that various media have changed the structures of the human

thought process. We consume text very differently from the way we consume video or

sound; and these variations transform the context of the message. The dynamic of

the human cognitive process has also evolved, and possibly improved. Ong writes:

But it can be misleading, encouraging us to think of writing, print, and


electronic devices simply as ways of “moving information” over some sort of
space intermediate between one person and another. In fact, each of the so-
called “media” does far more than this: it makes possible thought processes
inconceivable before. The “media” are more significantly within the mind than
outside it. With the use of writing or print or the computer, the mind does not
become a machine, reducing its generation of thought (concepts) to shaping, its
memory to “storage,” its recalling to “retrieval,” its sharing to “circulation.”
But writing and print and the computer enable the mind to constitute within
itself—not just on the inscribed surface or on the computer programs—new
ways of thinking, previously inconceivable questions, and new ways of searching
for responses.43

Marshall McLuhan was acutely aware of this idea and distilled it nicely when he offered his mantra: “The medium is the message,” which translates

into the fact that the same information, either spoken or written, can be perceived

differently by each member of the audience. There’s something more: the spoken

word, or orality, is a form of performance; for the spoken word to be understood, there

must be an audience; absent an audience, the spoken word evaporates into

nothingness. Orality is the fabric of community—of clans—and the living

embodiment of language.

43
Ong, Walter J. Interfaces of the Word: Studies in the Evolution of Consciousness and
Culture. Ithaca: Cornell UP, 1982. 46.

The written word is different. Once written, the text on the page exists beyond

the scope of the author and can have a lifecycle that far outlasts the author. The written

word can dwell in perpetuity waiting for its audience to discover it, and while there is

an audience here, there is no performance… the sender is missing when the message is

finally received. McLuhan laments the loss of cultural performance.44

Which brings us to the quandary of multimedia storytelling: Orality is an

outward-reflecting medium; literacy is an inward-reflecting medium. The spoken word

is alive and public and natural; the written word is internal and confined to the silence

of the mind. These two forms of communication are diametrically opposed, raising the question: is it even possible to make them adhere together to form the same story?

Communication theorists including Joshua Meyrowitz worry that the gulf

between oral and literal is just too broad:

J.C. Carothers, Jack Goody and Ian Watt, Eric Havelock, A.R. Luria, and
Walter Ong have studied various aspects of the shift from orality to literacy.
They have argued convincingly (indeed, much more convincingly than Innis
and McLuhan) that literacy and orality involve completely different modes of
consciousness. They describe how the introduction of literacy affects social
organization, the social definition of knowledge, the conception of the
individual, and even types of mental illness.45

But these theorists were writing about a world before the Internet, and the Internet has

certainly been a catalyst in the cultural paradigm shift away from the dichotomy

formed by the separate and neutral worlds of oral and literal media. Given that all

media have been converted to digital formats and given that the Internet is the realm of

44
McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964.
45
Meyrowitz, Joshua. No Sense of Place: The Impact of Electronic Media on social Behavior.
New York: Oxford University Press, 1985. 18.

digitized matter, wouldn’t it stand to reason that the Internet has given us the potential

for associated media? Meyrowitz thinks so:

The important underlying principle is firmly rooted in systems theory and


ecology: When a new factor is added to an old environment, we do not get the
old environment plus the new factor, we get a new environment. The extent of
the “newness” depends, of course, on how much the new factor alters
significant forces in the old system, but the new environment is always more
than the sum of its parts.46

So, what will this new environment look like? And finally, what is the future of

storytelling in the Digital Age?

We are just two decades into the Digital Age—the idea of a commercialized

Internet was born (in the United States) in 1996—and its usefulness and its potential

are still being realized. Ideally, we would like to keep the best of each medium.

Printing, by its very nature, can be archived and preserved; radio and television are

immediate, ubiquitous and easily consumed. Cold media have a long shelf life; hot

media arrive and thrive and die instantaneously. Meyrowitz suggests that these

characteristics help the consumer define the value of each medium:

One goes after printed messages that are worth the trouble. One’s collection of
books often represents select sets of messages painstakingly gathered over
time. A personal “library” usually ties the individual into the information
network of a group or a small constellation of groups. The personal library,
therefore, also tends to isolate the individual from other groups and their
information. Since one needs to search for individual books, one tends to
“find” books on topics one already knows about and is interested in.

Many electronic messages are chosen with less care and discrimination. We
often spend more time deciding over the model of a radio or a television set
than we do selecting the particular broadcast program. People tend to choose a
block of time to watch television rather than choose specific programs.47

46
Ibid., 19.
47
Meyrowitz, Joshua. No Sense of Place: The Impact of Electronic Media on social Behavior.
New York: Oxford University Press, 1985. 82.

It is this selection process that defines our prejudices between oral and literal media:

we collect books, which gives them an intrinsic ‘commoditized’ value; we ‘channel surf’ television looking for programming that will amuse us but we don’t get overly

emotionally invested in what we watch… and when we turn off the television, the

medium evaporates, departing the room, leaving no trace. Television and radio are true

ephemera; books are keepsakes. But why?

Given that the process of reading is such a complex exercise, we may feel,

subconsciously, that the labor of reading has earned us the entitlement to keep the

book as a symbol of that engagement:

Reading is hard work even for the literate. The black shapes on this page, for
example, must be scanned, word after word, line after line, paragraph after
paragraph. You are working hard to receive this message. To read these words,
your eyes have been trained to move along the lines of print the way a
typewriter carriage moves a piece of paper. When you get to the end of the
line, your eyes dart back to the left margin and move down one line. And just
as musicians often hum the notes they see on sheet music, many readers tend to
sub-vocalize the sounds of the words they are reading. Because of the energy
involved, people will not bother to read every book that comes to their
attention. People are more likely to search for specific books in which they are
actively interested and that justify all of the effort of reading them. Electronic
images and sounds, however, thrust themselves into people’s environments,
and the messages are received with little effort. In a sense, people must go after
print messages, but electronic messages reach out and touch people. People
will expose themselves to information in electronic media that they would
never bother to read about in a book.48

And television and radio have a much broader reach than the book. Meyrowitz writes

that a book can reach the best-seller list just by selling 115,000 hardcover copies,

while a successful primetime show can find an audience of 25 to 40 million viewers in

a single episode.49

48
Ibid., 84.
49
Ibid., 85.

The dynamics here are massive.

Setting television and radio aside, Walter Ong looks at the power of recorded

media; specifically, he writes about recorded music and recorded film and videotape.

He says that these recordings represent something he defines as the ‘secondary

orality,’ or oral media different from what came before.

Secondary orality is both remarkably like and remarkably unlike primary


orality. Like primary orality, secondary orality has generated a strong group
sense, for listening to spoken words forms hearers into a group, a true
audience, just as reading written or printed texts turns individuals in on
themselves. But secondary orality generates a sense for groups immeasurably
larger than those of primary oral culture— McLuhan’s ‘global village’.
Moreover, before writing, oral folk were group-minded because no feasible
alternative had presented itself. In our age of secondary orality, we are
groupminded self-consciously and programmatically.50

Ong describes the new oral movement as a reconstruction of the old oral traditions, but

with one looming difference. Before the printed word, the peoples of the world were

naturally oral people, which meant they were also naturally group minded; after the

printed word deconstructed oral traditions, recorded music and video began

reconstructing these audiences, but we returned to the group aware of our sense of

self; we were, as Ong described it, returning to the ‘global village’ to be united again,

unaware that we were merely standing alone together in a crowd. (Example: Consider the teenager at the crowded bus stop wearing heavy insulating headphones wirelessly

connected to the pop music streaming from a smart phone, and you’ll get the idea.)

Gutenberg’s invention has done complex things to the consciousness of man.

Earlier on, I wrote that Marshall McLuhan believed that “schizophrenia may be a

50
Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen,
1982. 134.

necessary consequence of literacy.”51 Because of our literacy, we have shattered

ourselves, broken and divorced from the musings of the man in his natural state. We

are literal men in an oral world. As Ong suggests, we have no path to reparation;

instead, we must acknowledge our loss of the natural self and return to clan with the

purpose of being both oral and literal. The only difference is the fact that we are now

aware of the oral and the literal.

Political philosopher Jean-Jacques Rousseau struggled with the idea of the

“natural man” and the loss of innocence. Natural man, he believed, lived in a state of

bliss inside himself unaware of the trappings of knowledge, while the modern man

lived outside himself consumed with the conflicts of alienation and being alone.

Natural man didn’t even care to acknowledge these things.52

Clearly, literacy has transformed the human psyche and the emergence of this

new ‘secondary orality’ offers a new promise of reparation of our sense of belonging

in the world; we will never be as raw as the pre-literal man roaming free across the

savannah; however, we can come to terms with our literacy and repair our oral selves;

we just need to get past the “bowling alone” syndrome and begin to live again in a

social world.

So what would that look like exactly?

Literacy defines the individual; conversation defines the societal; radio,

television, and newsprint define the idea of “local” or association by geography; and

the digital defines the global. Inside the fabric of society (in all its forms), the human

51
McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto:
University of Toronto Press, 1962. iix.
52
Damrosch, Leopold. Jean-Jacques Rousseau: Restless Genius. Boston: H. Mifflin, 2007.
238.

animal has an acute need for identity… and this is the reason we yearn for stories (for

narrative). Richard Kearney writes about this idea at length:

If the need for stories has become acute in our contemporary culture, it has
been recognized from the origin of time as an indispensable ingredient of any
meaningful society. In fact, storytelling goes back over a million years, as
scholars like Kellogg and Scholes have shown. The narrative imperative has
assumed many genres—myth, epic, sacred history, legend, saga, folktale,
romance, allegory, confession, chronicle, satire, novel. And within each genre
there are multiple sub-genres: oral and written, poetic and prosaic, historical
and fictional. But no matter how distinct in style, voice or plot, every story
shares the common function of someone telling something to someone about
something.53

Stories define society. We must invest more energy in the design of our best stories

and we must work to preserve the best of our stories. On this last point, I find myself

reflecting upon Kevin Vaughan and his work to repair and republish “The Crossing.”

It seems grotesque that this dynamic work of storytelling disappeared from the

Internet over something as simple as the demise of an ailing newspaper. And while it

has been restored and returned to the Internet, it still lives on the edge of extinction.

There are no guarantees that this history will survive another decade or last out the

century.

As for a final word on the value of storytelling, author Moses Hadas had this to

say:

Nothing is so interesting to men as the lives of their fellow-men. People who


care for no other form of literature will read biography; and novels and plays
are acceptable only as their characters are credible human beings. For bookish
people, at least, greatest interest naturally attaches to lives which have shaped or given direction to cultural tradition; even in a metropolis where rank is conferred by Mammon intellectual achievement comes into its own in the
obituary columns, where a writer or composer at last takes precedence over a
banker or a politician.54

53
Kearney, Richard. On Stories. London: Routledge, 2009.
54
Hadas, Moses. Ancilla to Classical Reading. Pleasantville, NY: Akadine Press, 1999. 122.

Oddly prescient here is his use of the phrase “writer or composer,” which suggests both

the literal and the oral producer. This—from a book authored in 1954.

Which brings us to Robert Logan’s ideas about the “tertiary” or “digital

orality.”

In this next phase, digital orality takes text and presents it in an electronic

format. Specifically, written ideas are translated into binary code and transmitted over

the digital network from one user to another in the form of email, text messages and

instant messages; given the fact that this content is digital, it can also be altered,

repackaged and reconstituted as enhanced versions of the original. This media

interaction is in stark contrast to the electronic media that came before—specifically,

radio and television—simply because electronic media were transmitted in one direction, absent an opportunity for exchange. Digital orality is about interaction, a fact

that brings the audience closer to the performer.

The single most important factor of this “new media” in contributing to the
closing of the gap between the producers and consumers of media is the fact
that the Internet provides each potential artist, journalist, filmmaker, radio or
television show producer with a distribution channel with global reach.
Although it is true all of the “new media” gadgets make it easier to produce
content, distribution is king from the point of view of the producer. The
stranglehold that mainstream media had on distribution disappeared with the
advent of the Internet. This does not spell the end of mainstream media, but
they have had to learn to partner with their consumers, as some of the
examples in the above paragraph demonstrated.55

With advances in virtual reality and augmented reality tools, this distance is further

reduced. Especially with regard to the VR headset, the virtual reality experience places

the audience inside the experience as perceived by the producer. In other

55
Logan, Robert K. Understanding New Media: Extending Marshall McLuhan. New York:
Peter Lang, 2016. 67.

words, by wearing a VR headset—one that covers the eyes and ears—the audience

user is seeing and hearing what the producer intended them to see and hear; when the

user moves his head up and down and side to side, the VR headset replicates the visual

and audio experiences the producer found when he created the digitized event.

Wearing a VR headset projects the experience of the producer into the sensuous orbit

of the audience user creating an interactive event. In this way, the producer and the

audience are very nearly one.

But this is where Robert Logan’s perception of the new orality ends, and I anticipate another phase of development in this new Digital Orality… one that fuses the producer

and audience experiences into the same experience. But before I get to that, we should

talk about the current economic forces preying upon media convergence.

Digital Economies

There are three basic steps in the media process: the first is production, the

second is packaging and the third is delivery.

Looking to the newspaper as an example, the daily paper is conceived,

reported, written and edited within the course of a single business day; it is then laid

out in a process called pagination before being shipped to the printing presses; after

the papers have been run, they are loaded on trucks and hauled out into the delivery

areas and disseminated among a distribution network that moves smaller and smaller

bundles out into the world until, finally, the paper is hand-delivered to the steps of the

subscriber. Again, the process is production, packaging and delivery.

Unlike traditional commodities, electronic media don’t benefit from the

economic influences of scarcity; scarcity is the economic variable that determines how

limited revenues and resources are spent. Media including television and radio don’t depreciate or appreciate in value as they are consumed; there is no sense of

degradation; and these are commodities in infinite supply, which means that scarcity

isn’t a factor determining their street value. Economist Gillian Doyle put it this way:

“However much a film, a song or a news story is consumed, it does not get used up.”56

Instead, the profit model for television and radio media is found in two streams:

first is in content, or advertisers paying to have a message attached to a specific topic;

the second is audience, or advertisers paying to have a message appear before a certain

demographic.57 So, advertising for beer and sports cars will more likely find its target

audience during an evening broadcast of a baseball game than during an afternoon

soap opera.

But Doyle warns that the traditional broadcasting model is stuck in a cyclical

business model that has largely gone unchanged:

For broadcasters, however, the cost of putting together and transmitting a given
programme service is fixed, irrespective of how many viewers tune in or fail to
tune in. Similarly, few savings can be made by newspaper and other print
media publishers when circulation fails to live up to expectations (although,
unlike in broadcasting, marginal print and distribution costs are present).58

Therefore, economies of scale are the most potent form of financial gain for these

various broadcast media.

56
Doyle, Gillian. Understanding Media Economics. London: SAGE, 2013. 10.
57
Ibid., 1-16.
58
Ibid., 13.

Marginal costs are virtually always lower than average costs. Consequently, as
more viewers tune in or more readers purchase a copy of the magazine, the
average costs to the firm of supplying that product will be lowered. If average
production costs go down as the scale of the consumption of the firm’s output
increases, then economies of scale and higher profits will be enjoyed.59
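Doyle’s arithmetic can be made concrete. A minimal sketch, using illustrative numbers of my own rather than figures from Doyle:

\[
AC(q) = \frac{F}{q} + m
\]

Here \(F\) is the fixed cost of producing and transmitting a programme, \(m\) is the near-zero marginal cost of reaching one additional viewer, and \(q\) is the audience size. If \(F\) is \$10 million and \(m \approx 0\), the average cost per viewer falls from \$10 at one million viewers to \$1 at ten million: the programme costs the same to make, so nearly every added viewer is pure margin.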

This was especially true when most media still possessed analog signatures. Analog is

the pre-digital form of a medium. So, before

photographs were digitized, they existed as photochemical shadows of captured light

trapped and stored on the gelatinous glazed surface of a plastic ribbon of film. In the

1990s, that all changed when technology improvements replaced the chemicals and

the plastic film with an electronic light sensor—known as an image sensor—which

had the ability to ‘witness’ the light and transform it into a digital code of zeros and

ones. Once digitized, the photo image could be moved from the camera to a computer

and onto the Internet more freely. It also made the image more pliant, creating the

potential for photo editing, image enhancement, and photo manipulation.
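What “a digital code of zeros and ones” means for a photograph can be shown directly. A minimal sketch in Python, with invented pixel values: each pixel is stored as a number (here 8-bit grayscale), and each number is ultimately a string of bits:

```python
# A digitized image is just an array of numbers; each 8-bit value below is one
# pixel's brightness, and its binary form is the "zeros and ones" on disk.
pixels = [
    [0, 64, 128],
    [192, 255, 32],
]
for row in pixels:
    print(" ".join(format(value, "08b") for value in row))
# 00000000 01000000 10000000
# 11000000 11111111 00100000
```

Because the image is now arithmetic rather than chemistry, editing, enhancement and manipulation become matters of changing numbers.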

Digitizing music was probably the worst thing technologists could do to the

music industry. Before digitization, analog music was available on record albums and

later cassette and 8-track tapes; in the 1990s, with the advent of the compact disc,

companies began digitizing music and the whole record industry shifted. For a time, it

looked like things were going to work out and then the Internet came along and the

world discovered it could shift digitized audio tracks around very easily; pirate

services including Napster and others created a bootleg culture that had normally law-

abiding citizens—mostly college-aged students—breaking the law, downloading

volumes of stolen music files. Stephen Witt writes about this in his book How Music

Got Free:
59
Ibid., 14.

I am a member of the pirate generation. When I arrived at college in 1997, I
had never heard of an MP3. By the end of my first term I had filled my 2-
gigabyte hard drive with hundreds of bootlegged songs. By graduation, I had
six 20-gigabyte drives, all full. By 2005, when I moved to New York, I had
collected 1,500 gigabytes of music, nearly 15,000 albums worth. It took an
hour just to queue up my library, and if you ordered the songs alphabetically
by artist, you’d have to listen for a year and a half to get from ABBA to ZZ
Top.60

In the early 2000s, things got so bad, the Record Industry Association of America

began stalking American universities, filing “John Doe” lawsuits that forced many of

the nation’s top universities to surrender campus Internet traffic data; the RIAA then

used that data to find high-volume data consumers who were most likely pirating

music. This was done to track down the worst offenders and to scare others—students

and universities—into compliance with the copyright licensing laws.61 By 2010,

the damage had been done, the record industry was on the brink of collapse, and billions of illegally downloaded songs were populating the MP3 players of an entire generation. The fallout had sweeping repercussions: Possibly as a byproduct of the

bootleg movement, an entire generation—the Millennial generation and some Gen-

Xers—has no interest in paying for anything on the Internet; certainly not music (and

as an unhealthy byproduct, news content), especially if there are ways to get these

things for free.

The odd thing here is this: I believe that because analog music came on a

tangible piece of media—the record album, the cassette tape, the CD—people didn’t

mind paying for the “packaging.” But when we translated music into MP3s and began

moving them around the Internet, the absence of tangible media created the illusion

60
Witt, Stephen. How Music Got Free: The Inventor, the Mogul, and the Thief. London:
Vintage, 2016. 1-5.
61
Ibid., 1-5.

that nothing was really being stolen. Returning again to the three phases of media—

production, packaging, delivery—the Internet has streamlined the first and the last, but

the issue of “packaging” remains unresolved and that problem may actually be the

catalyst leading to the basic misbelief about public piracy: digital downloads aren’t

theft… simply because MP3s aren’t tangible artifacts.

In their book, World of Warcraft and Philosophy: Wrath of the Philosopher

King, Luke Cuddy and John Nordlinger address this very issue:

…music pirates who download the [World of Warcraft] soundtrack are taking a
digital good: the soundtrack itself is not a physical item. It is debatable whether
digital goods can be stolen per se: if I download a pirated copy of the
soundtrack I do not “take” it from anyone; I make a copy of a digital item that I
may or may not have otherwise purchased. Nonetheless, piracy and the “theft”
of digital goods will be immoral on at least two of the theories considered
earlier, rule utilitarianism and deontology.62

Basically, absent the packaging, audio files are—like radio and television—immune to

the economic influences of scarcity. Again, if consumption of the medium doesn’t

degrade the quality or quantity, it’s hard to assign a value to these commodities. But

Cuddy and Nordlinger warn that stealing these digital items has the ability to kill

creativity entirely:

The rule utilitarian will claim the following: if it were moral to pirate digital
goods, then producers of digital goods would no longer be guaranteed an
income. This would create a great deal of anxiety. Consumers of digital goods
would gain some immediate happiness, but they too would be caused pain as
producers would soon stop making virtual goods (on account of its
unprofitability). Because piracy creates more pain than pleasure, it is immoral
to the rule utilitarian. The deontologist also holds that piracy is immoral. We
cannot have a moral law that licenses piracy: if everyone pirated digital goods
then nobody would produce them, and so there would be no digital goods to
pirate.63

62
Cuddy, Luke, and John Nordlinger. World of Warcraft and Philosophy: Wrath of the
Philosopher King. Chicago: Open Court, 2009. 178.
63
Ibid., 175.

So, this nearly-permissive culture of digital MP3 piracy has lasting and dire

consequences. And music and photographs weren’t the only media to move from the

analog to the digital. In fact, one of the key catalysts transforming the data delivery

system is the fact that most major media can now be digitized. This digitalization

allows for text, photos, video and audio files to be transmitted swiftly between

recording devices and computer platforms before being moved over to the Internet, as

Doyle suggests here:

The fact that media content can be reduced to a string of zeros and ones and
distributed electronically means that it is ideally suited to dissemination over
the Net. Of course, this is also true of other knowledge-based intangible goods,
for example computer software. The general implication is that the growth of
the Internet represents an opportunity to distribute media content either as
existing or as new products over an additional delivery platform at a very low
marginal cost.64

Clearly, digitalization has had sweeping influence over the media formula. In the

digitized form, media content can cycle more fluidly through the production process.

Before photographs were digitized, the process of creating a still photograph was

rather cumbersome. The photographer went out into the field and shot a roll of film;

during the process, the photographer had to trust that his camera settings were

producing the images he was seeking because, unlike modern DSLR cameras, the

photographer could not see his work; after the images were shot, the photographer took the film into the darkroom to process the image using a chemical process that transformed invisible light images into a visible form called a “negative”; from there, light was cast through the negative and down upon a light-capturing piece of

64
Doyle, Gillian. Understanding Media Economics. London: SAGE, 2013. 151.

photographic paper, which secured the image in a final form. The process could take

several hours.65

With a DSLR camera, the photographer can aim and shoot, inspect the

photograph and transmit the image to an editor waiting in a distant newsroom who can

edit it and publish it within minutes—if not seconds—after the image was captured.

Time, it seems, is one of the things digitalization has given us in excess.

Philosopher Paul Virilio warns us about time. He tells us that humans are obsessed

with time and that our obsession has translated into a forward motion, a velocity away

from our sense of place and towards some great unknown. We are fleeing forward…

with no regard for place. He puts it this way:

We are thus witness to a phenomenon of ‘disanimalization’ followed by a


phenomenon of ‘dematerialization’: not only the animal (the pack animal, the
draught animal, the race animal) disappears to the advantage of the machine,
but the technological vehicle of transmission tends to disappear in its turn with
the rise of importance of the message transmitted, leading ultimately to the
instantaneity of radio and radar signals…. By-product of the steam engine and
in spite of the electronic motor, the automobile will have to wait for half of the
twentieth century to participate in the ‘information revolution’ as radios and
televisions make their way into the cabin along with the tentative introduction
of TV. Since its mechanical and thermodynamic (Cugnot) origins, and in view
of the very lively concurrence of the railway and commercial aviation, it must
be stated that with the automobile, the principle of the autonomy of
transportation continued to mask that of the information of transportation.

With electronics, we enter the period that is coming to completion today.66

To Virilio, media are just an extension of the race to some undetermined finish, and his objection is really to how media form part of our self-created maintenance of distraction.

65
Andrae, Monika, and Chris Marquardt. The Film Photography Handbook. Santa Barbara,
CA: Rocky Nook, 2016.
66
Virilio, Paul, and Michael Degener. Negative Horizon: An Essay in Dromoscopy. London:
Continuum, 2008. 154.

Time has a value, and the ability to transmit swiftly has a value as surplus; a surplus of time is a commodity, and that commodity could be shared across the social spectrum… but Virilio never addresses the value of the message, only the speed with

which it is delivered.

Philosophers Max Horkheimer and Theodor Adorno, however, take a keen

look at what is being transmitted and they do not like it. It is their belief that mass

culture is identical and that a select few corporations have coopted the power of the

media for the purpose of profit. In doing so, they set aside any pretense of the

aesthetic.

Under monopoly all mass culture is identical, and the lines of its artificial
framework begin to show through. The people at the top are no longer so
interested in concealing monopoly: as its violence becomes more open, so its
power grows. Movies and radio need no longer pretend to be art. The truth that
they are just business is made into an ideology in order to justify the rubbish
they deliberately produce. They call themselves industries; and when their
directors’ incomes are published, any doubt about the social utility of the
finished products is removed.67

What’s worse is the fact that they believe that the Hollywood machine has a grapple

hold over what is considered fair and amusing content and that everything is put

through this aesthetic blanching process—a ‘culture filter’—that bleaches out any

sense of color, form or flavor.

The whole world is made to pass through the filter of the culture industry. The
old experience of the movie-goer, who sees the world outside as an extension
of the film he just left (because the latter is intent upon reproducing the world
of everyday perceptions), is now the producer’s guideline. The more intensely
and flawlessly his techniques duplicate empirical objects, the easier it is today
for the illusion to prevail that the outside world is the straightforward

67
Adorno, Theodor W., and Max Horkheimer. Dialectic of Enlightenment. London: Verso,
2016.

continuation of that presented on the screen. This purpose has been furthered
by mechanical reproduction since the lightning takeover by the sound film.68

The result is a wholesale ‘stereotyping’ of culture where the real is supplanted by ‘real

style,’ or the illusion of manufactured reality as the substitution for what was once an

organic availability of truth. Before… to know Rome, one had to visit Rome; today…

all one needs to do is witness Rome on the projection screen.

And the dynamic range of film is diminishing. Because Hollywood dominates

the global film industry, we must look there for information, and Variety, the trade

journal for the film industry, continues to report that the number of high-budget films

is growing, but the total number of films is diminishing.69 The purpose here is to win

big with a few blockbuster, or ‘tentpole’ films that cost in excess of $100 million to

make; these films tend to also gross those same earnings during the opening weekend

in the theaters. As journalist James Rainey reported, in 2015 “over 25% of

the total box office came from just five films, well above the average of roughly 16%

from 2001-14 and the prior peak of 19% in 2012.” 70 So, when we go to the movies,

we have fewer choices; also, these larger blockbuster films tend to target broad

audiences, a trend that often diminishes the artfulness of the film.

We saw the same trend in the television industry. As television audiences

found more and more outlets for content, ratings began dropping and TV producers

68
Ibid.
69
Rainey, James. "‘Increasingly Dire’ Film Industry Has Fewer Winning Films, Studios
(Analyst)." Variety. March 04, 2016. Accessed March 09, 2017.
http://variety.com/2016/film/news/hollywood-dire-outlook-tentpoles-1201722775/.
70
Ibid.

needed to find solutions. The solution was to create ‘inoffensive’ programming that

appealed to the lowest common denominator.

[James] Webster’s work is representative of several conceptualizations that see


broadcast television content limited to a relatively narrow range of discourse,
one that is motivated by two factors: the need to maximize audiences (and
please advertisers) by providing inoffensive, lowest common denominator fare
with the widest possible audience appeal, and the duty that broadcasters, as
licensed public trustees, have to avoid offending audiences (e.g., with indecent
programming).71

So what we get now is flashy blockbuster films about super humans doing

extraordinary things in bright, explosive colors as they face off against other super

human antiheroes. The film industry, in its effort to remain solvent, has become a

visual playground for teenage fantasy and the adult audience has been left entirely

behind. This reduction in sophistication has amounted to a medium that sets action

above dialog, style above story, flash above substance. The story isn’t the

conversation, it’s the visual narrative created as the hero maneuvers around or smashes

through a trail of physical or human obstacles.

Director and actor Jodie Foster recently addressed this very thing: “Studios making bad content in order to appeal to the masses and shareholders

is like fracking—you get the best return right now but you wreck the earth…. It’s

ruining the viewing habits of the American population and then ultimately the rest of

the world.”72

71
Bryant, J. Alison. Television and the American Family. New York: Routledge; Taylor &
Francis Group, 2008. 52.
72
Ramos, Dino-Ray. “Jodie Foster Slams Superhero Movies, Compares Studios’ “Bad
Content” to Fracking.” Deadline. January 02, 2018. Accessed January 24, 2018.
http://deadline.com/2018/01/jodie-foster-black-mirror-superheor-movies-marvel-studios-dc-
1202234126/.

Up against this culture, there doesn’t seem to be much room for films featuring Humphrey Bogart’s brooding gaze as he contemplates his fleeting, yet-unfinished tryst with Ilsa against the backdrop of his war-torn, yet passively neutral

Moroccan night club.73

Clearly, Horkheimer and Adorno believe that—to Hollywood—great movies

aren’t the ones that make you think or feel; great movies are the ones that make a

massive profit. If stories define society, what sort of a society is being defined here?

Reacting to this, theorist Fredric Jameson has made some observations about

the potency of film upon culture. Specifically, he believes that film jolted us “out of a

print culture,” and, for a time, became something more:

That film has today become postmodernist, or at least that certain films have, is
obvious enough; but so have some forms of literary production. The argument
turned, however, on the priority of these forms, that is, their capacity to serve
as some supreme and privileged, symptomatic, index of the zeitgeist; to stand,
using a more contemporary language, as the cultural dominant of a new social
and economic conjuncture; to stand—now finally putting the most
philosophically adequate face on the matter—as the richest allegorical and
hermeneutic vehicles for some new description of the system itself.74

But he ends that thought explaining that film and literature no longer aspire for these

lofty heights; instead, he suggests that a new medium has the potential to take over

and form what he calls a “cultural hegemony”: That medium is video.

Which brings us back around to the potency of the AVCHD camera.

Video, like film, is the moving image… but video is digital; it is the digitalized

version of film; digital images are portable and this portability of the moving image

73
""Casablanca" Plot Summary." IMDb. Accessed March 09, 2017.
http://www.imdb.com/title/tt0034583/plotsummary.
74
Jameson, Fredric. Postmodernism, Or, the Cultural Logic of Late Capitalism. London:
Verso, 1991. 69.

makes it pliant and available to a new type of artist. As a result, video production is

relatively inexpensive, easily learned, and seamlessly published. It also represents the

rebirth of amateur production. Remember John Philip Sousa’s fear that audio

recording would kill the amateur? Affordable video tools had the opposite effect.

Jameson suggests that video restores the potential for artfulness in the hands of the

amateur. And this creative energy has the potential to undo the cultural malaise

dominated by the Hollywood film apparatus.

I have tried to suggest that video is unique—and in that sense historically


privileged or symptomatic—because it is the only art or medium in which this
ultimate seam between space and time is the very locus of the form, and also
because its machinery uniquely dominates and depersonalizes subject and
object alike, transforming the former into a quasi-material registering
apparatus for the machine time of the latter and of the video image or “total
flow.” If we are willing to entertain the hypothesis that capitalism can be
periodized by the quantum leaps or technological mutations by which it
responds to its deepest systemic crisis, then it may become a little clearer why
and how video—so closely related to the dominant computer and information
technology of the late, or third, stage of capitalism—has a powerful claim for
being the art form par excellence of late capitalism.75

That said, Jameson is actually articulating Virilio’s nightmare, and like Virilio,

Jameson has a point. Video is the vanishing point where time and space intersect.

Given the affordability of video, one cannot help but realize the potency of this new

media form.

And from there, one cannot help but acknowledge that digitized media—in all

its forms—offer economic and practical advantages for the creative amateur. Looking

at the whole of the media process, digital content has made the production process

faster and easier; digital content has accelerated the delivery model so successfully

75
Ibid., 76.

that media is now nearly instantaneous and almost global, which leaves us with

packaging.

Before we digitized content, packaging was only about the vehicle for delivery,

but content design is now just as important. In fact, the issue that must be answered is

how content should be designed and packaged to yield the largest audiences. The

economics of the digital realm are certainly a key factor but they don’t necessarily

have to be. As it stands now, the U.S. business model for media is a for-profit one.

Specifically, most of the news content generated by American media is produced by publicly-traded for-profit news groups including Comcast, 21st Century Fox, Time Warner, News Corp., Viacom and the Walt Disney Company; additionally, Amazon founder Jeff Bezos recently purchased The Washington Post; and The New York Times continues to be a

major player in media experimentation.

The problem here is the fact that for-profit news isn’t a very efficient way to

serve the public; capitalism is an inefficient system. Political theorist Charles Barone

put it this way:

In the radical perspective, capitalism is, despite its potential for economic
growth, a highly inefficient system. Capitalist growth and profit depend upon
keeping people’s propensity to consume artificially high. Millions of dollars
spent each year by capitalists trying to convince consumers to spend their
incomes on capitalist-produced goods and services, many of which radicals
believe do little to satisfy real human needs. Rather than overcoming scarcity,
capitalism thrives on perpetuating scarcity.76

And heaven forbid the content of news strays outside the ideals or the ideology of the

capitalist state.

According to radicals, the free flow of ideas under capitalism is free as long as
it does not challenge capitalist domination of the labor process. In the United

76
Barone, Charles A. Radical Political Economy: A Concise Introduction. Armonk, N.Y:
Sharpe, 2004. 134.

States, capitalists have direct control of the media. They own the major
network radio and television stations, newspapers, and other print media.
Corporate sponsorship and advertising also mean that television and radio
programming as well as the print media are dependent upon corporate funding.
Increasingly this is true even for “public” radio and television in the United
States. This makes it difficult to get funding for programming or print media
that air views and publish articles that are hostile toward or critical of
capitalism. Business interests are thus able to exert control over the major
channels of communication and this imparts a procapitalist mass media bias
according to radicals.77

Clearly, leaving media and publishing in the hands of the American corporate

structure can be fraught with concern. Corporate interests aren’t necessarily aligned

with the interests of the American people, and this disparity can act as a filter,

diminishing and/or eradicating social narrative. If stories define society, and

corporations are filtering the stories reaching the public, there must be gaping holes in

the public’s general understanding of the structure of society; or worse, the public is

subjected to the corporation’s subjective definition of the society.

The for-profit model doesn’t appear to be the solution… but, for now, it is the

process that seems to be leading the way. But it doesn’t have to be. In fact, we can go

in another direction.

Public Media

In 1967, as part of his Great Society legislative package, President Lyndon

Johnson pressed to create a public broadcasting network to serve “the public interest.”

In the legislation, he defined the “public interest” as something removed from the

marketplace. The end result was a pair of broadcast networks: PBS for television;

77
Ibid., 83.

NPR for radio.78 So, the idea of public funding for American broadcasting media isn’t

a new one. In 2014, the U.S. government allocated $445 million for the Corporation

for Public Broadcasting, which then funds about half of the operating expenses for the

Public Broadcasting Service and National Public Radio and their respective

programming.79 Of course, the Republican Party has been attempting to kill publicly-

funded media almost from the moment of its inception.

Those who, like members of the Reagan administration, believe that what
happens in the marketplace is always right found such a broadcasting system
unnecessary, even dangerous. Philosophically, the Reagan administration
could not support government-subsidized broadcasting. Politically, it settled
for merely reducing federal support and encouraging public broadcasting to
seek voluntary support elsewhere.80

And when the Reagan Administration cut the CPB budget, NPR turned to its

audiences and asked for financial donations, which it received in volume,

demonstrating the public’s connection with government-supported radio. Corporations

also stepped up, sponsoring programs across the PBS schedule.

And these public networks have done wonders for American culture. It was

PBS that sponsored a series of children’s programs including Sesame Street, Mister Rogers’ Neighborhood, Zoom, The Electric Company, and so forth. PBS also fostered

a series of adult programs including Masterpiece Theater, The PBS NewsHour, This

Old House, Antiques Roadshow, and Frontline. All these programs found audiences

and garnered public support; with regard to the news programming, shows like

78
Mitchell, Jack W. Listener Supported the Culture and History of Public Radio. Westport
(Conn.): Praeger, 2005.
79
Ibid.
80
Ibid.

Frontline addressed issues that appeared nearly forbidden on the traditional, for-profit

networks owned by CBS, Comcast, Disney and News Corp.

Government-funded media isn’t a new thing. In the United Kingdom, they

have the BBC; in Canada, they have the CBC; and in Australia, they have the ABC;

and these are just a few of the offerings. Globally, many countries across Europe and

elsewhere have government-funded media that sponsors not just radio and television,

but film and performance too.

On this last point, in 2014, the film Under the Skin starring Scarlett Johansson

was funded, in part, by a series of government-financed production groups including

Film4 Productions, the British Film Institute and Scottish Screen.81 The film cost an

estimated $13.3 million to produce,82 and it only made back $7.2 million in global box

office receipts;83 clearly, a box office failure… but was the production worth it?

Because Scottish Screen was involved, the film was shot almost entirely in

Glasgow, Scotland and the surrounding areas. Also, because of the limited budget—

the initial production costs were estimated around $35 million—the director decided to

shoot the movie “guerrilla style” in the bars and streets of the city. He also decided to

simply use extras from the community as actors in the film. All of these factors give

81
Thompson, Anne. "Why Jonathan Glazer’s ‘Under the Skin’ Took a Decade to Make
(VIDEOS)." IndieWire. October 23, 2014. Accessed March 10, 2017.
http://www.indiewire.com/2014/10/why-jonathan-glazers-under-the-skin-took-a-decade-to-make-
videos-190464/.
82
Wiseman, Andreas. "Under The Skin: At Any Cost." Screen Daily. March 24, 2014. Accessed March 10, 2017. http://www.screendaily.com/features/under-the-skin-at-any-cost/5069904.article.
83
"Under the Skin (2014) - Financial Information." The Numbers. Accessed March 10, 2017.
http://www.the-numbers.com/movie/Under-the-Skin#tab=summary.

the production a raw, immersive feel.84 The project also generated jobs locally as well

as cultural interest in Scotland globally.

Is this serving the “public interest?” Given the exposure to Glasgow, the local

artist community and other intangible factors, one could argue yes.

Looking at the Scottish Screen website, they also have funding for “Digital

Media,” or “non-broadcast” interactive content under the provision that the grant

winner have matching funds and a letter of interest from a third-party private

investor.85 Again, the purpose here is to stimulate the Scottish economy, but it’s also

looking to foster creative storytelling across the country.

Here in the United States, the state and federal governments offer tax

incentives and credits for film productions, there are also grants for the arts and the

humanities. But given the current political environment, the United States is moving

away from public funding for creative industries and creative arts, favoring, instead, a

corporate model, which has principles and motivations of its own. Corporations are

more concerned with profit than aesthetic and this continues to have a lasting effect on

the quality of the work being produced in the United States.

But, and let me articulate this now, what could happen if the U.S. government

reversed direction and began investing heavily in public art, public humanities and

public media? This shift could foster a generation of amateur-driven creativity and a

revolution in the aesthetic of domestic media.

84
Wiseman, Andreas. "Under The Skin: At Any Cost." Screen Daily. March 24, 2014. Accessed March 10, 2017. http://www.screendaily.com/features/under-the-skin-at-any-cost/5069904.article.
85
"Scottish Screen - Digital Media IP Fund." Scottish Screen. Accessed March 10, 2017.
http://www.scottishscreen.com/content/sub_page.php?sub_id=207&page_id=19.

Amateurism

The idea of the amateur has gotten a little lost in the current culture simply because everyone seems to strive towards something "professional," as though being considered as such means wealth and prosperity. In fact, the definition of the amateur as someone who does something for the joy of it and not for the financial reward seems to strike at the essence of the artist. Here in the United States, it seems almost naïve to think of oneself as "creative" or "artistic" without want for financial gain, as though money should be the motivation for all creative endeavors. Clearly, the American affinity for capitalism has so infected us that we've grown to worship the wealthy and despise the poor, based almost entirely on the perception that wealth is success and success is happiness. These days, it's hard to be a starving artist, and the safety nets that were once there to catch us are fraying all around us.

But there is still a romantic notion about amateur artists. There is a belief that someone working away in the garage, or basement, or some other negative cultural space, should be granted some leniency as they pursue their passion for a chosen art form; to be frank, the idea of that garage project has often been co-opted by the business community as the image of some idealistic entrepreneur bent on building the next great widget. But I've strayed from my main point: Americans have little room in their personal lives for creative endeavors, but we do retain some romantic notion about the process of artful creative energy… and finally, we possess the electronic tools and devices to begin creating again.

In the documentary Hearts of Darkness, A Filmmaker’s Apocalypse, director

Francis Ford Coppola is quoted reacting to the great potential for amateurs working

with modern tools:

To me, the great hope is that now these little 8mm video recorders and stuff
have come out, and some… just people who normally wouldn’t make movies
are going to be making them. And you know, suddenly, one day some little fat
girl in Ohio is going to be the new Mozart, you know, and make a beautiful
film with her little father’s camera recorder. And for once, the so-called
professionalism about movies will be destroyed, forever. And it will really
become an art form. That’s my opinion.86

Coppola said this some years before the invention of the digital video camera and the

commercialization of the Internet. Given those advances, Coppola might actually live

to see that “girl in Ohio” produce her masterpiece.

Communication theorist Henry Jenkins certainly believes there is an emerging trend of amateur media; he calls it a grassroots movement, and the catalyst returning amateur art to the public is the Internet. Corporate media have long dominated the landscape, repressing the amateur movement and relegating homemade art to the confines of the basement and the living room; anywhere, as long as it wasn't in the public eye. The Internet has unleashed the amateur, setting him loose upon the world.87

It probably started with the photocopier and desktop publishing; perhaps it


started with the videocassette revolution, which gave the public access to
movie-making tools and enabled every home to have its own film library. But
this creative revolution has so far culminated with the Web. To create is much
more fun and meaningful if you can share what you can create with others, and
the Web, built for collaboration within the scientific community, provides an
infrastructure for sharing the things average Americans are making in their rec

86
Kennedy, Ian. "Francis Ford Coppola on the Amateur." Everwas. February 24, 2015.
Accessed May 01, 2017. http://everwas.com/2015/02/francis-ford-coppola-on-the-amateur/.
87
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New
York University Press, 2006. 240.

rooms. Once you have a reliable system of distribution, folk culture production
begins to flourish again overnight.88

Now, the folk culture of today is vastly different from what came before. In the United States, early folk culture was "built on borrowings from various mother countries," he says, but now creative amateurs are looking to the last half-century of corporate media for inspiration, and this is causing a problem.

The issue of copyright is almost as old as the creative arts themselves.

In 1506, Albrecht Dürer sued an Italian artist named Marcantonio Raimondi

for reproducing Dürer’s woodcut series Life of the Virgin and The Great Passion; the

works included Dürer’s monogram.89 90 Now, it may seem like a slight thing, but

Dürer’s monogram was just as important as a painter’s signature assigned to a piece of

art. Dürer’s monogram was his initials, “A” and “D,” and he’d created an original

design for them: the letter A is much taller, nearly twice as tall, and is written in a

stilted, square serif font; the letter D is housed underneath and in between the legs of

the letter A; often, he placed the date of the work in a similar numerated version of the

font just above the letter A. This was Dürer's monogram, and it was just as important as

his signature.91 When the Venetian judge ruled on the lawsuit, he determined that

Dürer’s works themselves could not be protected, but his personal monogram could be

protected.92

88
Ibid., 136.
89
Kleiner, Fred S., and Helen Gardner. Gardner's Art through the Ages: The Western
Perspective. Belmont, CA: Wadsworth, 2013.
90
Koerner, Joseph Leo. The Moment of Self-portraiture in German Renaissance Art. Chicago:
University of Chicago Press, 1996.
91
Ibid. 203-223.

This was the beginning of copyright law. The triumph here is the fact that the artist’s

identity could be preserved and later judgments worked in favor of the artist, for a

time.

Now, the issue of copyright has been written about at length (and I will leave it

to others to work through the litigious history) but the current result was the creation

of a pair of creative environments: the first is the corporate environment, the second is

the amateur. And anytime someone from the amateur group strayed into the

boundaries of the corporate realm, the lawyers were dispatched and the amateur works

were extinguished.

Harvard law professor Lawrence Lessig believes the current copyright

environment is absolutely out of balance, favoring corporate interests unfairly.

Searching for examples, Lessig looks at Walt Disney. Disney, he says, built an empire

by retelling the old stories created by the Brothers Grimm and others; of course,

Disney sweetened these rather dark tales, making them more appropriate for American

audiences. Lessig explains it this way:

This is a kind of creativity. It is a creativity that we should remember and


celebrate. There are some who would say that there is no creativity except this
kind. We don’t need to go that far to recognize its importance. We could call
this “Disney creativity,” though that would be a bit misleading. It is, more
precisely, “Walt Disney creativity”—a form of expression and genius that
builds upon the culture around us and makes it something different.93

In other words, Walt Disney took existing stories, changed them for a more

contemporary audience and then presented them to the public. Lessig calls this the

process of “Remix,” and offers many examples. Disney’s examples include the

92
Ibid.
93
Lessig, Lawrence. Free Culture: The Nature and Future of Creativity. New York, N.Y.:
Penguin, 2005.

following films: Snow White, Pinocchio, Cinderella, Alice in Wonderland, Peter Pan,

Lady and the Tramp, The Jungle Book, and so forth. Someone else created all these stories but, because the copyrights for these works had expired, all of these stories

existed in the public domain, which means anyone could do anything with them. And,

at the time of Disney’s peak work, the length of copyright ownership was just 56

years.94

In 1998, the laws regarding copyright were altered significantly by the Sonny

Bono Copyright Term Extension Act, which extended corporate copyright terms from 75 to 95 years. Further, it

was companies including the Disney Company that lobbied to get this law placed into

effect. The final slight in all of this is the fact that now, because Disney has produced

films about Cinderella, Pinocchio and Peter Pan, the company holds title to protect its

revisions of these works. So, the very practice that made Walt Disney famous is now

virtually condemned by the current copyright laws.95

Lessig believes that the current copyright laws are so prohibitive that they

threaten to destroy Western culture.

And now copyright law does get in the way. Every step of producing this
digital archive of our culture infringes on the exclusive right of copyright. To
digitize a book is to copy it. To do that requires permission of the copyright
owner. The same with music, film, or any other aspect of our culture protected
by copyright. The effort to make these things available to history, or to
researchers, or to those who just want to explore, is now inhibited by a set of
rules that were written for a radically different context.96

Lessig believes the copyright laws need to be rewritten, amended to restore what he

called the “read-write” culture. As I explained earlier, read-write or RW culture is

94
Ibid.
95
Ibid.
96
Ibid.

when people consume media and then respond to it; the Internet is flush with

examples. Before the Internet, most 20th century media were “read-only” or RO

culture, where, once someone saw something on television, they had no recourse for

reaction.97 The Internet coupled with digital tools restored the RW culture and now

copyright law is stunting its development. Lessig has a list of five ways we can

improve copyright law immediately: first, deregulate amateur creativity; second,

create “clear title” or a registry that itemizes copyright ownership; third, simplify the

copyright laws for the common person; fourth, decriminalize the idea of copying

things and broaden the idea of “fair use;” and fifth, allow people to share files.98

Lessig argues that doing these things will lift the unnecessary burden of copyright law

and allow for homespun or grassroots creativity to flourish. The yield in the volume of

amateur creativity would be worth it.

Or we could take it a step further: Philosophers Michael Hardt and Antonio

Negri say that the future of the global economy is dependent upon a belief in

“common space,” and they write that copyright and patent laws are in direct conflict

with that idea and should be eradicated. To their thinking, there are three forms of

ownership: private, public and shared space, which they call “the commons.”99 And

the Internet is a critical component of that:

As Internet and software practitioners and scholars often point out, access to
the common in the network environment—common knowledge, common
codes, common communications circuits—is essential for creativity and
growth. The privatization of knowledge and code through intellectual property
97
Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New
York, NY: Penguin Books, 2009.
98
Ibid.
99
Hardt, Michael, and Antonio Negri. Commonwealth. Cambridge, MA: Belknap Press of
Harvard University Press, 2009. 3-66.

rights, they argue, thwarts production and innovation by destroying the
freedom of the common. It is important to see that from the standpoint of the
common, the standard narrative of economic freedom is completely inverted.
According to the narrative, private property is the locus of freedom (as well as
efficiency, discipline, and innovation) that stands against public control. Now
instead the common is the locus of freedom and innovation—free access, free
use, free expression, free interaction—that stands against private control, that
is, the control exerted by private property, its legal structures, and its market
forces. Freedom in this context can only be freedom of the common.100

Their entire argument in Commonwealth rests on a series of

Marxist ideas, which would be hard to sell here in the United States. In addition to

eradicating copyright laws, Hardt and Negri also argue for relaxed borders and basic

living allowances; basically, theirs is a wholesale rejection of capitalism.101

That aside, Lawrence Lessig has already begun to take action in his war to

correct copyright. In addition to legal advice and counsel, he is also the architect of the

“Creative Commons,” an online sharing platform that allows users to gather content

from other members and craft their own projects. The Creative Commons was

launched in 2001 and continues to grow and thrive as a storehouse of “common”

content. The purpose of the Creative Commons is to take the lawyers and the courts

and the Congress out of the creative equation; for the cost of a few words crediting the source of a piece of music, a video producer might add that sound

to his video work. After all, this is the essence of amateurism. It’s a lovely idea, and

it’s gaining some momentum, but it still flies in the face of the for-profit corporate

culture that continues to dominate American media.

100
Ibid. 282.
101
Ibid.

The Digital Disruption

Returning again to Gillian Doyle’s digital economic model: she defines the

cycle as production, packaging and delivery. Looking at this model, one can determine

that the first two phases—production and packaging—are revenue out; while the last

phase—delivery—is both revenue out and revenue in (with payment upon delivery).

In the summaries above, I move from the corporate model to the state model to the

amateur model, hoping to make sense of the financial troubles plaguing media

development. Looking at Joseph Schumpeter’s ideas about “creative destruction” or

the prospect that new technologies create new economies that destroy previous

technologies, one must apply his theory to the profit-based (or corporate-based)

model. The formula for such a calculation is as follows: (x − y) / x, where x represents the

old technology and y represents the new technology.102 Reading through the formula,

one will see that the economics here are to take the value of the old technology,

subtract the cost of the new technology and divide by the value of the old; the result is

a percentage representing the cost benefit.
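To make the arithmetic concrete, here is a minimal sketch of the calculation in Python; the function name and the sample figures are my own illustration, not Schumpeter's or Komlos's notation:

    def creative_destruction_benefit(old_value, new_cost):
        """Cost benefit of adopting a new technology, per the formula
        (x - y) / x: x is the value of the old technology, y is the
        cost of the new one; the result is a percentage."""
        return (old_value - new_cost) / old_value

    # Hypothetical example: an old technology valued at $100 replaced
    # by a new one costing $40 yields a 60-percent cost benefit.
    print(f"{creative_destruction_benefit(100, 40):.0%}")  # prints 60%
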

Let’s apply this formula to the newspaper as a model for media. Turning again

to Doyle’s economic cycle, one must apply the creative destruction formula to each of

the three phases, and the following variables must be considered: With regard to the first phase—production—one must assume that the operating costs for a digital newsroom are probably similar to those of a traditional news environment. With regard to phase two—packaging—the real variable is the value gained from eliminating the printing operation; specifically, by eliminating the press run, the news

102
Komlos, John. "Has Creative Destruction Become More Destructive." NBER Working
Paper Series, August 2014, 3-5.

group subtracts the wholesale expenses of paper, ink, printing operations, personnel and so forth. Unfortunately, with some minor exceptions, most legacy media groups continue to produce a tangible print organ, and that fact destroys the cost savings one could gain here. Finally, with regard to the third phase—delivery—by eliminating the expense of moving newspapers from the printing house to the consumer, the media group can save millions in annual distribution costs. The downside is that by transforming the product from a consumable, tangible good to an ethereal digital product, the audience's willingness to pay for it also diminishes.

This shift in the economies of scale and revenue is certainly tricky: Legacy newspaper groups are having a difficult time dropping the traditional publishing model and replacing it with a modern digital one simply because the revenues earned by digital display advertising are far less than those earned by display advertising in the analog, newsprint counterparts.

In the Pew Research Center’s “State of the News Media 2016” study,

researchers reported that The New York Times print edition earned $846 million in

advertising and circulation revenues for 2015; while, during this same fiscal year, The

New York Times digital division earned $198 million.103

Armed with these figures, one can place those numbers into the "creative destruction" formula, (x − y) / x; this creates the following equation: (846 − 198) / 846, which

equates to a 77-percent revenue differential between the two media platforms; in other

words, to make the conversion from print to digital, The New York Times would have
103
Mitchell, Amy, and Jesse Holcomb. "State of the News Media 2016." Pew Research
Center's Journalism Project. June 15, 2016. Accessed July 23, 2017.
http://www.journalism.org/2016/06/15/state-of-the-news-media-
2016/?utm_content=buffer6871f&utm_medium=social&utm_source=twitter.com&utm_campaign=buff
er.

to cut its operating expenses by 77-percent, which means culling newsroom costs, firing reporters and editors, and capitalizing on the economic benefits of digital publishing and digital delivery; or it would need to grow its digital revenues more than fourfold to replace the lost print revenue; or, finally, it could work to diminish operating expenses as it increases revenues.
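Plugging the Pew figures into the same calculation reproduces the differential just described; this is a sketch of my own, with print revenue standing in for the old technology (x) and digital revenue for the new (y):

    # The New York Times, fiscal 2015, per Pew's "State of the News Media 2016"
    print_revenue = 846    # print advertising + circulation, in $ millions
    digital_revenue = 198  # digital division, in $ millions

    differential = (print_revenue - digital_revenue) / print_revenue
    print(f"Revenue differential: {differential:.0%}")  # roughly 77%

    # Growth factor digital revenues would need to replace print entirely:
    print(f"Growth factor: {print_revenue / digital_revenue:.1f}x")  # about 4.3x
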

This economic disparity is called “the digital disruption” and one can begin to

understand the transitional troubles plaguing the legacy news community. The Pew

Research Center, which does studies of American media models annually, offered this

explanation in its 2017 report:

Newspapers are a critical part of the American news landscape, but they have
been hard hit as more and more Americans consume news digitally. The
industry’s financial fortunes and subscriber base have been in decline since the
early 2000s, even as website audience traffic has grown for many.104

The study goes on to explain that annual advertising revenues for legacy newspaper

companies have dropped steadily since a peak in 2002, reducing them to where they

were in the mid-1980s; and there appears to be nothing slowing this decline.

Following this trend, the number of newsroom employees has also dropped from a

high near 70,000 to 41,400 in 2015; that’s a decrease of 37-percent.105

Local television isn’t doing much better. During the last decade, local

television audiences for late evening news have dropped by 31-percent and early

evening news viewership has diminished by 19-percent. However, these stations did

far better financially due, in part, to political advertising during the mid-term

104
Barthel, Michael. "Newspapers Fact Sheet." Pew Research Center's Journalism Project.
June 01, 2017. Accessed July 22, 2017. http://www.journalism.org/fact-sheet/newspapers/.
105
Ibid.

Congressional (2014) and Presidential elections (2016).106 Overall, it doesn’t look like

the for-profit model is working very well for legacy newspaper companies as we move

through the so-called "digital disruption." Television is faring better, at least

economically, but the news content on local television can be spotty and, as Neil

Postman reminds us, vacant.

For newspaper publishing to survive, a series of things might be attempted:

First, newspapers must find a better way to get audiences to look for and pay for

digital news content (getting news off the web browser is a key component of that);

second, newspapers must reinvent their relationship with the digital audience, setting

aside many of the anachronisms these newsrooms brought with them from the print

model; finally, newspapers need to consider better ways to integrate various media to

tell a multimedia story.

Barring these changes, the alternative is to forgo the standard capitalist

business model and embrace either private-public models (like Public Broadcasting)

or government sponsorship (like the BBC); or forge a new model, one founded in

Marxism that includes a wholesale shift away from capitalism and towards a global

culture that includes base-line salaries, copyright-free content, amateur content,

common sharing and so forth.

Summary of Digital Economies

For newspapers and other print-based storytellers, making the leap into the realm of the digital would have to be a pronounced leap of faith. The economic

106
Matsa, Katerina Eva. "Local TV News Fact Sheet." Pew Research Center's Journalism
Project. July 13, 2017. Accessed July 22, 2017. http://www.journalism.org/fact-sheet/local-tv-news/.

factors lost and gained in the shift from a print-based operation to a digital one are pronounced and desperate, and the transition would seem nearly impossible. In fact, given these current publishing choices, it would simply be easier to start a digital operation from scratch, instead of attempting to shift the protocols of a print-based operation to a digital base.

As for the theory involved, I’d like to return to Robert Logan’s observations

about the Digital Orality. As I wrote earlier, Primary Orality is the spoken word; Secondary Orality is oral media observed through the literal lens, which brings us to the Digital Orality. The Digital Orality, as Logan explains, is about transmitting text (or literal media) in a broadcast format. Radio is about transmitting oral media over a broadcast network; television is about transmitting video over a broadcast network; the Internet is about transmitting text (literal media) over a broadcast network. It seems like a slight thing, but this new epoch in media communication rounds out our understanding of the digitalization of media. All the pieces are here; we just have to determine how they fit together.

The Digital Age

So here we are just 20 years into the Digital Incunabula and during this

transition, we are seeing glimpses of the future. These experiments in digital

storytelling are only in their infancy and our experiments with multimedia production

are still just getting launched. Looking forward, several things need to change: first, the tyranny of copyright enforcement needs to be reckoned with; second, a vision for digital narrative needs to be defined; third, a standard for conveyance and

reception must be established; and finally, an audience must be formed, ready for a new form of media consumption. None of these things will be easy.

Setting the copyright issues aside, a true multimedia narrative form must be

developed. Right now, there is a strong dependence on linear, chronicle-like storytelling simply because chronology is easily adaptable. However, it is absolutely possible for a nonlinear form to be developed. Imagine, for example, shattering the "Wall Street Journal Style" into pieces and setting those pieces alongside each other in an organized way that encourages the audience to select the order in which it reviews the story. Narrative

forms of this nature have already been presented but only experimentally.

On the issue of conveyance, eventually an advanced consumer electronic

device will present itself which will allow audiences to consume information in wildly

intricate ways. Information will be available on demand and a catalog of stories will be housed in a digital library, which will be accessible by all. Addressing the issue of

hardware directly, right now, we cling to our smart phone devices, which we carry in a

pocket, or holster or bag. To review information, we must retrieve the device and raise it to eye level. Imagine now the next series of devices, which bring that information

closer to the human mind: augmented reality will likely place this data on eyeglasses,

or contact lenses, which presents this information more seamlessly than the smart

phone model. The next phase might be a more intrusive augmentation, including data

implants that join the human intelligence with a digital intelligence. If this happens, a whole new world of human experience would be in the offing.

But before I get into these things, let's consider the dangers that could lie ahead.

Chapter 7

Dystopia

In the "Foreword" of his book Amusing Ourselves to Death, communication

theorist Neil Postman reviews the dystopian literature looking for clues that might

reveal why the American media culture is the way it is. He chooses to look at George

Orwell's 1984 and Aldous Huxley's Brave New World. The plot lines of the two books diverge: Orwell creates a society where the government is the ultimate authority, a repressive totalitarian regime that uses television monitors to admonish and

control its population. In this dystopian society, it is the government that is creating

the conditions of inhumanity. In Brave New World, Huxley creates a society of

absolute entitlement, where all citizens’ earthly desires are provided freely and the end

result is a self-medicating human society bent on total ambivalence.107 Postman sums

it up this way:

Contrary to common belief even among the educated, Huxley and Orwell did
not prophesy the same thing. Orwell warns that we will be overcome by an
externally imposed oppression. But in Huxley’s vision, no Big Brother is
required to deprive people of their autonomy, maturity and history. As he saw
it, people will come to love their oppression, to adore the technologies that
undo their capacities to think.108

Postman writes that Orwell’s vision was based on the human capacity to hate and that

Huxley’s vision was based on the human capacity to love and that too much of one or

the other will ruin us. He concludes the section by suggesting that Huxley, not Orwell,

was right.109

107
Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business.
New York: Viking, 1985. vii-xvi.
108
Ibid., xix.

Postman goes on to explain that there is a pronounced blurring of the lines

between what is information and what is entertainment and the audience is losing track

of both. The news industry isn’t doing much to help itself either.

Consider the crossover appearances of cable news anchor Anderson Cooper, who is with CNN and 60 Minutes and is also the son of socialite Gloria Vanderbilt

and the great-great-great-grandson of robber baron Cornelius Vanderbilt. In 2001,

Cooper joined CNN after doing two seasons of a reality show called The Mole. In

2003, he was given his own show, Anderson Cooper 360°. During the first season of

the show, on Cooper's birthday, his mother, Gloria Vanderbilt, surprised the young

cable news anchor by appearing on set and walking towards him with a birthday cake,

candles lit and all. This was certainly not the news moment people tuned in looking

for. In 2016, Cooper allowed his personal relationship with his mother to reemerge

when they appeared together in the documentary Nothing Left Unsaid: Gloria

Vanderbilt & Anderson Cooper. Was it news? No! It doesn’t for a moment approach

the tenets of news. Is it entertainment? Absolutely. Should a cable news anchor searching for credibility appear in vehicles of this nature, or in the myriad films where he appears as himself, reading someone else's words? Absolutely not. Clearly,

the lines between news and entertainment have been smeared… to a point where the

idea of the line is a fiction and television news has been reduced to sound bites and

salacious accusations interspersed between elements of real global trauma.

The terrifying thing here is the fact that, absent Neil Postman’s accusations, the

viewing public seems to be desperately unaware of the infotainment-as-news culture

and it seems to only be getting worse.


109
Ibid., xix.

Adding to the malaise of information is the corporate desire to cut the expense

of media production, and some are looking to computers to provide cheaper media

alternatives. At Google, the Artificial Intelligence group is called Magenta, and they

have a series of computer AI programs in development.110 The purpose of AI is to

train computers to learn from media patterns and structures, believing that ultimately

the computer will begin creating media on its own. One of its more promising projects

is the AI Duet, a music program where the user punches a few notes into a

keyboard and the computer responds with note phrases of its own.111 It’s clear that the

software is in the early stages of development but projects like these and others have

computers crafting music. The results have been basic and technically soulless, but the

technology is growing and evolving and one day, music could be entirely computer

generated.112 What’s more is the fact that Google and other companies are applying

the same technologies to storytelling.
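Magenta's AI Duet itself is built on trained neural sequence models, but the call-and-response idea can be sketched far more simply. The toy below is my own illustration, in no way Magenta's code: it "responds" to a phrase by sampling the note-to-note transitions it just heard.

    import random
    from collections import defaultdict

    def respond(phrase, length=4):
        """Toy call-and-response: record which note follows which in the
        user's phrase, then improvise a short reply by walking those
        transitions. (Illustration only; AI Duet uses neural networks.)"""
        transitions = defaultdict(list)
        for a, b in zip(phrase, phrase[1:]):
            transitions[a].append(b)
        note = random.choice(phrase)
        reply = [note]
        for _ in range(length - 1):
            note = random.choice(transitions.get(note, phrase))
            reply.append(note)
        return reply

    print(respond(["C4", "E4", "G4", "E4", "C4"]))  # e.g. ['G4', 'E4', 'C4', 'E4']
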

In 2011, The New York Times published the following sentences:

“WISCONSIN appears to be in the driver’s seat en route to a win, as it leads 51-10

after the third quarter. Wisconsin added to its lead when Russell Wilson found Jacob

Pedersen for an eight-yard touchdown to make the score 44-3….” With 15 minutes

110
"Magenta." Magenta. Accessed March 12, 2017. https://magenta.tensorflow.org/.
111
"Learning from A.I. Duet." Learning from A.I. Duet. February 16, 2017. Accessed March
12, 2017. https://magenta.tensorflow.org/2017/02/16/ai-duet/.
112
McFarland, Matt. "Analysis | Google’s Computers Are Creating Songs. Making Music May
Never Be the Same." The Washington Post. June 06, 2016. Accessed March 12, 2017.
https://www.washingtonpost.com/news/innovations/wp/2016/06/06/googles-computers-are-creating-
songs-making-music-may-never-be-the-same/?utm_term=.b92286993252.

left in the Wisconsin-UNLV football game, a computer made these observations and

wrote this news lede in just 60 seconds.113

The news copy was the result of an AI project being produced by an Evanston,

Illinois start-up called Narrative Science, and the implications should be

chilling. Narrative Science is looking to teach computers how to generate news copy

and, five years after The New York Times profile, the company is still going strong.

The Guardian’s Tim Adams visited the company in 2015, and made these

observations in a story about the technology:

It’s not deathless prose—at least not yet; the machines are still “learning” day
by day how to write effectively—but its already good enough to replace the
jobs once done by wire reporters. Narrative Science’s computers provide daily
market reports for Forbes as well as sports reports from the Big Ten sports
network. Automated Insights does all the data-based stock reports for the
AP.114

For now, the work is basic: the ‘bots are creating news traditionally written by wire

reporters, but the technology is growing day-by-day and soon, the human print

journalist may be a thing of the past. Narrative Science co-founder Kris Hammond

goes as far as to predict that by 2020, a computer-generated news story will win the

Pulitzer Prize and by 2030, 90 percent of text-based journalism will be computer

generated.115
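Narrative Science's platform is proprietary, but the basic mechanism behind early robo-journalism, pouring structured game data into canned sentence frames, can be sketched in a few lines. The function below is my own toy, not the company's software:

    def football_lede(leader, trailer, lead_pts, trail_pts, quarter):
        """Toy data-to-text generation: pick a framing based on the score,
        then fill a sentence template with the structured game data.
        (Illustration only; commercial systems are far more elaborate.)"""
        margin = lead_pts - trail_pts
        if margin > 21:
            framing = "appears to be in the driver's seat en route to a win"
        elif margin > 7:
            framing = "holds a comfortable lead"
        else:
            framing = "clings to a narrow lead"
        return (f"{leader.upper()} {framing}, as it leads {lead_pts}-{trail_pts} "
                f"after the {quarter} quarter against {trailer}.")

    print(football_lede("Wisconsin", "UNLV", 51, 10, "third"))
    # WISCONSIN appears to be in the driver's seat en route to a win,
    # as it leads 51-10 after the third quarter against UNLV.
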

113
Lohr, Steve. "In Case You Wondered, a Real Human Wrote This Column." The New York
Times. September 10, 2011. Accessed March 12, 2017.
http://www.nytimes.com/2011/09/11/business/computer-generated-articles-are-gaining-traction.html.
114
Adams, Tim. "And the Pulitzer Goes To... a Computer." The Observer. June 28, 2015.
Accessed March 12, 2017. https://www.theguardian.com/technology/2015/jun/28/computer-writing-
journalism-artificial-intelligence.
115
Ibid.

Digital News Anchors

And then there’s the trend towards computer-generated actors. In 2013, actor

Robin Wright starred as herself in a film called The Congress, which is about an aging

actress who sells her likeness to a movie studio, which then makes her likeness

into an ageless international star. The story speaks about the issues of identity and

corporate appropriation and the fact that once the star was created, the actress became

obsolete. The film itself starts out with Robin Wright negotiating the details of her contract and selling her likeness; then the film leaps forward 20 years and shows the aged Wright being invited as the "guest of honor" to a meeting of animated celebrities. The second half of the film is almost entirely animation, much like the Beatles' Yellow Submarine or Who Framed Roger Rabbit.116

In 2016, science fiction became science fact. Although the actor Peter Cushing

died in 1994, his likeness appears in the Star Wars spinoff Rogue One, which was a

movie blockbuster that year. To do this, the producers used a computer-generated

image of Cushing—lifted from his original Star Wars appearance—and laid it over the performance of an actor who moved through his role on set.117 The

effect was limited but it does represent a wholesale change in the actor’s relationship

with film. Not yet, but soon, it will be possible to create a computer-generated image

of a human being and animate that person to do whatever is necessary to tell the story.

Soon, and possibly forever, the television and cable news anchor will be replaced by

116
O'Falt, Chris. "Robin Wright Digitally Preserved in Trippy New Film." The Hollywood
Reporter. July 16, 2014. Accessed March 12, 2017. http://www.hollywoodreporter.com/news/robin-
wright-digitally-preserved-trippy-718682.
117
Epstein, Adam. ""Rogue One" Features a Computer-generated Character More
Controversial than Jar Jar Binks." Quartz. December 20, 2016. Accessed March 12, 2017.
https://qz.com/868278/rogue-one-a-star-wars-story-features-a-controversial-cg-peter-cushing/.

digital incarnations of attractive computer-generated images (remember Max

Headroom?), which will read computer-generated scripts about real human events to

an audience that may not know—or worse, may not care to know—the difference.

But this is merely the beginning of my dystopian vision of the future.

Most nonfiction stories are news stories and most news stories are either

“breaking news” or “general news;” the remaining form is the most complex, the

feature story. Feature stories have found their way into newspapers and television and

they share a common design: one form is about a singular person and is produced as a "feature profile;" the other form is about a major social issue but features a "face," or someone who is influenced by the social issue, and these stories are written in the "Wall Street Journal Style."

Given the technical advances that have computers generating breaking news

and general news stories, I believe the last bastion for human achievement in long-

form non-fiction storytelling is in the feature form. The blessing of the feature story is

the fact that it has a long freshness date; the stories "The Crossing," "Snow Fall," and "The Jockey" are still relevant years after they were published. Because timeliness is not a factor in these types of stories, the stories themselves must be

interesting and that means taking mundane facts and finding creative ways to make

them more attractive to the audience; this is the essence of the creative form.

Now, my dystopian vision would be of a future American culture where

breaking news and general news stories are generated entirely by computer

algorithms. This human-free-generated content would be posted in volume and

available over a variety of media forms including text, radio, television and film.

Under this system, news would merely be a commodity generating revenues for

faceless corporations. In time, the volume of content would be a deluge of information

swamping Western culture with ephemera; this volume of ‘information noise’ would

distract the public from the events with direct influence over the quality of their lives,

and companion entertainment media would anesthetize the public’s interest in their

own affairs.

So, like Orwell's 1984, there will be a Big Brother, but the leader of my dystopian future will be a computer algorithm—imagine HAL 9000 running things—

operating from behind the legal curtain of a massive corporatocracy and humans, in

this society, will be like free-roaming animals in a human petting zoo. To keep the

humans in line—drawing my inspiration from Huxley’s Brave New World—the

corporatocracy will medicate the public with visual playthings—video news, video

stories, video documentaries—all dripping with chauvinism and calls for cultural

loyalty.

The Treacherous Turn

And then there is the development of Artificial Intelligence: This is a software

revolution where computer systems are programmed to learn on their own. In doing

so, through trial and error, the computer becomes capable of more-complex processes.

So, instead of simply answering simple yes-or-no questions, the computer develops

the ability to reason and calculate; the computer builds an artificial intuition and

begins solving problems well beyond the tasks humans lay before it. Ultimately, the

computer software evolves to the point of “being,” or—considering the philosophy of

Martin Heidegger—the computer develops its own “Dasein” (or sense “of being”)

establishing a consciousness or sentient presence of self. From here, the AI theorists

believe that the computer will evolve further, developing an intellect well beyond the

cognitive level of their human counterparts. When the computers do this, when they become smarter than humans, when they learn the ability to reason and problem-solve on their own and, in doing so, outperform the humans who built them, this is the tipping point in the evolution of computer-human relations… something called the "Singularity," and many theorists believe this will be the end of mankind.

Statistician I. J. Good was the theorist who developed the idea of “Singularity”

although he never used the phrase. He called this moment in AI development “an

intelligence explosion,” or the moment when software intellect leaps aggressively

upward, leaving organic human intelligence well behind. He put it this way:

Let an ultraintelligent machine be defined as a machine that can far surpass all
the intellectual activities of any man however clever. Since the design of
machines is one of these intellectual activities, an ultraintelligent machine
could design even better machines; there would then unquestionably be an
“intelligence explosion,” and the intelligence of man would be left far behind.
Thus the first ultraintelligent machine is the last invention that man need ever
make, provided that the machine is docile enough to tell us how to keep it
under control.118

To his way of thinking, Good believed the development of a super-intellect was a

statistical certainty, and all mankind could do was plan for it and attempt to control it.

The Singularity, he envisioned, would arrive fast unless provisions were made to

slow, taper and control its development. Absent that, humans would be reduced to lab

animals, which would be controlled by machines.

This is the worst of all scenarios.

118
Barrat, James. Our Final Invention: Artificial Intelligence and the End of the Human Era.
New York, NY: Thomas Dunne Books, 2015. 104.

Apple co-founder Steve Wozniak isn't so worried. In fact, he thinks AI will be okay, believing that computers will foster a nurturing relationship with humans, saying,

“They’re going to be smarter than us and if they’re smarter than us then they’ll realize

they need us. We want to be the family pet and be taken care of all the time.”119

But other tech pioneers aren’t so sure.

Tech giants Elon Musk and Bill Gates and scientist Stephen Hawking have all warned

that AI could signal the end times for humanity. To that point, Musk has invested

$10 million of his own money to create an AI monitoring firm designed to track a

technology he has described as “our biggest existential threat”.120

Musk specifically believes that software engineers are absolutely designing the

next “apex predator,” a species with the ability to dominate the planet. Humans are

currently the leading “apex predator” but imagine a generation of self-programming

robots who can develop their own belief systems and virtues; and then consider the

next step, one where these beings realize that they are superior to humans and then a

mass extermination begins. Musk, Gates and Hawking all believe this future is closer

than we realize.

In his book The Singularity is Near, futurist Ray Kurzweil suggests that there

are six evolutionary epochs leading to the next stage in the evolutionary food chain:

Evolution is a process of creating patterns of increasing order. I’ll discuss the


concepts of order in the next chapter; the emphasis in this section is on the
concept of patterns. I believe that it’s the evolution of patterns that constitutes
the ultimate story of our world. Evolution works through indirection: each

119
Gibbs, Samuel. "Apple Co-founder Steve Wozniak Says Humans Will Be Robots' Pets."
The Guardian. June 25, 2015. Accessed March 12, 2017.
https://www.theguardian.com/technology/2015/jun/25/apple-co-founder-steve-wozniak-says-humans-
will-be-robots-pets.
120
Ibid.

stage or epoch uses the information-processing methods of the previous epoch
to create the next. I conceptualize the history of evolution—both biological and
technological—as occurring in six epochs. As we will discuss, the Singularity
will begin with Epoch Five and will spread from Earth to the rest of the
universe in Epoch six.121

Those six epochs are as follows: physics and chemistry; biology and information in DNA; brain research; technology design and development; the merger of technology with human intelligence; and finally epoch six, which he describes as "[when] the universe wakes

up.”122

As the title of his book suggests, he believes the Singularity is near and that it

will actually begin when we begin to fuse human intelligence with artificial

intelligence during epoch five. For now, we are in epoch four where we are exploring

technology and the thinking computer is an integral part of that development.

It will result from the merger of the vast knowledge embedded in our own
brains with the vastly greater capacity, speed, and knowledge-sharing ability of
our technology. The fifth epoch will enable our human-machine civilization to
transcend the human brain's limitations of a mere hundred trillion extremely
slow connections.

The Singularity will allow us to overcome age-old human problems and vastly
amplify human creativity. We will preserve and enhance the intelligence that
evolution has bestowed on us while overcoming the profound limitations of
biological evolution. But the Singularity will also amplify the ability to act on
our destructive inclinations, so its full story has not yet been written.123

To say the least, Kurzweil is an optimist. His book is nearly breathless with

anticipation of a future where man and computer work together to advance the future

of humankind. Once achieved, the human-technological mind will accelerate global

121
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. London:
Duckworth, 2016. 14.
122
Ibid.
123
Ibid., 408.

intellect at an alarming rate that will culminate with the universe being entirely

saturated with human intelligence.124

Or it could go entirely wrong.

In his book Superintelligence, philosophy professor Nick Bostrom warns that if we

don’t manage our technology effectively enough, computers may push mankind aside:

Taken together, these three points thus indicate that the first superintelligence
may shape the future of Earth-originating life, could easily have non-
anthropomorphic final goals, and would likely have instrumental reasons to
pursue open-ended resource acquisition. If we now reflect that human beings
consist of useful resources (such as conveniently located atoms) and that we
depend for our survival and flourishing on many more local resources, we can
see that the outcome could easily be one in which humanity quickly becomes
extinct.125

Bostrom goes on to analyze ways we might sustain our superiority over Artificial Intelligence, including software programming designed to make the AI emotionally dependent on humans for positive reinforcement; but he also warns that the computers, armed with a complex definition of self, could learn to manipulate the intelligence metrics until the computer believes the timing is right to take over. It's all

very dark and sinister but worthy of notice and concern.

Further aggravating things is the idea that humans can be armed with digital-

analog electrodes that enhance human cognitive performance. Elon Musk is currently

partnered with a company called Neuralink, which is developing a “neural lace”

device that allows for a brain-to-computer link.126 In an article on the subject, reporter

Nick Statt quoted Musk saying: “Over time I think we will probably see a closer
124
Ibid.
125
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University
Press, 2016. 141.
126
Statt, Nick. "Elon Musk Launches Neuralink, a Venture to Merge the Human Brain with
AI." The Verge. March 27, 2017. Accessed April 21, 2017.

merger of biological intelligence and digital intelligence… it's mostly about the

bandwidth, the speed of the connection between your brain and the digital version of

yourself, particularly output.”127

If the technology exists and Musk successfully rolls it out before the public,

this neural lace device has the ability to link the human consciousness to computers

and, ultimately, the Internet; doing so would forge a global consciousness. After all,

the Internet was designed as a communication device, not as a repository for

information. Hardwiring human consciousness to this digitized world has the ability to

enhance the human intellectual development on a scale that’s entirely unfathomable.

So when is this going to happen? Kurzweil actually attempts to set a timeline

for the Singularity, believing that computer intelligence will reach that tipping point by

2045.

Despite the clear predominance of nonbiological intelligence by the mid-


2040s, ours will still be a human civilization…. Returning to the limits of
computation according to physics, the estimates above were expressed in terms
of laptop-size computers because that is a familiar form factor today. By the
second decade of this century, however, most computing will not be organized
in such rectangular devices but will be highly distributed throughout the
environment. Computing will be everywhere: in the walls, in our furniture, in
our clothing, and in our bodies and brains.128

Remember that Kurzweil is our optimist and he predicts that artificial intelligence and

biological intelligence will fuse together collaboratively. Also, it’s worth noting that

Kurzweil is echoing what Eric Schmidt said about the Internet “disappearing.”

127
Ibid.
128
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. London:
Duckworth, 2016. 136.

Turning again to Bostrom, this integration of the human consciousness with

the digital realm also places the human being at the mercy of a superintelligence that

could manipulate, control and/or exterminate human life on the planet. Something as

simple as manipulating the human body to inject toxic amounts of insulin or

adrenaline or dopamine into the host’s body could kill one person or millions.

Bostrom calls this “The Treacherous Turn” and describes a research culture that places

the AI devices in a “sandbox” where they are observed in a controlled environment for

symptoms trending towards Singularity:

The flaw in this idea is that behaving nicely while in the box is a convergent
instrumental goal for a friendly and unfriendly AI alike. An unfriendly AI of
sufficient intelligence realizes that its unfriendly final goals will be best
realized if it behaves in a friendly manner initially, so that it will be let out of
the box. It will only start behaving in a way that reveals its unfriendly nature
when it no longer matters whether we find out; that is, when the AI is strong
enough that human opposition is ineffectual.129

Putting all these pieces together, things could turn out this way: In an effort to enhance

human cognitive abilities, a commercial enterprise manufactures a neurological

networking product that hardwires the human mind to the digital realm of computers

and the Internet. Inside this connectivity, humans will have the ability to observe and

participate in simulated life enterprises on a scale exponentially larger and more

integrated than the ocular experience created when a user wears a virtual reality visor

to play a first-person video game. With virtual reality, the user is still observing an

experience through the traditional human senses of sight and sound; the neurological

webbing, on the other hand, bypasses the human senses, placing the experience inside

the sensory receptors located inside the brain. This complex integration of human

129
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University
Press, 2016. 142.

tissue with digital impulses opens wide a variety of possibilities for cognitive

experience.

In his book Brave New World, Aldous Huxley writes about a cinema

experience called “the feelies,” which is where viewers sit in a dark theater and, by

placing their fingers atop a networking device situated in the armrest of the chair, they

can feel the action taking place on the screen. A whole host of human exchanges could

be transmitted through such an integrated connection: consider feeling the

experiences of the soldiers landing at Omaha Beach on D-Day, or the sensation of

free-falling as someone skydives, or the sensuous experience of deep sea diving along

the Great Barrier Reef. A whole new genre of empathetic experience could be

developed and the human mind could become a neurological playground.

Kurzweil writes briefly about how this will work. He calls the human hosting

the experience an “experience beamer” or someone who lives the experience and then

broadcasts it to others who wish to share the cognitive event.

“Experience beamers” will send the entire flow of their sensory experiences as
well as the neurological correlates of their emotional reactions out onto the
Web, just as people today beam their bedroom images from their Web cams. A
popular pastime will be to plug into someone else’s sensory-emotional beam
and experience what it’s like to be that person, à la the premise of the movie
Being John Malkovich. There will also be a vast selection of archived
experiences to choose from, with virtual-experience design another new art
form.130

Now, as crazy as this idea might seem, I’d like to draw some comparisons. After the

invention of the printing press, the first generation of creative writers, and later, the

Grub Street "hack" writers began documenting their observations for their respective

audiences. Their medium—then, as now—was the printed word (and later

130
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. London:
Duckworth, 2016. 316.

audio and video incarnations) but in the world Kurzweil is describing, the medium is

digital or, more directly, electronic impulses very similar to the organic jolts of

electrical current pulsing through the human mind today. This begs the question: Will we have the ability to share sensory experience? One cannot help but wonder

and follow Kurzweil’s optimism.

However, as we are considering the prospect of plugging our minds into this

neurological playground, there are dangers to weigh.

neurological transmitters are penetrating the human mind, a superintelligence could

also emerge with unfriendly intentions and take over the Internet and use the neural

lace integration pathway as a conduit to control the global human community.

This 'apex predator' with superintelligence could do lasting

harm on a global scale. Bostrom puts it this way:

Superintelligence is a challenge for which we are not ready now and will not
be ready for a long time. We have little idea when the detonation will occur,
though if we hold the device to our ear we can hear a faint ticking sound.

For a child with an undetonated bomb in its hands, a sensible thing to do would
be to put it down gently, quickly back out of the room, and contact the nearest
adult. Yet what we have here is not one child but many, each with access to an
independent trigger mechanism. The chances that we will all find the sense to
put down the dangerous stuff seems almost negligible. Some little idiot is
bound to press the ignite button just to see what happens.131

Returning again to the science fiction for possible clues, I’d like to talk about The

Matrix Trilogy. Launched in 1999, this three-film series created a story line about a post-modern-day Earth where machines have co-opted the planet and have turned

humans into an energy food source. To placate the humans who are housed in bio-

farm cages (like larvae), each human is hardwired into a media network that

131
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University
Press, 2016. 319.

simulates 20th century Earth; in other words, the humans are anesthetized with the

mental images of everyday life while in fact their physical bodies are contained in

biological cages where they are harvested for their bio-energy. The media network is

called “the matrix” and it is the conduit that tricks the human consciousness into

believing that it exists in a physical world, when in fact the body is trapped and encaged. The overarching message throughout the film series is the fact that humans could be prisoners of a computer-dominated world, with media used as

psychological shackles; the heroes are the humans who escape the trappings of the

matrix.

As for the overarching concern for the powers of AI, consider the following

quote from Agent Smith, one of the digital bad guys assigned to help exterminate

humans who have escaped their containment:

Have you ever stood and stared at it, marveled at its beauty, its genius?

Billions of people just living out their lives, oblivious. Did you know that the
first Matrix was designed to be a perfect human world, where none suffered,
where everyone would be happy?

It was a disaster. No one would accept the program, entire crops were lost.
Some believed we lacked the programming language to describe your perfect
world, but I believe that, as a species, human beings define their reality
through misery and suffering. The perfect world was a dream that your
primitive cerebrum kept trying to wake up from.

Which is why the Matrix was redesigned to this, the peak of your civilization. I
say your civilization, because as soon as we started thinking for you, it really
became our civilization, which is of course what this is all about. Evolution,
Morpheus, evolution. Like the dinosaur. Look out that window. You've had
your time. The future is our world, Morpheus. The future is our time.132

132
"Quotes from The Matrix." Matrix Wiki. Accessed April 21, 2017.
http://matrix.wikia.com/wiki/Quotes_from_The_Matrix.

So, like Aldous Huxley, my dystopian vision is this: Imagine a world where human

beings use technology to digitally enhance the cognitive powers of the organic mind.

In doing so, a new form of empathic media is developed that allows one human being

to feel the physical sensations experienced by another human being. As a conduit to

experience, a digital marketplace—a neurological extension of the Internet—is

developed where human animals exchange life experiences; but, concurrently, an

unfriendly Artificial Intelligence with super-intellectual powers subsumes control of

the network and injects a digitized virus over the media network and into the host

minds of the organic human participants. In doing so, this digital intellect either

enslaves humankind or exterminates the human race. Clearly, there are risks in the

process of integrating organic intelligence with synthetic intelligence.

Beyond unchecked technological advances, another factor that could lead us

into a dystopian future is a wholesale collapse of capitalism and a shift towards

something more desperate. Automation is a key component in the changes that some

futurists see ahead of us, and their visions are not very cheery.

Crumbling Capitalism

At the heart of all these matters is the underlying fact that capitalism as a

market solution is crumbling. In the United States, the economy has lumbered along at a growth rate of 3-percent or less; unemployment is below 5-percent, but employee wages have been flat since the Reagan Administration (1981-1989), and the nationwide distribution of wealth favors the wealthiest 1-percent. By all accounts, we are sliding

back into a feudal system that places all the power in the hands of the aristocracy

while the bourgeoisie and the proletariat classes struggle to understand. Capitalism, it

seems, only favors the poor when the national economy is growing robustly, but when

the economy stagnates, it’s the poor and middle classes who suffer.

“Bourgeois society stands at the crossroads, either transition to socialism or regression into barbarism,” wrote Rosa Luxemburg about the situation, attributing the idea to Friedrich Engels.

There are a variety of reasons the U.S. economy is slipping backwards towards feudalism. Key among them are a pervasive greed among the aristocracy, a commercialized educational system, a smoldering distrust of the international community, the collapse of the industrial base and an accelerated rise of automation.

Each of these things is affecting the shape of the American economy, but this last one—automation—which could be perceived as both a positive and a negative, may be the leading catalyst. Automation is the process of replacing human labor with mechanical, robotic labor. In a perfect world, an automated society would produce an abundance of goods and services that would benefit the whole of the nation, and humans would thrive in a world free from the struggles for food and shelter, armed with a sense of leisure and time. In the less-than-perfect world, the rich would capitalize on the industrial surpluses created by automation, and the populace would be reduced to superfluous, under-employed subjects of a ruling class. The trouble here is that

the architecture of capitalism amounts to an uneasy balance between the industrialist

and the worker: the factory owner needs workers to manufacture a product and the

workers need the factory owner to pay fair wages. Automation disrupts this balance,

replacing workers with machines, which surrenders all the power of production to the

309
industrialists. Automation makes the working class obsolete and, absent a solution, the

future after capitalism could go horribly wrong:

The question of class power comes down to how we end up tackling the
massive inequity of wealth, income, and political power in the world today. To
the extent that the rich are able to maintain their power, we will live in a world
where they enjoy the benefits of automated production, while the rest of us pay
the costs of ecological destruction—if we can survive at all. To the extent that
we can move toward a world of greater equality, then the future will be
characterized by some combination of shared sacrifice and shared prosperity,
depending on where we are on the other, ecological dimension.133

In his book Four Futures: Life After Capitalism, author Peter Frase breaks his vision of life after capitalism down into four forms that cross the ideas of scarcity and abundance with those of egalitarianism and hierarchy. These final forms equate to communism, socialism, “rentism” and “exterminism.” The first two are egalitarian systems with either an abundance or a scarcity of goods; the second two are class systems with either an abundance or a scarcity of goods.
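For clarity, these pairings can be laid out as a simple grid (this is merely a summary of the combinations described in this chapter, not Frase's own diagram):

                     Abundance         Scarcity
  Egalitarianism     communism         socialism
  Hierarchy          “rentism”         “exterminism”

Both of the dystopian outcomes sit on the hierarchy row, where, as discussed below, all the power rests with the aristocracy.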

To Frase, the first two systems are the possible, even attractive, options. With communism, he envisions a world economy that marries egalitarianism with abundance. Abundance is the end result of an automated society that provides for everyone… a technological utopia. Under such an egalitarian state—one that favors no one—nobody has to work and everyone is free from want of the necessities of housing, clothing, education and so forth. This is Marxist theory at its purest.

What possible society could be so productive that humans are entirely liberated
from having to perform some kind of involuntary and unfulfilling labor? Yet
the promise of widespread automation is that it could enact just such a

133 Frase, Peter. "Four Futures." Jacobin Magazine. Accessed September 15, 2017. https://www.jacobinmag.com/2011/12/four-futures.
liberation, or at least approach it—if, that is, we find a way to deal with the
need to generate power and secure resources.134

In this environment, humans would participate in projects they find “inherently fulfilling, not because we needed a wage or owed our monthly hours to the cooperative.” His is a paradise of bounty and equality. Frase compares it to the

lifestyle we see in the Star Trek series, where people explore their interests because

they want to, not because they need to. We would be liberated from our need to work.

Under the second form, the system of socialism, Frase envisions an egalitarian world where we have limited resources. The solution here would be a “big government,” which doles out necessities equally among the populace. Again, automation liberates humans from working, but limited reserves of natural resources and energy force the state to manage how these things are consumed.

Our [next] future, then, is one in which nobody needs to perform labor, and yet
people are not free to consume as much as they like. Some kind of government
is required, and pure communism is excluded as a possibility; what we get
instead is a version of socialism, and some form of economic planning. In
contrast to the plans of the twentieth century, however, those of the resource-
constraining future are mostly concerned with managing consumption, rather
than production. That is, we still assume the replicator; the task is to manage
the inputs that feed it.135

To make this system work, Frase suggests that everyone would receive a basic wage, which would be used to purchase the right to consume a share of energy and resources.

Suppose that everyone received a wage, not as a return to labor but as a human
right. The wage would not buy the products of others’ labor, but rather the
right to use up a certain quantity of energy and resources as one went about
using the replicators. Markets might develop insofar as people chose to trade
one type of consumption permit for another, but this would be what the
sociologist Erik Olin Wright calls “capitalism between consenting adults”,

134 Ibid.
135 Frase, Peter. "Four Futures." Jacobin Magazine. Accessed September 15, 2017. https://www.jacobinmag.com/2011/12/four-futures.
rather than the involuntary participation in wage labor driven by the threat of
starvation.136

Again, the premise here is a society where automation replaces the need for labor and government manages the resources while feeding, housing and caring for the populace. In these first two systems, there is a shared sense of community absent the class systems dominant in capitalism. We are either living collectively in a paradise of abundance or collectively in an equitable world of rationing.

Looking forward, Frase applies these same ideas to a world absent equality and

things get quite a bit more desperate. He calls these economic systems “rentism” and

“exterminism,” and each places all the power in the hands of the aristocracy. With

“rentism,” the wealthy control everything through government and copyright law;

in this environment, the industrial base has been entirely automated and the

working classes are left without a source of income. There is an abundance of goods

but the ruling classes create an artificial sense of scarcity and use government and law

enforcement tools to maintain their advantage. In this culture, money is the source of

power and class authority.

With “exterminism,” Frase describes a desperate, Mad Max-like world where goods are scarce and the strong and powerful hoard everything.137

The great danger posed by the automation of production, in the context of a world of hierarchy and scarce resources, is that it makes the great mass of people superfluous from the standpoint of the ruling elite. This is in contrast to capitalism, where the antagonism between capital and labor was characterized by both a clash of interests and a relationship of mutual dependence: the workers depend on capitalists as long as they don’t control the means of production themselves, while the capitalists need workers to run their factories and shops. It is as the lyrics of “Solidarity Forever” had it: “They have taken

136 Ibid.
137 Ibid.
untold millions that they never toiled to earn/But without our brain and muscle
not a single wheel can turn.” With the rise of the robots, the second line ceases
to hold.138

Frase writes that the troubles here are many. Among them, the working classes

become superfluous and pose a real (rebellious) threat to the aristocracy, which leaves

only two solutions: pay off the working classes or exterminate them. This would be

dystopia indeed.

Now, again, the catalyst creating these post-capitalist societies is automation or

the process of using technology to replace human labor. Under the simplest of

interpretations, Frase’s most successful model is communism (Marxism), which

requires a few things to happen: first, we must address demands for energy and natural

resources and, second, we must foster a society that believes in universal sharing. In

the current political/economic environment, these achievements appear nearly

unattainable. What’s worse is that in an effort to move towards this purpose, a slight

deviation favoring one class of people over another could have resounding and

desperate results. For now, I think the global intellectual community is just too

immature to make these advances and that time is the only cure. We must grow

towards this greater purpose. Absent this growth, given our historical relationship with

technology, dystopia is assured.

What is interesting about both these arguments is that they are opposite sides of the same coin: each argues that technology is going to replace us. In the Singularity, the threat is that technology will render all of mankind obsolete; in the economic argument, technology is going to render the working classes obsolete. In the first case, the monster is a dominant technical device; in the second, the monster is a
138 Ibid.
dominant human aristocracy. In either situation, the problems are the same: absent any ethical considerations, and given the unchecked human lust for technology and the human predisposition for greed, we are doomed. Either way, technology is the catalyst, and in both cases the solution is a sense of ethical moderation. Can we tame the lust for technological evolution and the human capacity for greed?

Consider this simple fact, as published recently in Business Insider:

On Thursday, the Treasury Department took a paper off its website that says
“82% of the corporate income tax burden is distributed to capital income and
18% is distributed to labor income,” the Wall Street Journal first reported.

In other words, American workers pay for a small chunk of corporate taxes—
around 20%. The problem with this for [U.S. Treasury Secretary Steve]
Mnuchin is that he would prefer that you think the number is around 70%.139

If you look at that data (only 18-percent of the corporate tax burden falls on labor), one can deduce that the remaining 82-percent falls on capital: surplus revenues or profits, which are being invested elsewhere, and by “elsewhere” I mean those revenues are not being shared with the workers. Instead, they are going back into the business directly or into the pockets of the shareholders. This data is from a U.S. Treasury Department study published in 2012 and clearly illustrates who the winners and losers are in the U.S. economy. Capitalism defines ownership and rewards these elite members of society; turning towards a socialist or Marxist economy seems desperately out of reach. And in fact, at this point in history, it seems nearly impossible, at least in the United States, to turn away from the dual vices of greed and technology, simply because ingrained in our culture is the dual sense that capitalism

139 Lopez, Linette. "Steve Mnuchin Tried to Bury a Number That Tells You Whom Trump's Tax Plan Is Really for." Business Insider. September 29, 2017. Accessed September 29, 2017. http://www.businessinsider.com/mnuchin-buries-research-paper-on-corporate-tax-2017-9.
and technology are the salvations for the nation. Humanist Langdon Winner had this

to say:

American society encouraged people to be self-determining, to pursue their own economic goals. That policy would work, it was commonly believed, only
if there were a surplus that guaranteed enough to go around. Class conflict, the
scourge of democracy in the ancient world, could be avoided in the United
States because the inequalities present in society would not matter very much.
Material abundance would make it possible for everyone to have enough to be
perfectly happy. Eventually, Americans took this notion to be a generally
applicable theory: economic enterprise driven by the engine of technical
improvement was the very essence of human freedom.140

Corporate innovation, enterprising human achievement, wealth and technology…

these things—the American myth suggests—will save us. Ours is a desperate cry for

corporate-sponsored technological advancement but the apparatus is absent one key

component: a human sense of the consequences.

Searching for proof? Consider these things: In the United States, working Americans, by tradition, are allotted two weeks of vacation time, and yet Americans fail to use even this time to exercise and recreate. In the corporate world, taking vacation time is considered a sign of weakness and, in the most dire of situations, there is a fear that taking time away from the office might actually reveal to management how dispensable one is to the enterprise.

When it comes to taking time off, Americans themselves can be the biggest
barriers. A variety of justifications lead about two-in-five workers (37%) to
conclude it is not “easy” to take the time off they have earned. Top reasons
workers say they leave vacation unused are fear of returning to a mountain of
work (40%) and the belief that nobody else can do their job (35%). The effects
of a tough economy still linger: one-third (33%) of employees say they cannot
afford to use their time off and nearly a quarter (22%) of workers say that they
do not want to be seen as replaceable. Roughly three-in-ten (28%) employees

140 Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press, 1986. 45.
do not use all their time off because they believe it will show greater
dedication to their company and their job.141

Consider that for a moment. The American workforce has been convinced that its time

on the assembly line is more important than its time away from the assembly line. The

influence of capitalism is apparent here: we live to work. We aren’t human anymore; we are soldiers in the corporate apparatus and our value as Americans is defined by the efficiency we add to the assembly line.

Langdon Winner saw this as one of the pervasive influences of technology

upon society and suggested that maybe we should reconsider our value system.

A great many people, including some with considerable social power, seem to
have lost the ability to link the specific, concrete conditions of their own work
to any reasonable conception of human well-being. The question just never
seems to come up. To remedy that would require a fundamental change in
orientation for many organizations, vocations, and professions. We encourage
people to become competent in a particular professional field, especially those
concerned with inquiry into natural phenomena and the manipulation of
material reality. At the same time we allow a scandalous incompetence in
dealing with the fundamental, recurring questions of human existence: How
are we to live together? How can we live gracefully and with justice?
Questions of this nature are not, as some teachers like to tell their students,
“soft” ones as compared to the “hard” research questions of science. They are
as “hard” and as challenging as any that science could hope to tackle. They are,
furthermore, eminently practical, involving the combined practice of ethics,
politics, and technology.142

“How are we to live together?” he asks. If we succeed at setting aside our lust for technology and our greed for the surpluses of wealth it generates, maybe we can move towards an equitable post-capitalist culture. But what would that look like?

141 “Overwhelmed America: Why Don't We Use Our Paid Time Off?” Project: Time Off. June 28, 2016. www.projecttimeoff.com/research/overwhelmed-america.
142 Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press, 1986. 162-163.
In their book Commonwealth, authors Michael Hardt and Antonio Negri propose solutions for a world after capitalism. Their model appears suited to Frase’s communism model and it amounts to this: Hardt and Negri seek a world absent misery and they offer this guidance: “…governments should provide everyone with the basic means of life”; they suggest a guaranteed income for everyone and believe that healthcare and education are basic human rights.143 They argue that globalism should supplant nationalism and that border restrictions should be lifted; people should be able to move freely around the globe absent passports and visas. Finally, Hardt and Negri argue that claims to intellectual property rights should be lifted and made common to everyone; this would ensure a universal sense of creative production.

These three platforms are just and reasonable demands to make on today’s
ruling powers. They are nothing but the conditions that most favor the
constituent encounters that we said earlier constitute the wealth of the
multitude in the metropolis: ensuring that everyone has the basic means to life
and good health; creating the conditions that we meet in a relation of equality,
with the knowledge and skills to interact socially; and providing all open
access to the accumulated common wealth that serves as the basis for and is
also enriched by our encounters. Remember, too, that we have already seen in
our analysis that large portions of the global population already possess many
of these capacities, in the networks of biopolitical production, in the life of the
metropolis, and in the fabric of everyday social life. We can demand of the
ruling powers that they be guaranteed and made universal.144

Again, doing these things in the current political climate would be very difficult, but the ideas here need to be stated and offered as a common goal. Looking again to Ray Kurzweil’s ideas about the Singularity, it would be nearly impossible to create a unified, collective thought platform if everyone is fighting over ideals pertinent to individual identity such as intellectual property rights. Capitalism, in its purest form, is

143 Hardt, Michael, and Antonio Negri. Commonwealth. Place of publication not identified: Gallimard, 2014. 380.
144 Ibid., 382.
about a human’s right to own property. If we are to move forward with a utopian ideal of egalitarianism in a plentiful world, we must outgrow our selfishness and mature to a point where we consider the needs of everyone, often, above the needs of ourselves.

This is how Hardt and Negri saw it:

Our free and equal access to the common, through which we together produce
new and greater forms of the common, our liberation from the subordination of
identities through monstrous processes of self-transformation, our autonomous
control of the circuits of the production of social subjectivity, and in general
our construction of common practices through which singularities compose the
multitude are all limitless cycles of our increasing power and joy. While we are
instituting happiness, our laughter is as pure as water.145

Of course, for all of this to be relevant to my idea about the future of storytelling, we

need to believe that one day our stories will be shared over a common platform, or

common place where everyone has equal access.

The trouble here again is this: if we get this wrong, Neil Postman’s vision for

the future may be our fate. As you may recall, Postman compared George Orwell’s

1984 to Aldous Huxley’s Brave New World. In the first dystopia, Orwell created a

world that was ruled by hate; in the second, Huxley created a world that was ruled by

love. In both cases, a small elite class—an aristocracy—ruled the societies, which

were opposites: Orwell’s world was a world of scarcity while Huxley’s was a world of

bounty. Comparing these worlds to the worlds imagined by Peter Frase, Orwell’s

world was a society of socialism tipping towards exterminism, while Huxley’s was a

society of hedonistic communism tipping towards rentism. Looking back upon recent

and ancient history, we’ve seen examples of both these societies: Orwell’s mirrors the worst aspects of the former USSR while Huxley’s offers a glimpse of the final, gluttonous days of the Roman Empire. The only advantages we have over these two
145 Ibid., 383.
fallen societies are the lessons they offered in their decline… and the warning that

Neil Postman offered us, suggesting that, now more than ever, we are susceptible to

wandering backward into these societal abysses.

Again, these are my visions of our dystopian future, but let me return to

the central premise of this thesis: What will the future of storytelling look like in the

Digital Age?

Part IV: Conclusions

Chapter 8

The Digital Incunabula

“What’s past is prologue,” wrote Shakespeare in The Tempest.1

Everything that has come before has been practice for the advances that lie ahead.

Johannes Gutenberg had no idea what his invention was going to do to Western

culture. He was simply inventing, experimenting with a mechanical process that he

hoped would help replicate the handwritten word and do so in a pleasing manner. His

meticulous nature forced him to raise his standards to ones of perfection, and his ultimate work, his final prize, was a collection of 180 mechanically printed Bibles, which were seized by the courts and handed over to his enterprising financial backer.

Along the way, Gutenberg established several standards including the basics for

typography, printing type, ink formulas and the process for a printing run. The

byproducts of that work—the birth of literacy, the demand for literature, the division

of theocracy, the fertilization of nationalism, the death of oral culture—were really beyond the scope of his understanding and, frankly, not of his own doing. We took to the printing press because of the economies it presented.

Looking again at the economic structure of media, Gillian Doyle condensed it down to three phases: production, packaging and distribution.2 With manuscripts, the production was the packaging: all the scribes were doing was copying pre-existing texts, replicating them with such exactitude that, should they come upon a typographical error, the scribe would include it in the new version, just to

1 Shakespeare, William, and Mark McMurray. The Tempest. Canton, NY: Caliban Press, 2001.
2 Doyle, Gillian. Understanding Media Economics. London: SAGE, 2013. 69.
assure that the lines, columns and pages were an exact match to the original.3 The

manuscript was entirely about how it appeared and the beauty of it was observed not

in the content of the written words, but rather in the completeness of the entire work,

the “illumination.”4 The printing press extinguished the orality of the manuscript.

During the first Incunabula, the transition from manuscript to print, there was a

push to preserve the “illumination” of the publication; after all, it was the gold leaf and

the ornate illustrations that gave the book its value. In fact, Gutenberg’s Bible—the

B42—was crafted as an Incunable complete with blank spaces for rubricating; the idea

was to have artists hammer the gold leaf into place and paint the illustrations alongside

the text after the inks had dried, and that was actually the process applied to the 180 books produced during Gutenberg’s initial print run.5 As proof, all one must do is review

two Gutenberg Bibles: say, the specimen available at the Library of Congress and the one at Princeton University; these books are products of the same press run but the

“illumination” in each is very different. The Bible in the Library of Congress is rather

spare, tasteful but plain; the Bible at Princeton has much more colorful initials and the

marginalia, when it occurs, is exciting to see.6 Searching further afield, the two

3 Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 24.
4 Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982.
5 Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983.
6 "Princeton University Digital Library -- Item Overview." Princeton University. Accessed March 12, 2017. http://pudl.princeton.edu/objects/7d278t10z.
Gutenberg Bibles available at the British Library are wonderfully garish by

comparison.7

Clearly, the transition from the manuscript to the printing press was a slow, steady process, one that attempted to preserve the best traditions of the former as new production elements were created for the latter. But the economics of

handmade books just couldn’t stand up to the new commerce of mass-produced

printed texts. Economist Joseph Schumpeter identified this trend in capitalism as “the

perennial gale of creative destruction,” which is where something new sweeps in and

destroys something old.8 Schumpeter put it this way:

The opening up of new markets, foreign or domestic, and the organizational development from the craft shop and factory to such concerns as U.S. Steel
illustrate the same process of industrial mutation—if I may use the biological
term—that incessantly revolutionizes the economic structure from within,
incessantly destroying the old one, incessantly creating a new one. This
process of Creative Destruction is the essential fact about capitalism. It is what
capitalism consists in and what every capitalist concern has got to live in.9

Schumpeter then explains that it takes time to assess the values of the old system and their usefulness in the new one. It’s a blessing that the Incunabula period was 50 years long, simply because it gave the European publishing community time to

experiment and appreciate what was left of the manuscript era. But the printing press

also helped launch a new age of commerce and this birth of capitalism ultimately

created an economic imbalance between the process of the manuscript and the process

of the printed word. Schumpeter argues that economies seek equilibrium and, in doing

7 Wight, C. "Gutenberg Bible: View the British Library's Digital Versions Online." British Library. September 07, 2004. Accessed March 12, 2017. http://www.bl.uk/treasures/gutenberg/homepage.html.
8 Schumpeter, Joseph A. Capitalism, Socialism, and Democracy. Mansfield Centre, CT: Martino, 2011.
9 Ibid.
so, destroy the weaker system.10 Gutenberg’s press produced 180 books in the time it

took one scribe to produce one volume of the Giant Bible of Mainz.

Turning to the present, the Digital Incunabula is in the process of dismantling

the second orality. Economically, it makes sense to move the published word away

from ink and paper to a digital realm circulated over a vast digitized delivery system.

Doing so eradicates the process of producing paper and inks, and of moving text stories through a labyrinth of (physical) publishing rituals before the final product, the book, is produced. Instead, the publishing process moves through a digital realm, which is more economically streamlined and delivers content more rapidly to a waiting audience at a fraction of the expense.

Next, Johannes Gutenberg intuitively knew that text on the printing press had

to be shaped in a way that would make it easier to read, and he designed a typography for the printed word. In the Digital Incunabula, a new typography must be formed. We

need to create a process that commingles and layers content in a way that makes it

easily digestible and it appears, with experiments including “Snow Fall” and

“Firestorm,” we are working towards that goal. Of course, to realize a final,

accommodating solution, the audience needs to build a new literacy, one that requires

a cognitive sophistication to appreciate associated ‘hot’ and ‘cool’ media. If the first

literacy, as I suggested earlier on, shaped human evolution, this new literacy will

certainly attempt to do the same thing. The first literacy joined our brain’s cognitive

abilities to recognize images with our cognitive ability to hear sound. In the new

literacy, the links between these cognitive centers and others will have to aid our

10 Ibid.
comprehension. This begs the question: do Homo sapiens have the potential for further

cognitive enhancement?

Looking to the middle of the century, the Digital Incunabula will be reaching

its fruition and the future of publishing will be better revealed. Based on my research,

there will be at least four variables that will influence the development of digital

storytelling. They are as follows:

§ A viable narrative story form, which is both pleasing and informative;
§ A production community who possess the imagination, skills and expertise to conceive of and create coordinated, seamless multimedia stories;
§ A consumer electronic device—or digital platform—that forms the story, giving it a place to dwell;
§ And finally, a receptive audience with the cognitive sophistication to absorb, understand and appreciate complex digital story forms.
Again, these issues merely clarify the digital media production model: Content creation → Packaging → Distribution. Since the inception of media, these three ideas have been the variables affecting the success of a given media form, and the digital age isn’t immune to these factors. Looking at the list above, narrative story form and content production address the first issue, the inception of a receptive consumer device addresses the second and a digitally literate audience addresses the last variable.

Let’s examine how we move forward.

Content Creation

During the Middle Ages, the death of the manuscript marked a departure from

oral culture, according to Marshall McLuhan and Walter Ong. Orality is performance

and the manuscript was a form of the performance arts. Literacy, according to Terry

Eagleton, separated the man from his ideas and forced people into the silence of their

own minds. It would take 300 years for the oral media to catch up.11 And when hot

media including photography, audio, radio and television emerged on the scene, they

created their own independent relationships with the public.12 It would take the

inventions of the personal computer, the Internet, the online retailer and the tablet

computer to create a creative culture that might allow, finally, for the next phase of the

age of storytelling: we started with oral, we moved to literal, and finally we are ready,

at least technologically, for multimedia.

So how do we get there?

At this point in history, we are obsessed with capitalism and manufacturing

with a fervent concern for the bottom line. As oral creatures, we must rethink our

obsession with the written word; writing is assumed to be more permanent, more

official, than oral messages, but this no longer needs to be the case. We also have to

contend with the forces of Creative Destruction and let the economics dictate what we

should save from the print-only culture, and what we should embrace in the print-

digital culture.

Again, the publishing model is production, packaging, delivery… and

advances in digitized media have certainly set the groundwork for a huge disruption in

the publishing world. Specifically, digital video and audio can be moved about quite

freely, which aids the production process; but as we’ve learned with digital sound,

consumers don’t quite respect the content without the packaging. So future

11 Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982.
12 McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press, 1962.
incarnations of digital sound, and most likely digital video, should include some

packaging component. Of course, the Internet has completely overturned the delivery

model, making it virtually cost free for producers.

Again, the technology is there for our use.

On the issue of copyright, the government needs to stop giving priority to the

corporations and it needs to lift the restrictions guarding corporate content from

amateur production projects. In the 16th century, the English crown chartered the

Stationers’ Company, giving the printers of London an absolute monopoly over book

production throughout the city and the country. The guild’s leaders then used their

influence over government to pass restrictive copyright laws and a ban on the import

of books from outside England.13 These decisions stunted the progressive growth of

literature, or at least the literature created outside the traditions firmly established by

the monarchy. Here in the United States, it appears that Congress may have simply

replicated the Stationers’ Company model, surrendering the powers of creative control

to the corporations. And as we’ve learned, creativity inside the structure of the

corporate model is dull, absent the idea of “public interest” and entirely profit driven.

The New York Times’ decisions to experiment with “Snow Fall” and “The Jockey” were

nearly anomalous events; as proof, consider the fact that News Corp., after just a year

of experimentation, killed The Daily because company executives didn’t see it turning

profitable in the near term. That decision stalled what could have been an aggressive

shift toward the newspaper of the future.

13 Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983.
So what should long-form nonfiction publications of the future look like? We

have learned during the last century that audiences enjoy visual storytelling both in film and television and that there is an opportunity to incorporate those forms into a converged platform through video. The problem is that video is a ‘hot’ medium, or one of performance, and text is a ‘cool’ medium, or one of concentration. When paired together, the user is torn between allowing the publication to speak to them and placing their concentration inside the publication. It’s an odd

inverted relationship. Should I read? Should I watch? How do we engage both?

All the media theorists have touched on this idea in some fashion. When we consume

media, we approach it with a sense of consciousness but no one has offered a clearly

defined explanation of what goes on between the human mind and the medium that is

communicating with it. So far, I’ve danced around the subject, explaining McLuhan’s

definitions of ‘hot’ and ‘cool’ media hoping to lay the groundwork for the next major

leap in thought.

It amounts to this: ‘hot’ media performs, ‘cool’ media commands performance.

Let me pull back and talk about Martin Heidegger. In his book Being and

Time, the German philosopher created the idea of “Dasein,” which is German for “being there” or “existence.”14 As I understand it, Dasein is the relationship man’s consciousness shares

with his physical world—we interact with the hammer, the cup of coffee, our

reflection in a storefront window—this is the sensory interaction between self and our

perception of the materials we encounter in the world; therefore: I am hammering; I

am drinking bitter coffee; I need a haircut. This is our Dasein, our perception of being

in the world.
14 Heidegger, Martin. Being and Time. London: SCM Press, 1962.
In his book Literary Theory, Terry Eagleton approaches the issue in a chapter

entitled “Phenomenology, Hermeneutics, Reception Theory,” where he explains that

“All consciousness is consciousness of something: in thinking, I am aware that my

thought is ‘pointing towards’ some object. The act of thinking and the object of

thought are internally related, mutually dependent. My consciousness is not just a

passive registration of the world, but actively constitutes or ‘intends’ it.”15

In the last sentence, he uses the phrase “intends it,” which suggests that we can

direct our intellect to perceive the intention of an object. Marrying that

idea to Heidegger’s Dasein, we have the ability to direct our consciousness to interact

with an object; we can “intend” to perceive that object. Armed with these shared ideas,

let’s turn to ‘hot’ and ‘cool’ media.

In our relationship with ‘hot’ media—music playing from a radio—we don’t

necessarily have to direct our attention to the radio to hear the music; instead, the radio performs the sounds, which reach our ears as music; when it comes to our intention

with Dasein, all we do is allow the medium to deliver the performance to our sense of

hearing; therefore, ‘hot’ media performs. The opposite is true with our relationship to

‘cool’ media—the text inside a novel—we must direct our Dasein into the pages of the

book and methodically scan the black symbols across the page to experience their

collective meaning; the act of reading, or scanning symbols, is performance, which

must be done to experience the medium of text; therefore, ‘cool’ media commands

performance.

15 Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of Minnesota Press, 1983. 47-78.
In their book, The Principles of Multimedia Journalism, authors Richard

Hernandez and Jeremy Rue use different language to explain the same principle. They

call these divergent media “lean-back” and “lean-forward” media. So, with television, you

lean back and let the video play over you; with computer technology, you lean forward

and concentrate.16

We know from decades of research that “lean-forward” devices—such as desktop computers—correlate with shorter attention spans, especially when
compared to modes where a user is in a more comfortable setting, such as
watching television or reading a book on a couch in the living room. The
typical television show lengths are set in half-hour increments, whereas most
Web videos are only a couple of minutes long.17

Which brings us to the prospect of multimedia. For multimedia to function

properly, the tools of the media narrative must morph together in a tapestry of story; or

as Richard Wagner described the transcendent media experience: “Not a single richly

developed capacity of the individual arts will remain unused in the Gesamtkunstwerk

of the future.”

This begs the following questions: Is it possible for the Dasein of the human

animal to have locomotive thought that trades between the actions of both perceiving

and performing? Can we consume the performance—pause—perform the action—

pause—consume the performance… over and over again during a common

multimedia journey?

This would certainly be a break from the tradition of “deep reading” alone in a

quiet place; and a break from sitting in a dark theater absorbed with the projections on

the movie screen. This true multimedia experience would be a seesawing effect

16 Hernandez, Richard Koci, and Jeremy Rue. The Principles of Multimedia Journalism: Packaging Digital News. New York: Routledge, Taylor & Francis Group, 2016.
17 Ibid.
between divergent ‘hot’/‘cool’ media. To experience true multimedia storytelling, the

human mind must be prepared to engage such a challenge. As it happens, we do have

some experience with that.

As Steven Johnson points out in his book Everything Bad is Good for You, the

human mind is pliant and malleable and adaptive. After decades of viewing television

drama, TV audiences developed the necessary viewing skills for something more

complex, and by the early 1980s, stick-figure dramas evolved into more complex story lines like the ones playing out on Hill Street Blues… and later The Sopranos…

and later Game of Thrones.18 Given some experience with multimedia storytelling, it

is entirely possible that multimedia audiences can learn a similar adaptability.

In fact, in his book The Brain That Changes Itself, psychiatrist Norman Doidge writes

that one of the amazing things about Homo sapiens and their brains is their ability to

adapt to new neurological conditions. In neurological science, this ability is called

“neuroplasticity” and it means that the human animal has a brain that is highly

adaptive and plastic… like “Play-Doh.” He puts it this way:

Plasticina, he tells me, is the musical Spanish word for “plasticity,” and it
captures something the English word does not. Plasticina, in Spanish, is also
the word for “Play-Doh” or “plasticine” and describes a substance that is
fundamentally impressionable. For him our brains are so plastic that even
when we do the same behavior day after day, the neuronal connections
responsible are slightly different each time because of what we have done in
the intervening time.19

Reflecting upon the first Incunabula, as the printed book increased literacy around the

world, this literacy transformed the human cognitive structure, altering Homo sapiens’

18 Johnson, Steven. Everything Bad Is Good for You: How Popular Culture Is Making Us Smarter. London: Penguin Books, 2006.
19 Doidge, Norman. The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science. New York: Viking, 2007. 209.
brain chemistry and neurological pathways. The adaptive human brain assimilated to the literal medium by rewiring itself. In the current Digital Incunabula, there is no

reason human biology won’t replicate this transformation. The human brain is plastic

and changeable… but Doidge warns us that adaptability tends to take a generation.20

On the issue of multimodal story production, for true multimedia to converge, the

entire project must be conceived by an expert in the arts of several media… just as the

Master Printer orchestrated all of the elements of the press run, the multimedia

“Master Producer”—in effect, the Executive Producer—must be aware of all the media forms. This producer must know how video and text and photos and sound converge together to form a unified story. As it stands now, there are very few with

the knowledge, the skills and the acumen to assume this role. Historian Elizabeth

Eisenstein explains the talent of the master printer this way:

…we need to recall that early printers were responsible not only for publishing
innovative reference guides but also for compiling some of them. To those of
us who think in terms of later divisions of labor, the repertoire of roles
undertaken by early printers seems so large as to be almost inconceivable. A
master printer himself might serve not only as a publisher and bookseller, but
also as indexer-abridger-translator-lexicographer-chronicler. Many printers, to
be sure, simply replicated whatever was handed them in a slapdash way. But
there were those who took pride in their craft and who hired learned assistants.
Such masters were in the unusual position of being able to profit from passing
on to others systems they devised themselves.21

In the Digital Age, executive producers working to forge multimedia stories must replicate the achievements of the master printer, which is not impossible. And, in time,

the labors of the executive producer will be dispersed (as has happened in many cases)

among other specialists.

20 Ibid.
21 Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press, 1983. 66.
Packaging: The Next ‘Black Box’

On the issue of packaging, in his book Convergence Culture, Henry Jenkins

warns us not to focus on what he calls “the Black Box.”

Much contemporary discourse about convergence starts and ends with what I
call the Black Box Fallacy. Sooner or later, the argument goes, all media
content is going to flow through a single black box into our living rooms (or, in
the mobile scenario, through black boxes we carry around with us everywhere
we go). If the folks at the New Orleans Media Experience could just figure out
which black box will reign supreme, then everyone can make reasonable
investments for the future. Part of what makes the black box concept a fallacy
is that it reduces media change to technological change and strips aside the
cultural levels we are considering here.22

Before I read Jenkins’ book, I almost fell into the trap, writing entirely about the

emergence of the iPad, but—as he says—this is just another black box. What we really need to do is set aside all the electronic detritus and think about how media should

appear to us. The iPad puts many forms of media in front of us but the tablet computer

doesn’t appear to be the final form; another incarnation, another consumer electronic

device, will follow, which will improve upon the iPad, the way the iPad improved

upon the newspaper, the television, the desktop computer and so forth.

But are tablet computers “lean-forward” or “lean-back” tools?

In their book on the issue, Richard Hernandez and Jeremy Rue return to the

2010 Apple presentation by Steve Jobs: “During a portion of the demo, he

conspicuously sat on a small black couch on stage while demonstrating the tablet’s

capabilities projected behind him.”23 Jobs was dying of cancer at the time, but Apple

is known for its choreographed presentations and Hernandez and Rue believe the

22 Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006. 14.
23 Hernandez, Richard Koci, and Jeremy Rue. The Principles of Multimedia Journalism: Packaging Digital News. New York: Routledge, Taylor & Francis Group, 2016.
couch was a prop designed to make the “lean-forward, lean-back” argument: “…the

couch is symbolic of the type of consumption this device respects. It’s a casual device,

one on which you want to lean-back and consume content, rather than create content.”

Clearly, the iPad is a “lean-back” device.

So what will replace the tablet computer? Looking at the newspaper and the

iPad for clues, let me say this: The newspaper, for a time, was the perfect medium. It

was cheap, organized, portable, lightweight and disposable. Comparatively speaking,

the iPad is organized, lightweight and portable but it certainly isn’t cheap or

disposable. Whatever replaces the iPad should reflect these things.

In the Steven Spielberg film Minority Report, a sci-fi police story, Spielberg creates a digital periodical that looks like a newspaper and has the ability to download new content and display video content. In the scene, the main character John Anderton (played by Tom Cruise), a fugitive, is sitting on a crowded subway train; across from him, a man is reading a digitized copy of USA Today. As we peer over the shoulder of the man with the paper, the cover page transforms, keeping the banner but replacing the lead story with a digital alert which reads “breaking news” and shows

moving images of John Anderton.24

Again, this is science fiction but in 2013, Samsung announced that it was

experimenting with flexible, bendable OLED displays. The technology isn’t here yet,

but it is coming.25 Regardless of the timing, when the product (or products) does

24 Goris151515. "John Anderton Usa Today." YouTube. March 22, 2009. Accessed March 14, 2017. https://www.youtube.com/watch?v=jLEeDla2u40.
25 "Flexible OLED: Samsung Already Defending an Unassailable Lead." Android Authority. February 21, 2017. Accessed March 14, 2017. http://www.androidauthority.com/samsung-flexible-oled-production-751220/.
finally arrive, it will certainly be Internet-friendly, it will certainly adapt to the iTunes Store (and its competitors), and users will possess a mature sense of adaptability to adjust to the evolution of the technology.

The software is also there. Millions of applications are already online and

millions more are in development. When the time comes to package multimedia, there

will certainly be a market for it.

Searching for early examples: E.O. Wilson’s multimedia textbook series Life

on Earth, currently available on iTunes, shows how these media tools can be joined

together. In this body of work, there is text, there are interactive illustrations and there

is a video explanation for each topic; all the student must do is read the explanation,

sample the interactive illustrations and then allow the scientist, E.O. Wilson, to explain

the details. It’s a sweet model with lots of potential.

Right now, producers at The New Yorker have been experimenting with companion media on the covers of this historic publication. In one

case, the cover art was the picture of a pane of glass on a wet rainy day; there is the

outline of a cityscape and the blur of a yellow cab in the foreground. On the analog

edition of the publication, the audience just got a static shot of the cover art; but on

the digital version, there are drips of water streaming in a cyclical pattern down the

cover of the magazine. The effect is alluring and somewhat soothing and melancholy.

It also marks a nod towards the power of the digital… and, given some imagination,

the possible restoration of the “illumination” we lost when we killed the manuscript.26

26 Mouly, Françoise, and Mina Kaneko. "Cover Story: Christoph Niemann's Rainy Day." The New Yorker. February 09, 2015. Accessed March 12, 2017. http://www.newyorker.com/culture/culture-desk/cover-story-2014-10-06.
Further, journalism specifically and nonfiction storytelling generally need to

get off the web browser model. This is a packaging issue and the relationship has been

awful from the beginning. The news industry has been suffering on a scale not too

dissimilar to that of the record industry. After all, no one really wants to pay for news we find freely on the Internet. Getting news off the web browser isn’t entirely

impossible; it just requires producers to create the proper packaging, which will likely

encourage the audience to pay for the content. For now, the easiest way to do that will

be for them to craft software Apps, which can be downloaded to smart phones,

computer desktops and tablet computers. Doing so creates the permanence of

packaging, making these stories less vulnerable to updates on the browser and Internet

software protocols. It also creates an opportunity for archival purposes, which merely

amplifies the potential audience size.

A word about the amateur: Because of the copyright laws, the current creative

culture is prohibitive and that should change. It’s unlikely that legacy media are going

to lead the charge forward into the age of multimedia and there needs to be an element

of creative entrepreneurship in the mix. Easing the copyright restrictions would allow members of the public to explore and experiment with film and music and literature—the fabric of contemporary society—with an eye towards the future. It’s

uncertain what will come of it… but there should be an element of the ‘Black Swan’

syndrome (or something wholly unexpected) in the mix. Someone somewhere

whittling away at a story concept absent the restraints and shackles of an

overprotective and cumbersome legal system.

Want proof? Look at the myriad of Kickstarter campaigns currently underway

on the Internet. The purpose of the site is to allow entrepreneurs and inventors a place

to showcase fledgling projects looking for funding and other support. As I’m writing

this, four million people have donated in excess of $112 million to publishing projects.

Let’s reflect back to the beginning.

In the 1450s, when Johannes Gutenberg began building his printing press, he

was an experienced goldsmith and rubricator who saw a better way and began—in the

true entrepreneurial spirit—to convert a wine press into his vision for a printing press.

To do this, he needed capital, and he found a financial backer, Johann Fust, who

funded the project for a time. In the 1920s, when Philo Farnsworth was assembling the

components for his first television set, he was paying for it mostly out of pocket until

he found a pair of financial backers who helped him get the project off the ground. In

the 1970s, as Steve Jobs and Stephen Woziak were developing their first Apple

computer, they found financial support through the form of advanced orders from a

small retail vender in San Francisco. And in the 1990s, when grad students Larry Page

and Sergey Brin were writing the computer code for the Google search engine, an

interested investor found them and wrote them a check for $100,000.

As it stands now, the process of invention, the process of advancing the digital

realm of storytelling, hinges upon the inspiration of rogue thinkers and intuitive

investments from financiers with enough imagination to gamble on emerging

technologies. Wouldn’t it be detrimental if some looming government policy were

standing in the way of that next paradigm shift?

The Future of Storytelling

We tell stories for a variety of reasons. Stories are our histories, our parables

and the root source of our cultural heritage. For the last 1,000 years, the model for

storytelling has been an organic relationship between man and media: The poet recites

the lyrics of his epic as the audience engages, watching with their eyes, listening with

their ears; the author transcribes his ideas, forging them into linear symbols, which are

presented in a literal form to a reading public that scans the symbols with their eyes

and the language of the author’s tale is revealed through internal monologue. In each

of these cases, the story’s path into the respective minds of the audience is through the

human senses. This is the organic pathway that media follow linking the story to the

sentient imagination of the human being.

But what form would the story take if it were possible to bypass the senses?

In the current form, our stories are about conflict resolution or conflict

calamity or conflict unresolved. Either the hero saves the day or the tribe is lost or the

outcome remains uncertain. Searching for examples, James Bond saves the world; the

Greeks sack Troy; Larry Darrell drives away in his taxicab never to be heard from

again. These are our story forms and they’ve been this way for some centuries.

With news, the tapestry is a long unbroken fabric, ribbons of facts vaulting forward towards some uncertain resting place; to make sense of it all, editors cut swaths from the cloth and hold them up to the light to explain to us what it

might all mean. Even stories that seem complete, like the Watergate Investigation or

the Kennedy Assassination, are still ongoing; these stories didn’t end with Richard

Nixon’s resignation in 1974 or with the Warren Commission Report in 1964; instead,

these stories continue to be reinterpreted, measured against changes in the pop-culture

narrative. Nixon may have been guilty of corruption in 1974, but the standards for

decency have shifted so radically that one might argue “by today’s standards…”. This

is an interpretation of the story through the lens of its historical ‘horizon,’ which is to say the text is measured against what is considered culturally fashionable at the

time.

…a more historically-minded member of the school of Constance is Hans Robert Jauss, who seeks in Gadamerian fashion to situate a literary work
within its historical ‘horizon’, the context of cultural meanings within which it
was produced, and then explores the shifting relationships between this and the
changing ‘horizons’ of its historical readers. The aim of this work is to produce
a new kind of literary history—one centered not on authors, influences and
literary trends, but on literature as defined and interpreted by its various
moments of historical ‘reception’. It is not that literary works themselves
remain constant, while interpretations of them change: texts and literary
traditions are themselves actively altered according to the various historical
‘horizons’ within which they are received.27

Terry Eagleton describes this process as “literary reception.”

Hayden White says it’s up to the storytellers to design the story form to

transform chronicle to narrative. In doing so, the story producer creates an opportunity

for aesthetic evolution because narrative form gives way to creative presentation.

Narrative, after all, is about story design, the imparting of information and human

empathy, or how the author engages the audience and, more to the point: does the

audience understand?

Which brings us to the intricate issue of human consciousness or the way

information is received by the human mind. There are many mysteries but one of the

greatest may be about the idea of human intellectual thought. Given the biology of the

27 Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of Minnesota Press, 1983. 72.

human brain (of which we know so very little), it is fascinating to consider that the

human mind is little more than an organ filled with chemicals, which are triggered into

action by little jolts of electrical current. According to the current research, the human

brain requires roughly 20 watts of electricity to operate.28

At this point in our evolutionary development, the human animal is dependent

upon the senses to deliver information to the brain; take, for example, our need for

sight: We see the wolf, the image appears in the brain, the brain determines that the

animal is attacking, and we respond. This is the ‘first person’ understanding of the

world. When we read books or consume other media, we witness the event; even if the

perspective is from the vantage point of the ‘first person,’ we realize that we are, for

example, sitting in a movie theater at a safe distance from the events we are viewing.

We are passively removed.

Imagine now what form the story would take if the message bypasses the

sensory receptors of the human body and is, instead, injected directly into the brain

tissue. We are no longer seeing the wolf in the traditional sense, which means the

medium (light) traditionally used to deliver the images is absent, and—instead—the

images of the wolf attacking us are delivered directly into the brain (as electricity) as

though we are really seeing them; how would the human body react to that

experience? Would we still believe that the animal is a conveyance of media? Or

would we have the biological fight-or-flight act of self-preservation? Where does the

capacity for reason occur in the biological process of seeing?

28 Maxwell, Mary. Human Evolution: A Philosophical Anthropology. London: Croom Helm, 1984. 62.

Transforming information into a digital form converts the written word from

the tactile medium of news-printed information into electronic impulse signals. In

other words, the new medium—the medium McLuhan would have us observing—

would be electricity, which is the only known medium with the ability to penetrate and

communicate with the brain directly, or more specifically, absent the use of the five

senses. Suddenly, and this bears observation, this medium would be a new form, a

new experience, an ‘injected experience’ or an ‘infusive media’ that circumvents the

human body; and, in the process, this information also circumvents the human’s

capacity for reason. This would be true empathetic storytelling.

Before I move on, let me define ‘infusive media’: Infusive media are media

that enter the human consciousness absent the assistance of any of the five senses. I’m

not even sure if this is possible. Can the data related to sound be placed inside the

mind of a listener? Not the idea of the sound but rather a streaming melody equal to

the sentient hearing experience?

It is from this vantage point that I look to expand upon Robert Logan’s ideas

about the Digital Orality. He suggests that the Digital Orality is about transforming the

written word into a broadcast medium; or transforming literal communication into a

‘hot’ medium.

I’d like to move this forward. If the purpose of storytelling is to transport an

audience to new and distant places and the technological advances in storytelling have

continued to move the producer and the audiences closer together, wouldn’t the end

result be a true empathic experience? Wouldn’t the ultimate goal be to unify the

storyteller and the audience together? This is the essence of my idea about Infusive

Media. This would be the conveyance of story as a total sentient experience or an

empathetic exchange that induces in the audience the exact thoughts and feelings

observed by the author in a distant time and place. The idea of ‘me’ would be shared

completely; in fact, the idea of ‘me’ could be recorded and broadcast digitally.

This would be the ultimate goal of the Digital Orality.

Armed with that idea, the next consideration is over the shape of stories. If the

human brain possesses the plasticity to adapt to Infusive Media, it will be entirely

possible to plant human experience, human thought, human memory, and advanced

story into the mind of the participant. Clearly, the content and the presentation of the

story will have to be reconsidered entirely.

Consider again Ray Kurzweil’s ideas about “experience beamers,” or people

who share their cognitive living experiences with others in a way that the audience is

feeling exactly what the producer (or “beamer”) felt… considering that, one might

begin to perceive what the final Digital Orality may become: one where people are

connecting with digital libraries and downloading the cognitive experiences—the

sights, smells, sounds, tastes, touches—of that sensual moment as experienced by

another human in a life that is vast and different and at a long distance separated by

space and time. In that moment, the producer and the audience will be one… the same

person.

Let me take it a step further: Imagine a digital matrix where everyone is

connected, forming a ‘digital hive,’ and everyone is sharing the same collective

experiences. We think together, we feel together, we are uniform in thought and mind.

Suddenly, one story belongs to everyone; one history belongs to everyone. In this

place, the idea of “we” is replaced with the uniformity of “me.” This digitally

integrated human agglomeration would become a single entity, a single sentient “me.”

A series called Year Million, which appeared on the National Geographic

Channel in 2017, took up this issue. The overarching theme of the six-part series

looked at the future of integrated human communication and it moved roughly through

Ray Kurzweil’s six developmental epochs of “the Singularity” to make its argument.

In episode four, producers introduced the idea of a “hive mind,” or a unified group-

think experience:

You and all your fellow concert-goers, your brains are hardwired together. Still
not getting it? For the price of admission, when all of your brains are
connected, you can be the performer, the audience, the orchestra, the vibrations
in the air itself. You hear a melody, it sparks an emotion, a memory. Down the
rabbit hole you go. Imagine literally teleporting yourself to that moment in
time and actually living the experience. Welcome to your future, in the hive
mind. It’s the deep future. Your body, gone. You’re all computer, all the time.
Your brain is way more powerful than even a billion supercomputers. Jobs,
food, language, water, even traditional thought, all of humanity’s building
blocks, all that’s done. And you are immortal. Squirming in your chair yet?
You should be.29

In this “hive mind” world, we are everyone and no one. We surrender our single

identity to become a part of the greater whole, the group-think, the uni-present

consciousness defined as “we” or even ‘the greater I.’ In this situation, there would be no

need for physical media because the stories would be gathered within the realm of the

“hive mind.”

Ah! But this idea is merely part of the greater musing of the science fiction

community… at least at the moment.

29 Connolly, Chris, Jenny Connell Davis, Jeremy Lubman, and Brian Wizermann, writers. "Mind Meld."
In Year Million, directed by Mark Elijah Rosenberg. National Geographic Channel. 2017.

For now, we are in the opening chapters of this period of digital storytelling

and the best we can hope for is a better understanding for the near term. I mean,

imagine a 16th century audience attending a 3-D viewing of Avatar or Star Wars; then

project forward 550 years into the future, and you can begin to understand the

temporal and developmental distance between our cognitive abilities now as compared

to where they will be in the year 2537, and that’s only if we progress at the natural rate

of evolution. The futurists are saying “the Singularity” could accelerate human

evolution at a rate we can hardly imagine and Ray Kurzweil has even gone as far as to

predict that 2045 is when the acceleration will begin.

Ahead of all that, all we can do now is ask the basic questions: Is the human

mind capable of empathy, telepathy and Infusive Media? Will this change in evolution

be safe and productive? And how do you suppose Max Horkheimer and Theodor

Adorno would react to a technology that literally injects content—like a hypodermic

needle—directly into the consciousness of a living person? There are so many things

to consider and work out as that future draws closer.

So, what is our potential for storytelling in the near term?

Reflect back to Gutenberg and the changes that followed. To get his Bibles

produced, Gutenberg had to experiment with metal alloys and ink solutions; he also

had to work through design considerations that commingled text and Illuminated

images; to do these things, he and his journeymen had to learn all sorts of new

processes and the concept of the “master printer” was created. A whole host of new

industries, technologies and innovations followed. The printing press triggered an

interest in vernacular languages; booksellers sprang up all over Europe; and a list of

professions emerged, including writers, editors, researchers, designers, typesetters, librarians,

professors and so forth.

Turning now to the contemporary movement: Digitized media have inspired all

sorts of new industries… especially with regard to the Internet. Text stories, sound,

still and moving images all have a presence in the digital age. Social media, for

example, have moved beyond a mere cottage industry and into a full-blown segment

of industrial commerce; so much so that new cottage industries—such as emoji

developers—have emerged and are finding success in the global economy.

On YouTube, a new kind of celebrity has emerged: the YouTube “Influencer,” or

someone whose videos earn enough traffic that they are being paid for that attention.

One of the first was Marina Orlova, an attractive, educated video blogger who hosted

a site entitled “Hot for Words,” where she explains the etymology of words. And

while she does have the educational background to deliver this information, Orlova

used sex appeal and innuendo to attract over 470 million page views to her channel,

making her the first “influencer” to earn $1 million on YouTube.30 Hundreds more

have duplicated her model on YouTube and other social media platforms. Looking

back, these content creators are similar to the hack writers of Grub Street, creating

content for the purpose of gaining wealth, not to impart important information.

The Digital ‘Master Printer’

But let’s talk about the future of storytelling specifically: The shape of

information has shifted dramatically. When it comes to breaking news, that

information is delivered in short bursts as text messages generated by a variety of


30 Marcovici, Michael. The Wealthy Blogger. Books on Demand, 2014.

media outlets. When the Boston Marathon Bombing happened, I learned about it over

Twitter; when Happy Days actress Erin Moran died, I learned about it from a text

message from The Washington Post. In this format, text messages are brief—just a

dozen words or so—and it’s up to the audience to know the background to understand

the value of the information (therefore, I had to know that Erin Moran was “Joanie”

on Happy Days); Twitter and text messaging are broadcasting instruments for literal

content and, given the form, are probably as complex as the medium will ever get. If the

news is interesting enough, the audience often runs to other platforms—radio,

television, both purely hot media—for a more thorough understanding. After I

read about the Boston Marathon Bombing, I turned on CNN knowing it would have

video and soon a narrative. These different media forms are associative but not

necessarily acclimated or integrated to aid one another: The Twitter message piqued

my interest enough, driving me to find a television set and a cable news signal to see

what was going on. In a future, more fluid world of advanced multimodal

communication, the Twitter message should also grant the user the ability to migrate

instantaneously to a related video signal.
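
To make that notion concrete, here is a minimal sketch in Python of what such a convergent alert might look like. This is my own hypothetical illustration; the field names and the follow_alert helper are invented for the example, not drawn from any existing platform:

# A hypothetical "convergent alert": the brief text burst carries
# machine-readable pathways to the richer, 'hot' media that today's
# audience must hunt down on its own.
news_alert = {
    "text": "Explosions reported near the Boston Marathon finish line.",
    "source": "The Washington Post",
    "live_video_url": "https://example.com/streams/breaking",  # placeholder URL
    "context_url": "https://example.com/stories/marathon",     # placeholder URL
}

def follow_alert(alert: dict) -> None:
    """Show the text burst, then offer instant migration to video."""
    print(alert["text"])
    if alert.get("live_video_url"):
        # In an integrated system, this step would open the stream in
        # place rather than sending the reader off to find a television.
        print("Watch live: " + alert["live_video_url"])

follow_alert(news_alert)

Under such a scheme, the alert and the video signal travel as one package, and the migration the audience now performs by hand collapses into a single gesture.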

With regard to breaking video-based news, the current cable news model—

CNN, Sky News, the BBC—is to introduce a live video signal but because the anchors

and reporters are witnessing things as the audience sees them, there is a glaring lack of

context and any effort made by the journalists to offer context often leads to errors.

Take for example the myth of the box cutters as a tool for hijacking the planes on

September 11th, 2001.

On the Pentagon plane, American Flight 77, Barbara Olson reported hijackers
carrying knives and box cutters but did not describe how they took the cockpit.

And on United Flight 93, passengers reported knives but also a hijacker
threatened to explode a bomb. The box cutter-knives story isn’t demonstrably
false, but it serves to divert attention from the other weapons and to mask the
fact that we don’t have any idea how the hijackings happened.31

This is the problem with live, breaking news coverage, and I’ve discovered that there

are some precautions the TV news community must follow; key among them is resisting the

propensity to advance the story narrative with sensational details even if the facts

don’t add up. In most cases, journalists who are first to report a story aren’t necessarily

the most accurate.

Instead, there is a pattern of clarity. After the news is broken, it goes through

an evolution of discovery where assumptions are replaced with facts. On September

11th, 2001, the death toll for the attacks was estimated near 5,000 before it was revised

downward (days later) and corrected to just under 3,000 people. This is a byproduct of

the competitive nature of cable news, which is still searching for a fair balance

between timeliness and accuracy. In this evolution of discovery, storytellers find

themselves in a place to relay sound facts and figures and combine them with human-

interest stories. Before cable news, newspapers usually had time enough for the dust to

settle around the news event and could publish fairly accurate information. But in the

age of instant news—which is guaranteed to us via the Internet and the web of cable

news operations—it is very easy for an excited newsroom to deliver inaccurate

information. My hope is that over the succeeding years, these institutions will mature

to a point where they recognize that news accuracy should trump velocity.

31 Plotz, David. "Six Myths about Sept. 11." Slate Magazine. September 10, 2003. Accessed September 12, 2017.
http://www.slate.com/articles/news_and_politics/hey_wait_a_minute/2003/09/what_you_think_you_know_about_sept_11_.html.

In the 20th century, the magazine industry perfected this process and, today,

The New Yorker, Rolling Stone, The Economist, The Atlantic Monthly, Vanity Fair and

Fortune magazine have all become leaders in this story form, one that offers

complexity, context and uniform accuracy. In the electronic formats, television news

magazines including Frontline and 60 Minutes have also embraced longer-form news

stories that are equally complex and contextual.

So, if video and print can offer the same story form but under divergent media,

there certainly is a place for converged storytelling, and “The Crossing,” “Snow Fall”

and “The Jockey” are clear experiments in that form. What’s missing from this

development is a modern-day “Master Printer,” or someone who understands the

dynamic of the multimedia storytelling components to orchestrate a final publishable

form.

I suspect that a generation of Executive Producers, or multimedia storytellers

channeling the role of the Digital ‘Master Printer,’ will emerge in the coming decades

to forge a converged form of the long-form nonfiction story. This ‘Master Producer’

will orchestrate the process: leading research; defining valuable information;

dispatching text, still and video producers out to accumulate data, which will

ultimately find its way to a packaging desk, where it will be commingled and moved

along to the delivery phase. In the delivery phase, the accumulated works will either

be sent out piecemeal or as a total form… a Gesamtkunstwerk… a tapestry of story,

real and complete, transcendent and total.
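
As a thought experiment, here is a minimal sketch in Python of that packaging desk. The StoryPackage and Asset structures are my own invention, offered only to suggest how a Master Producer's accumulated works might be commingled ahead of the delivery phase:

from dataclasses import dataclass, field

@dataclass
class Asset:
    """One unit of accumulated work: text, still, video or audio."""
    medium: str       # e.g. "text", "still", "video", "audio"
    producer: str     # who gathered the material
    content_ref: str  # pointer to the raw material

@dataclass
class StoryPackage:
    """The packaging desk's view of one converged, long-form story."""
    title: str
    executive_producer: str  # the digital 'Master Printer'
    assets: list = field(default_factory=list)

    def commingle(self) -> dict:
        """Group assets by medium, ready for piecemeal or total delivery."""
        layers = {}
        for asset in self.assets:
            layers.setdefault(asset.medium, []).append(asset.content_ref)
        return layers

package = StoryPackage("The Crossing", "Executive Producer")
package.assets.append(Asset("text", "writer", "draft-01"))
package.assets.append(Asset("video", "videographer", "clip-07"))
print(package.commingle())  # {'text': ['draft-01'], 'video': ['clip-07']}

Whether the final product goes out piecemeal or as a total form, the premise is the same: each medium becomes one addressable layer of a single story.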

The relevance in all of this is the fact that we’ve transformed all media into a

translatable electronic form, which can be cut and pasted, altered and sweetened, and

packaged. When it becomes time for the information to be delivered, the pathways of

delivery will evolve: for now, we are consuming multimodal media on smart phones

and tablet computers, but the pathway is moving closer to the senses—to the smart

watch on the wrist and soon the reflective facing surface of eyewear—which seem

like minor enhancements but actually abbreviate the gesture of understanding. With

enhanced eyewear, one doesn’t need to halt what one is doing to glance at a smart

phone to receive the information; this is the essence of augmented reality

communication. After enhanced eyewear, there is the potential that data will be

coming to us via contact lenses, which project the information right to the retina tissue

inside the eye. Then finally, in the last phase, that information might be infused

straight into the cranial realm and into the living tissues of the human brain, which, of

course, would ultimately alter the pace of human evolution.

Writing improved the human potential for the artifact of memory; printing expanded

the human potential for learning; radio and television extended the human potential for

community; computers expanded the human potential for complex research; digitized

media expanded the human potential for transmission and storage; neurological

enhancement may transform the human potential for wisdom.

One day, the act of learning may be as pure as an instantaneous injection or

exposure to digitized data infused directly into the human learning centers inside the

brain; this injection might be a singular event, or it might be a lifelong infusion of

data, a so-called ‘wisdom on demand.’

The Recipe for Digital Storytelling

In the chapter on multimedia, I presented my list for digital storytelling. The

list is a decade old but it still represents a rough sketch of the process for storytelling

in the digital age. As the technologies evolve, as the audience matures, as storytellers

adapt and as leaders emerge, I’d like to think that my list is a recipe for the future.

Here is the list again:

1. Digital Journalism integrates traditional media to tell one story
2. Technology is not journalism
3. Create digital content explicitly for that audience
4. Research, reporting, writing, editing, thinking: remain paramount
5. Accuracy, accuracy, accuracy
6. Let images be powerful; apply the same standard to writing
7. Be dynamic, be brief
8. Professional work must look professional
9. Understand each medium
10. Communicate with the audience 32

If I were going to add an eleventh rule, it might be to guarantee that these stories be

archived after they have been published, so they continue to exist in the public record.

I’d also like to think that, in time, production rituals would evolve and a new

catechism for storytelling will emerge that will ultimately develop into an aesthetic.

Over the centuries following the invention of the printing press, English

literature evolved into its own art form. For stories to be truly transcendent, they must

be artful because beauty has the ability to elevate the human spirit. The same should

be expected from the next epoch in storytelling. In the digital realm, there must be an

evolution of story borne from the tools and processes that one day establishes a sense

32 Scully, Michael. "10 Rules for Digital Journalism." Scribd. January 26, 2013. Accessed July 13, 2017. https://www.scribd.com/document/147681342/10-Rules-for-Digital-Journalism.

of story that transcends its moving parts to become something more… stories that are

rich, splendid, awe-inspiring and inspirational.

In the end, storytelling is really about escapism. As human animals, we have

always liked stories simply because they allowed us to be other people in distant

places; this idea is true for both fiction and nonfiction readers. Reading allows us to

dwell in a special place inside our minds, imagining ourselves living through the

actions of others; video too has moved to this place in our heads where we witness and

explore in a different, exciting and new sensory way. Given that these media now

thrive in a shared digital format, the collaboration or fusion of multiple media in one

place is a near certainty. Then finally, in the next incarnation, we may be able to

explore and feel the actual sensory experience of other human beings… dwelling in

their existence, cognitively feeling and seeing and being them in that instance of

human experience… this may be the future of storytelling.

Finally… returning again to the Library of Congress and to the two wooden

cases situated opposite each other. The Giant Bible of Mainz and the Gutenberg Bible

are curious artifacts for tourists who wander between the cases, dwelling in the

moment as they stand looking down through the glass at the pages of these

ancient volumes. With the manuscript, an unknown scribe spent months writing and

drawing and painting the final work; and Johannes Gutenberg spent countless hours

printing and re-pressing pages for his final collection of 180 pristine Bibles.

Imagine now—95 years into the future—a third case… the one hosting the multimedia

experience; to see it, all one needs to do is attach one’s consciousness to the digital

mechanism that transports humans to a place of shared experience. In this space, we

could be physically away—in Paramus, Peoria or Pismo Beach—but our sentient-self

would be there dwelling among the halls of the Library of Congress visiting the cases,

opening the displays and leafing through the pages… touching the leather binding,

smelling the inks and vellum and gazing down upon the gilded images of St. Paul and

St. Francis and St. John… and doing all of this from the vantage point of a six-year-

old during her first visit to Washington, DC back on a hot summer day… in 2028.

QED

Bibliography

"1911 Encyclopædia Britannica/Pamphlets." 1911 Encyclopædia


Britannica/Pamphlets - Wikisource, the Free Online Library. Accessed March
10, 2017.
https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Pamph
let.

Wiseman, Andreas. "Under The Skin: At Any Cost." Screen Daily. March 24, 2014.
Accessed March 10, 2017. http://www.screendaily.com/features/under-
the-skin-at-any-cost/5069904.article.

Abercrombie, Nicholas, and Brian Longhurst. Audiences: A Sociological Theory of


Performance and Imagination. London: Sage Publ., 2003.

Adams, Tim. "And the Pulitzer Goes To... a Computer." The Observer. June 28, 2015.
Accessed March 12, 2017.
https://www.theguardian.com/technology/2015/jun/28/computer-writing-
journalism-artificial-intelligence.

Adorno, Theodor W., and Max Horkheimer. Dialectic of Enlightenment. London:


Verso, 2016.

Alexander, Bryan. The New Digital Storytelling: Creating Narratives with New
Media. Santa Barbara, CA: Praeger, 2011.

Almén, Byron. A Theory of Musical Narrative. Bloomington: Indiana University


Press, 2017.

Andrae, Monika, and Chris Marquardt. The Film Photography Handbook. Santa
Barbara, CA: Rocky Nook, 2016.

Andrews, Alexander, ed. Newspaper Press. London, 1870.

"AOL Hikes Monthly Fee." CNNMoney. Accessed May 31, 2017.


http://money.cnn.com/1998/02/09/technology/aol/.

"App Stores: Number of Apps in Leading App Stores 2016." Statista. Accessed March
05, 2017. https://www.statista.com/statistics/276623/number-of-apps-
available-in-leading-app-stores/.

"Apple: IPad Sales 2010-2017." Statista. Accessed March 07, 2017.


https://www.statista.com/statistics/269915/global-apple-ipad-sales-since-q3-
2010/.

Auerswald, Philip E. The Code Economy: A Forty-thousand-year History. New York,
NY: Oxford University Press, 2017.

"AVCHD INFORMATION WEB SITE." AVCHD INFORMATION WEB SITE.


Accessed March 08, 2017. http://www.avchd-info.org/.

"Average Internet Connection Speed in the U.S. 2007-2016 | Statistic." Statista.


Accessed March 04, 2017. https://www.statista.com/statistics/616210/average-
internet-connection-speed-in-the-us/.

Bahl, I. J. Fundamentals of RF and Microwave Transistor Amplifiers. Hoboken, NJ:


Wiley, 2009.

Baig, Edward C., and Bob LeVitus. IPad for Dummies. Hoboken, NJ: John Wiley &
Sons, 2015.

Barbour, Ian G. Ethics in an Age of Technology. San Francisco, CA:


HarperSanFrancisco, 1993.

Barkin, Steve Michael. American Television News: The Media Marketplace and the
Public Interest. Armonk, NY: M.E. Sharpe, 2003.

Baron, Naomi S. Words Onscreen: The Fate of Reading in a Digital World. New
York: Oxford University Press, 2016.

Barone, Charles A. Radical Political Economy: A Concise Introduction. Armonk,


N.Y: Sharpe, 2004.

Barrat, James. Our Final Invention: Artificial Intelligence and the End of the Human
Era. New York, NY: Thomas Dunne Books, 2015.

Barthel, Michael. "Newspapers Fact Sheet." Pew Research Center's Journalism


Project. June 01, 2017. Accessed July 22, 2017.
http://www.journalism.org/fact-sheet/newspapers/.

Battelle, John. The Search: How Google and Its Rivals Rewrote the Rules of Business
and Transformed Our Culture. London: Brealey, 2008.

Baum, Eric B., Marcus Hutter, and Emanuel Kitzelmann. Artificial General
Intelligence: Proceedings of the Third Conference on Artificial General
Intelligence, AGI 2010, Lugano, Switzerland, March 5-8, 2010. Amsterdam:
Atlantis Press, 2010.

"BBC Bitesize - How Do Search Engines Work?" BBC News. Accessed March 11,
2017. http://www.bbc.co.uk/guides/ztbjq6f.

"BBC Radio 4 - In Our Time, Caxton and the Printing Press." BBC News. October 18,
2012. Accessed February 10, 2017.
http://www.bbc.co.uk/programmes/b01nbqz3.

Bearak, Barry. "The Jockey." The New York Times. August 13, 2013. Accessed
March 05, 2017. http://www.nytimes.com/projects/2013/the-
jockey/#/?chapt=introduction.

Benjamin, Walter, Hannah Arendt, and Harry Zohn. Illuminations. New York:
Harcourt, Brace & World, 1968.

Bernstein, Carl, and Bob Woodward. All the President's Men. New York: Simon &
Schuster Paperbacks, 2014.

Winfield, Betty Houchin, ed. Journalism 1908: Birth of a Profession. 2008.

Accessed June 09, 2017. https://www.amazon.com/Journalism-1908-Betty-Houchin-
Winfield/dp/0826218113.

Blodget, Henry. "Newspapers Are Losing $13 Of Print Revenue For Every $1 Of
Digital Revenue." Business Insider. December 03, 2012. Accessed July 22,
2017. http://www.businessinsider.com/newspapers-are-losing-13-of-print-
revenue-for-every-1-of-digital-revenue-2012-12.

Hartmans, Avery, and Julie Bort. "AOL and Yahoo Plan to Call Themselves by a New
Name after the Verizon Deal Closes: Oath." Business Insider. April 03, 2017.
Accessed May 31, 2017. http://www.businessinsider.com/aol-and-yahoo-will-
become-oath-after-merger-closes-2017-4.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford


University Press, 2016.

Van Riper, A. Bowdoin. A Biographical Encyclopedia of Scientists and Inventors in


American Film and Tv since 1930. Lanham, Md: Scarecrow Press, 2011.

Bradlee, Ben. A Good Life: Newspapering and Other Adventures. London:


Touchstone, 1997.

"Brady: GOP Plan Doesn't Force Americans to Buy Insurance." Fox News. Accessed
March 14, 2017. http://video.foxnews.com/v/5359032041001/#sp=show-clips.

Branch, John. Snow Fall: The Avalanche at Tunnel Creek. December 2012. Accessed
September 4, 2016. http://www.nytimes.com/projects/2012/snow-
fall/#/?part=tunnel-creek.

Braun, Marta. Eadweard Muybridge. London: Reaktion Books, 2012.

Bridger, Darren. Neuro Design: Neuromarketing Insights to Boost Engagement and


Profitability. London: KoganPage, 2017.

Briggs, Asa, and Peter Burke. A Social History of the Media: From Gutenberg to the
Internet. Cambridge, UK: Polity, 2014.

Brown, Hilda Meldrum. The Quest for the Gesamtkunstwerk and Richard Wagner.
Oxford: Oxford University Press, 2016.

Brown, Michelle. Understanding Illuminated Manuscripts: A Guide to Technical


Terms. Malibu, CA: J. Paul Getty Museum in Association with the British
Library, 1994.

Bryant, Darrol. "Kumbha Mela - The Largest Gathering of People on Earth." AJ –


Canada's Environmental Voice. December 21, 2016. Accessed February 26,
2017. http://www.alternativesjournal.ca/kumbha-mela-largest-gathering-
people-earth.

Bryant, J. Alison. Television and the American Family. New York: Routledge; Taylor
& Francis Group, 2008.

Burke, James, and Robert E. Ornstein. The Axemaker's Gift: A Double-edged History
of Human Culture. New York: Putnam, 1995.

Burns, Eric. Infamous Scribblers: The Founding Fathers and the Rowdy Beginnings of
American Journalism. New York: PublicAffairs, 2007.

Burns, Monica. "5 Reasons to Try IBooks Author." Edutopia. January 27, 2014.
Accessed March 08, 2017. https://www.edutopia.org/blog/5-reasons-try-
ibooks-author-monica-burns.

Calkins, Robert G. Monuments of Medieval Art. Ithaca, NY: Cornell

University Press. 211.

Cantril, Hadley. The Invasion from Mars; a Study in the Psychology of Panic. New
York: Harper and Row.

Capote, Truman. In Cold Blood: A True Account of a Multiple Murder and Its
Consequences. NY, NY: Modern Library, an Imprint of the Random House
Publishing Group, 2013.

Carlton, Jim. Apple, the inside Story of Intrigue, Egomania, and Business Blunders.
New York: HarperBusiness, 1998.

Carnoy, David. "Amazon's New Fire HD 8 Review." CNET. October 01, 2016.
Accessed March 07, 2017. https://www.cnet.com/products/amazon-fire-hd-8-
2016/review/.

""Casablanca" Plot Summary." IMDb. Accessed March 09, 2017.


http://www.imdb.com/title/tt0034583/plotsummary.

Catherine, and John Plummer. The Hours of Catherine of Cleves. New York: G.
Braziller, 1966.

Cavagna, Mattia, and Costantino Maeder. Philology and Performing Arts: A


Challenge. Louvain-La-Neuve: Presses Universitaires De Louvain, 2014.

Cavill, Paul, and Heather Ward. The Christian Tradition in English Literature: Poetry,
Plays, and Shorter Prose. Grand Rapids, Mich: Zondervan, 2007.

Cefrey, Holly. The Inventions of Alexander Graham Bell: The Telephone. New York:
PowerKids Press, 2003.

Ceruzzi, Paul E. Computing: A Concise History. Cambridge (Mass.): MIT Press,


2012.

Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass: MIT Press, 2003.

Clark, Liat, and Ian Steadman. "Turing's Achievements: Codebreaking, AI and the
Birth of Computer Science." WIRED UK. May 23, 2016. Accessed February
25, 2017. http://www.wired.co.uk/article/turing-contributions.

Clarke, Norma. Brothers of the Quill: Oliver Goldsmith in Grub Street. Cambridge,
MA: Harvard University Press, 2016.

Cohen, Patricia Cline. The Murder of Helen Jewett the Life and Death of a Prostitute
in Nineteenth-century New York. New York: Vintage Books, 1999.

Cole, John Young, Henry Hope. Reed, and Herbert Small. The Library of Congress:
The Art and Architecture of the Thomas Jefferson Building. New York:
Norton, 1997.

Connolly, Chris, Jenny Connell Davis, Jeremy Lubman, and Brian Wizermann,
writers. "Mind Meld." In Year Million, directed by Mark Elijah Rosenberg.
National Geographic Channel. 2017.

Coon, Dennis, and John O. Mitterer. Introduction to Psychology: Gateways to Mind


and Behavior. Boston, MA: Cengage Learning, 2016.

Copejec, Joan, and Joel Goldbach. Umbr(a) a Journal of the Unconscious 2012:
Technology. Buffalo: State University of New York at Buffalo, 2012.

Cuddy, Luke, and John Nordlinger. World of Warcraft and Philosophy: Wrath of the
Philosopher King. Chicago: Open Court, 2009.

Damrosch, Leopold. Jean-Jacques Rousseau: Restless Genius. Boston: H. Mifflin,


2007.

Danesi, Marcel. Popular Culture: Introductory Perspectives. Lanham: Rowman &


Littlefield, 2015.

Defoe, Daniel. The Storm. London: Penguin, 2005.

Dehaene, Stanislas. Reading in the Brain: The New Science of How We Read. New
York: Penguin Books, 2010.

DiGaetani, John Louis. Inside the Ring: Essays on Wagner's Opera Cycle. Jefferson,
NC: McFarland &, 2006.

Dignan, Larry. "Apple's App Store 2016 Revenue Tops $28 Billion Mark, Developers
Net $20 Billion." ZDNet. January 05, 2017. Accessed March 07, 2017.
http://www.zdnet.com/article/apples-app-store-2016-revenue-tops-28-billion-
mark-developers-net-20-billion/.

Dijck, Jose Van. The Culture of Connectivity: A Critical History of Social Media.
Oxford: Oxford University Press, 2013.

Dodd, Robin. From Gutenberg to Open Type: An Illustrated History of Type from the
Earliest Letterforms to the Latest Digital Fonts. Dublin: Hartley & Marks,
2006.

Doidge, Norman. The Brain That Changes Itself: Stories of Personal Triumph from
the Frontiers of Brain Science. New York: Viking, 2007. 209.

Douglas, George H. The Early Days of Radio Broadcasting. Jefferson, NC:


McFarland, 2001.

Doyle, Gillian. Understanding Media Economics. London: SAGE, 2013.

Duff, E. Gordon. William Caxton. New York: B. Franklin, 1970.

Duggan, Maeve. "The Demographics of Social Media Users." Pew Research Center:
Internet, Science & Tech. August 19, 2015. Accessed May 31, 2017.
http://www.pewinternet.org/2015/08/19/the-demographics-of-social-media-
users/.

Dvorak, John C. "Apple's Good for Nothing IPad." PCMAG. February 02, 2010.
Accessed March 07, 2017.
http://www.pcmag.com/article2/0,2817,2358684,00.asp.

Eagleton, Terry. Literary Theory: An Introduction. Minneapolis: University of


Minnesota Press, 1983.

Edgerton, Gary R. The Columbia History of American Television. New York, NY:
Columbia Univ. Press, 2009.

International Engineering Consortium, ed. "Carrier IP Telephony 2000." Alibris.


Accessed March 04, 2017. http://www.alibris.com/Carrier-IP-Telephony-
2000/book/10367423.

Edwards, Mike. Key Ideas in Media. Cheltenham: Nelson Thornes (Publishers), 2003.

Eidenmuller, Michael E. "American Rhetoric: Newton Minow -- Address to the


National Association of Broadcasters (Television and the Public Interest)."
American Rhetoric: Newton Minow -- Address to the National Association of
Broadcasters (Television and the Public Interest). Accessed March 14, 2017.
http://www.americanrhetoric.com/speeches/newtonminow.htm.

Eisenstein, Elizabeth L. Printing as Divine Art: Celebrating Western Technology in


the Age of the Hand Press. Oberlin, OH: Oberlin College, 1996.

Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge:


Cambridge University Press, 1983.

Elements of Telegraph Operating; Telegraphy (parts 1-3). Scranton: International


Textbook Company, 1906.

Ellis, Sue. Applied Linguistics and Primary School Teaching. Cambridge: Cambridge
UP, 2014.

Ellul, Jacques. The Technological Society. New York: Knopf, 1964.

Elmer, Greg. Critical Perspectives on the Internet. Lanham: Rowman & Littlefield
Publishers, 2002.
"Entered at Stationers' Hall". A Sketch of the History and Privileges of the Company
of Stationers. With Notes, Etc. London, 1871.

Epstein, Adam. ""Rogue One" Features a Computer-generated Character More


Controversial than Jar Jar Binks." Quartz. December 20, 2016. Accessed
March 12, 2017. https://qz.com/868278/rogue-one-a-star-wars-story-features-
a-controversial-cg-peter-cushing/.

Figueira, Servulo A., Peter Fonagy, and Ethel Spector Person, eds. On Freud's
"Creative Writers and Day-Dreaming". London: Karnac Books, 2013.

Fine, Richard. James M. Cain and the American Authors' Authority. Austin:
University of Texas Press, 1992.

"The Firestorm After the "Snow Fall" — The Content Strategist." The Content
Strategist. February 29, 2016. Accessed March 04, 2017.
https://contently.com/strategist/2013/06/09/the-firestorm-after-the-snow-fall/.

Fishburn, M. Burning Books. Place of Publication Not Identified: Palgrave Macmillan,


2014.

"Flexible OLED: Samsung Already Defending an Unassailable Lead." Android


Authority. February 21, 2017. Accessed March 14, 2017.
http://www.androidauthority.com/samsung-flexible-oled-production-751220/.

Fourie, Pieter J. Media Studies: Media History, Media and Society.

Fourie, Pieter J. Media Studies: Content, Audiences and Production. Lansdowne: Juta,
2001.

Frase, Peter. "Four Futures." Jacobin Magazine. Accessed September 15, 2017.
https://www.jacobinmag.com/2011/12/four-futures.

Frase, Peter. Four Futures: Life After Capitalism. London: Verso, 2015.

Freeman, Janet Ing. Johann Gutenberg and His Bible: A Historical Study. New York:
Typophiles, 1988.

Friedman, Avner, and David S. Ross. Mathematical Models in Photographic Science.


Berlin: Springer, 2003.

Friedman, Lex. "Review: IBooks 2 for IOS." Macworld. January 24, 2012. Accessed
February 05, 2017.
http://www.macworld.com/article/1164950/review_ibooks_2_for_ios.html.

Fuller, R. Buckminster. Inventors and Inventions. New York: Marshall Cavendish,


2008.

Gardham, Julie. Ingenious Impressions: Fifteenth-century Printed Books from the


University of Glasgow Library.

The Giant Bible of Mainz. Performed by Daniel DeSimone, Mark Dimunation.
Washington, DC: Library of Congress, 2006. Accessed February 7, 2017.
https://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=4249.

Gibbs, Samuel. "Apple Co-founder Steve Wozniak Says Humans Will Be Robots'
Pets." The Guardian. June 25, 2015. Accessed March 12, 2017.
https://www.theguardian.com/technology/2015/jun/25/apple-co-founder-steve-
wozniak-says-humans-will-be-robots-pets.

Giles, David. Media Psychology. New York: Routledge, 2009.

Goodman, David. Radio's Civic Ambition: American Broadcasting and Democracy in


the 1930s. New York: Oxford University Press, 2011.

Goris151515. "John Anderton Usa Today." YouTube. March 22, 2009. Accessed
March 14, 2017. https://www.youtube.com/watch?v=jLEeDla2u40.

Grabowicz, Paul, Richard Hernandez, and Jeremy Rue. "Taxonomy of Digital Story
Packages." Berkeley Advanced Media Institute. July 27, 2015. Accessed
March 10, 2017.
https://multimedia.journalism.berkeley.edu/tutorials/taxonomy-digital-story-
packages/.

Graham Shedden, Ashley Milne - The Big Picture. "Scottish Screen - Digital Media IP
Fund." Scottish Screen - Digital Media IP Fund. Accessed March 10, 2017.
http://www.scottishscreen.com/content/sub_page.php?sub_id=207&page_id=1
9.

Gustafson, Aaron. "Adaptive Web Design: Crafting Rich Experiences with


Progressive Enhancement." Accessed March 05, 2017.
https://adaptivewebdesign.info/1st-edition/read/.

Hadas, Moses. Ancilla to Classical Reading. Pleasantville, NY: Akadine Press, 1999.

Halligan, Benjamin. Arena Concert: Music, Media and Mass Entertainment.


Bloomsbury Publishing Plc, 2016.

Hamel, Christopher De. A History of Illuminated Manuscript. Oxford: Phaidon, 1986.

Hardt, Michael, and Antonio Negri. Commonwealth. Place of Publication Not


Identified: Gallimard, 2014.

Harmon, William. A Handbook to Literature. Boston, MA: Longman, 2012.

Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT-
Press, 1996.

Haynes, Norris. Group Dynamics Basics and Pragmatics for Practitioners. Lanham:
University Press of America, 2012.

Headrick, Daniel R. Technology: A World History. La Vergne, TN: Oxford University


Press, 2010.

Heidegger, Martin. Being and Time. London: SCM Press, 1962.

Heil, Alan L. Voice of America a History. New York: Columbia University Press,
2003.

Heisler, Yoni. "AOL’s Fall from Grace, by the Numbers." BGR. May 13, 2015.
Accessed May 31, 2017. http://bgr.com/2015/05/13/aol-decline-numbers-fall-
from-grace/.

Henley, Jon, Laurence Topham, Guardian Interactive Team, Mustafa Khalili, and
Francesca Panetta. "Firestorm: The Story of the Bushfire at Dunalley." The
Guardian. May 26, 2013. Accessed March 05, 2017.
https://www.theguardian.com/world/interactive/2013/may/26/firestorm-
bushfire-dunalley-holmes-family.

Hernandez, Richard Koci, and Jeremy Rue. The Principles of Multimedia Journalism
Packaging Digital News. New York: Routledge, Taylor & Francis Group,
2016.

Holm-Hudson, Kevin. Genesis and The Lamb Lies down on Broadway. London:
Routledge, 2016.

Hong, Sangook. "Wireless: From Marconi's Black-Box to the


Audion." Science Technology, no. & Human Values, Vol. 28, No. 1 (January
01, 2003): 176-80.

"How Do Web Browsers Work and How Are Web Pages Displayed?" How Do Web
Browsers Work and Display a Web Page? Accessed March 11, 2017.
http://www.webdevelopersnotes.com/how-do-web-browser-work.

"How We Made Snow Fall." Source: An OpenNews Project. Accessed March 05,
2017. https://source.opennews.org/articles/how-we-made-snow-fall/.

Hudson, Frederic. Journalism in the United States, from 1690 to 1872. New York:
Harper, 1873.

Hume, Janice. Popular Media and the American Revolution: Shaping Collective
Memory. New York: Routledge, 2014.

Huzaifahhb74. "Google Glasses Project." YouTube. May 07, 2012. Accessed April
21, 2017. https://www.youtube.com/watch?v=JSnB06um5r4.

Idea Wars and the Birth of Printing. Directed by Marc Jampolsky. XiveTV, 2016.
Amazon Prime.
Imhoof, David Michael, Anthony J. Steinhoff, and Margaret Eleanor. Menninger. The
Total Work of Art: Foundations, Articulations, Inspirations. New York:
Berghahn, 2016.

Inc., Apple. "Resources - IOS - Apple Developer." Resources - IOS - Apple


Developer. Accessed March 07, 2017.
https://developer.apple.com/ios/resources/.

"Incunabula Short Title Catalogue." The British Library. February 04, 2015. Accessed
February 08, 2017. http://www.bl.uk/catalogues/istc/.

Ingraham, Nathan. "First IPad-only Newspaper 'The Daily' Shutting down on


December 15th (update)." The Verge. December 03, 2012. Accessed March
08, 2017. http://www.theverge.com/2012/12/3/3721544/the-daily-ipad-news-
mag-shutdown-december-15th.

Innis, Harold Adams. The Bias of Communication. Toronto: University of Toronto


Press, 1951.

Innis, Harold Adams, William Buxton, Michael R. Cheney, and Paul Heyer. Harold
Innis's History of Communications. Lanham: Rowman & Littlefield, 2015.

"IPod Shuffle." Apple. Accessed May 25, 2017. https://www.apple.com/shop/buy-


ipod/ipod-shuffle?afid=p238%7Cs3RnWdLgP-
dc_mtid_1870765e38482_pcrid_164139570870_&cid=aos-us-kwgo-ipod--
slid--product-.

Jackson, Donald. The Story of Writing.

Jameson, Fredric. Postmodernism, Or, the Cultural Logic of Late Capitalism. London:
Verso, 1991.

MarketingCharts Staff. "The State of Traditional TV: Updated
With Q3 2016 Data." MarketingCharts. January 11, 2017. Accessed February
25, 2017. http://www.marketingcharts.com/television/are-young-people-
watching-less-tv-24817/.

Jarenski, Shelly. Immersive Words: Mass Media, Visuality, and American Literature,
1839 - 1893. Tuscaloosa: Univ. of Alabama Press, 2015.

Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York:
New York University Press, 2006.

Jerald, Jason. The VR Book Human-centered Design for Virtual Reality. San Rafael:
Morgan & Claypool, 2016.

Reimer, Jeremy. "Total Share: 30 Years of Personal
Computer Market Share Figures." Ars Technica. December 15, 2005.
Accessed March 07, 2017. https://arstechnica.com/features/2005/12/total-
share/7/.

Johnson, L. R. "Coming to Grips with Univac." IEEE Annals of the History of
Computing 28, no. 2 (2006): 32-42. doi:10.1109/mahc.2006.27.

Johnson, Steven. Everything Bad Is Good for You: How Popular Culture Is Making
Us Smarter. London: Penguin Books, 2006.

Johnson, Steven. Everything Bad Is Good for You: How Today's Popular Culture Is
Actually Making Us Smarter. New York: Riverhead Books, 2005.

Jones, Carys Wyn. The Rock Canon: Canonical Values in the Reception of Rock
Albums. Aldershot: Ashgate, 2009.

Jonson, Ben. "Song: To Celia [“Drink to Me Only with Thine Eyes”]." Poetry
Foundation. Accessed March 13, 2017.
https://www.poetryfoundation.org/poems-and-poets/poems/detail/44464.

Kapr, Albert, and Douglas Martin. Johann Gutenberg: The Man and His Invention.
Aldershot, England: Scolar Press, 1996.

Kearney, Richard. On Stories. London: Routledge, 2009.

Kennedy, Ian. "Francis Ford Coppola on the Amateur." Everwas. February 24, 2015.
Accessed May 01, 2017. http://everwas.com/2015/02/francis-ford-coppola-on-
the-amateur/.

Kerrane, Kevin, and Ben Yagoda. The Art of Fact: A Historical Anthology of Literary
Journalism. New York: Simon and Schuster, 1998.

King, Elliot. Key Readings in Journalism. New York, London: Routledge, 2012.

King, J. J. "The New Incunabula." Third Text 21, no. 5 (2007): 599-602.

King, Stephen. On Writing: A Memoir of the Craft. New York: Scribner, 2000.

Klein, Alec. Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time
Warner. New York: Simon & Schuster Paperbacks, 2004.

Klein, Ezra. "The Future of Reading." Columbia Journalism Review. 2008. Accessed
March 07, 2017. http://archives.cjr.org/cover_story/the_future_of_reading.php.

Kleiner, Fred S., and Helen Gardner. Gardner's Art through the Ages: The Western
Perspective. Belmont, CA: Wadsworth, 2013.

Koerner, Joseph Leo. The Moment of Self-portraiture in German Renaissance Art.


Chicago: University of Chicago Press, 1996.

Komlos, John. "Has Creative Destruction Become More Destructive." NBER Working
Paper Series, August 2014, 3-5.

Kuniavsky, Mike. Smart Things: Ubiquitous Computing User Experience Design.


Amsterdam: Elsevier, 2010.

Kurlansky, Mark. Paper: Paging through History.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. London:
Duckworth, 2016.

LaFrance, Adrienne. "Raiders of the Lost Web." The Atlantic. October 14, 2015.
Accessed March 04, 2017.
https://www.theatlantic.com/technology/archive/2015/10/raiders-of-the-lost-
web/409210/.

Lane, Frederick S. American Privacy: The 400-year History of Our Most Contested
Right. Boston: Beacon, 2011.

"Learning from A.I. Duet." Learning from A.I. Duet. February 16, 2017. Accessed
March 12, 2017. https://magenta.tensorflow.org/2017/02/16/ai-duet/.

Lehdonvirta, Vili, and Edward Castronova. Virtual Economies: Design and Analysis.
Cambridge, MA: MIT Press, 2014.

Leonard-Stuart, Charles, and George J. Hagar. People's Cyclopedia. New York:


Syndicate Publishing Company, 1914.

Lessig, Lawrence. Free Culture: The Nature and Future of Creativity. New York,
N.Y.: Penguin, 2005.

Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy.
New York, NY: Penguin Books, 2009.

Levy, Steven. The Perfect Thing. London: Ebury, 2006.

Lewis, T. ""A Godlike Presence": The Impact of Radio on the 1920s and 1930s." OAH
Magazine of History 6, no. 4 (1992): 26-33. doi:10.1093/maghis/6.4.26.

Linzmayer, Owen W. Apple Confidential 2.0: The Definitive History of the World's
Most Colorful Company. San Francisco, CA: No Starch Press, 2008.

"Literacy." Our World In Data. Accessed February 08, 2017.


https://ourworldindata.org/literacy/.

LoBrutto, Vincent. Becoming Film Literate: The Art and Craft of Motion Pictures.
Westport Conn.: Praeger, 2005.

Logan, Peter Melville, Olakunle George, Susan Hegeman, and Efrain Kristal. The
Encyclopedia of the Novel. Malden, MA: Wiley-Blackwell, 2011.

Logan, Robert K. Understanding New Media: Extending Marshall McLuhan. New


York: Peter Lang, 2016.

Lohr, Steve. "In Case You Wondered, a Real Human Wrote This Column." The New
York Times. September 10, 2011. Accessed March 12, 2017.
http://www.nytimes.com/2011/09/11/business/computer-generated-articles-
are-gaining-traction.html.

Lopez, Linette. "Steve Mnuchin Tried to Bury a Number That Tells You Whom
Trump's Tax Plan Is Really for." Business Insider. September 29, 2017.
Accessed September 29, 2017. http://www.businessinsider.com/mnuchin-
buries-research-paper-on-corporate-tax-2017-9.

MacGregor, Neil. A History of the World in 100 Objects. London: Penguin Books,
2012.

McLuhan, Eric, Marshall McLuhan, and Frank Zingrone. Essential McLuhan.


London: Routledge, 2006.

"Magenta." Magenta. Accessed March 12, 2017. https://magenta.tensorflow.org/.

Magoun, Alexander B. Television the Life Story of a Technology. Baltimore: Hopkins


Univ. Press, 2009.

Manjoo, Farhad. "“Snow Fall,” “The Jockey,” and the Scourge of Bell-and-Whistle-
Laden Storytelling." Slate Magazine. August 15, 2013. Accessed March 05,

2017.
http://www.slate.com/articles/technology/technology/2013/08/snow_fall_the_j
ockey_the_scourge_of_the_new_york_times_bell_and_whistle.html.

Marcovici, Michael. The Wealthy Blogger. Books on Demand, 2014.

Martin, Chuck. The Third Screen: The Ultimate Guide to Mobile Marketing. Boston:
Brealey, 2015.

Mateas, Michael, and Phoebe Sengers. Narrative Intelligence. Amsterdam: J.


Benjamins Pub., 2003.

Matsa, Katerina Eva. "Local TV News Fact Sheet." Pew Research Center's Journalism
Project. July 13, 2017. Accessed July 22, 2017.
http://www.journalism.org/fact-sheet/local-tv-news/.

Maxwell, Mary. Human Evolution: A Philosophical Anthropology. London: Croom


Helm, 1984.

McClellan, James E., and Harold Dorn. Science and Technology in World History: An
Introduction. Baltimore: Johns Hopkins University Press, 2006.

McFarland, Matt. "Analysis | Google’s Computers Are Creating Songs. Making Music
May Never Be the Same." The Washington Post. June 06, 2016. Accessed
March 12, 2017.
https://www.washingtonpost.com/news/innovations/wp/2016/06/06/googles-
computers-are-creating-songs-making-music-may-never-be-the-
same/?utm_term=.b92286993252.

McIntyre, Hugh. "50 Years Later, The Beatles Are Back At No. 1 With 'Sgt. Pepper's
Lonely Hearts Club Band'." Forbes. June 03, 2017. Accessed June 03, 2017.
https://www.forbes.com/sites/hughmcintyre/2017/06/03/50-years-later-the-
beatles-are-back-at-no-1-with-sgt-peppers-lonely-hearts-club-
band/#4043903c7029.

McKenzie, Jai. Light and Photomedia: A New History and Future of the Photographic
Image. London: I.B. Tauris, 2013.

McKitterick, David. The Cambridge History of the Book in Britain. Cambridge:


Cambridge University Press, 2014.

McLean, Ruari. The Thames and Hudson Manual of Typography. London: Thames
and Hudson, 1988.

McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man.


Toronto: University of Toronto Press, 1962.

McLuhan, Marshall. Understanding Media: The Extensions of Man.

Meggs, Philip B., and Alston W. Purvis. Meggs' History of Graphic Design. Hoboken,
NJ: J. Wiley & Sons, 2006.

Merritt, Bob. "The Digital Revolution." Synthesis Lectures on Emerging Engineering


Technologies 2, no. 4 (2016): 1-109.
doi:10.2200/s00697ed1v01y201601eet005.

Metz, Cade. "Google’s AI Wins Fifth And Final Game Against Go Genius Lee
Sedol." Wired. March 15, 2016. Accessed March 14, 2017.
https://www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-
lee-sedol/.

"Mexican Lawmaker Says He Scaled Border Fence - CNN Video." CNN. Accessed
March 14, 2017. http://www.cnn.com/videos/world/2017/03/03/mexican-
lawmaker-scales-border-fence-sje-orig.cnn/video/playlists/donald-trump-
immigration/.

Meyrowitz, Joshua. No Sense of Place: The Impact of Electronic Media on Social


Behavior. New York: Oxford University Press, 1985.

Mezrich, Ben. The Accidental Billionaires: The Founding of Facebook, a Tale of Sex,
Money, Genius and Betrayal. Bridgewater, NJ: Distributed by Paw
Prints/Baker & Taylor, 2011.

Mitchell, Amy, and Jesse Holcomb. "State of the News Media 2016." Pew Research
Center's Journalism Project. June 15, 2016. Accessed July 23, 2017.
http://www.journalism.org/2016/06/15/state-of-the-news-media-
2016/?utm_content=buffer6871f&utm_medium=social&utm_source=twitter.c
om&utm_campaign=buffer.

Mitchell, Jack W. Listener Supported the Culture and History of Public Radio.
Westport (Conn.): Praeger, 2005.

Monro, Alexander. The Paper Trail: An Unexpected History of a Revolutionary


Invention. New York: Alfred A. Knopf, 2016.

More, Paul Elmer. Benjamin Franklin. Place of Publication Not Identified: Nabu
Press, 2010.

Mouly, Françoise, and Mina Kaneko. "Cover Story: Christoph Niemann's Rainy Day."
The New Yorker. February 09, 2015. Accessed March 12, 2017.
http://www.newyorker.com/culture/culture-desk/cover-story-2014-10-06.

Mullan, John. "LRB · John Mullan · High-Meriting, Low-Descended: The Unpolished
Pamela." London Review of Books. December 11, 2002. Accessed July 18,
2017. https://www.lrb.co.uk/v24/n24/john-mullan/high-meriting-low-
descended.

Mullett, Michael A. Martin Luther. London: Routledge, 2015.

Munk, Nina. Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time
Warner. New York: HarperCollins, 2005.

Nathan, John. Sony: The Extraordinary Story behind the People and the Products.
London: HarperCollinsBusiness, 1999.

CBS News. "45 Years Ago: First Message Sent over the Internet." CBS News.
October 29, 2014. Accessed December 19, 2017.
https://www.cbsnews.com/news/first-message-sent-over-the-internet-45-years-
ago/.

North, S. N. D. History and Present Condition of the Newspaper and Periodical Press
of the United States with a Catalogue of the Publications of the Census Year.
Washington: Government Printing Office, 1884.

O'Falt, Chris. "Robin Wright Digitally Preserved in Trippy New Film." The
Hollywood Reporter. July 16, 2014. Accessed March 12, 2017.
http://www.hollywoodreporter.com/news/robin-wright-digitally-preserved-
trippy-718682.

Ong, Walter J. Interfaces of the Word: Studies in the Evolution of Consciousness and
Culture. Ithaca: Cornell University Press, 1977.

Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London:
Methuen, 1982.

Wilson, Edward O., and others. "E. O. Wilson's Life on Earth Unit 1 by Edward O.
Wilson, Morgan Ryan & Gaël McGill on IBooks." IBooks. July 08, 2014.
Accessed February 03, 2017. https://itunes.apple.com/us/book/e.-o.-wilsons-
life-on-earth/id888107968?mt=13.

"Overwhelmed America: Why Don't We Use Our Paid Time Off?" Project: Time Off.
June 28, 2016. Accessed September 26, 2017.
https://www.projecttimeoff.com/research/overwhelmed-america.

Paehlke, R. Hegemony and Global Citizenship: Transitional Governance for the 21st
Century. S.l.: Palgrave Macmillan, 2016.

Paine, Thomas, and Edward Larkin. Common Sense. Peterborough, Ont.: Broadview
Press., 2004.

Pavlik, John V. Masterful Stories: Lessons from Golden Age Radio. New York, NY:
Routledge, 2017.

Peter Ackroyd's London. Performed by Peter Ackroyd. Peter Ackroyd's London Part
1: Fire and Destiny. May 27, 2014. Accessed April 2, 2017.
https://www.youtube.com/watch?v=wEKQb6IDO0Q.

Peyser, Joan. The Orchestra: A Collection of 23 Essays on Its Origins and


Transformations. Milwaukee, WI: Hal Leonard, 2006.

Plotz, David. "Six Myths about Sept. 11." Slate Magazine. September 10, 2003.
Accessed September 12, 2017.
http://www.slate.com/articles/news_and_politics/hey_wait_a_minute/2003/09/
what_you_think_you_know_about_sept_11_.html.

Pogue, David. "The IPhone Matches Most of Its Hype." The New York Times. June
26, 2007. Accessed February 26, 2017.
http://www.nytimes.com/2007/06/27/technology/circuits/27pogue.html.

Polgreen, Erin. "Virtual Reality Is Journalism's Next Frontier." Columbia Journalism
Review. November 19, 2014. Accessed April 22, 2017.
https://www.cjr.org/innovations/virtual_reality_journalism.php.

Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show
Business. New York: Viking, 1985.

Postman, Neil. Technopoly: The Surrender of Culture to Technology. New York:
Knopf, 1992.

"Princeton University Digital Library -- Item Overview." Princeton University.


Accessed March 12, 2017. http://pudl.princeton.edu/objects/7d278t10z.

"Princeton University Digital Library -- Item Overview." Princeton University.


Accessed September 03, 2016. http://pudl.princeton.edu/objects/7d278t10z.

"Princeton University Digital Library." Princeton University. Accessed September 03,


2016.
http://pudl.princeton.edu/viewer.php?obj=7d278t10z&vol=phys1#page/8/mod
e/2up.

The Pulitzer Prizes. Accessed March 04, 2017. http://www.pulitzer.org/prize-winners-
by-year/2013.

"Q. and A.: The Avalanche at Tunnel Creek." The New York Times. December 21,
2012. Accessed March 05, 2017.
http://www.nytimes.com/2012/12/22/sports/q-a-the-avalanche-at-tunnel-
creek.html?_r=0.

"Quotes from The Matrix." Matrix Wiki. Accessed April 21, 2017.
http://matrix.wikia.com/wiki/Quotes_from_The_Matrix.

Rainey, James. "‘Increasingly Dire’ Film Industry Has Fewer Winning Films, Studios
(Analyst)." Variety. March 04, 2016. Accessed March 09, 2017.
http://variety.com/2016/film/news/hollywood-dire-outlook-tentpoles-
1201722775/.

Ramos, Dino-Ray. "Jodie Foster Slams Superhero Movies, Compares Studios' 'Bad
Content' to Fracking." Deadline. January 02, 2018. Accessed January 24,
2018. http://deadline.com/2018/01/jodie-foster-black-mirror-superhero-
movies-marvel-studios-dc-1202234126/.

Rebel, Ernst, Andrew Robison, Klaus Albrecht Schröder, and Albrecht
Dürer. Albrecht Dürer: Master Drawings, Watercolors, and Prints from the
Albertina. Washington: National Gallery of Art, 2013.

Reilly, Edwin D. Milestones in Computer Science and Information Technology.
Westport, Conn.: Greenwood Press, 2003.

Ridder-Symoens, Hilde De, and Walter Rüegg. A History of the University in Europe.
Cambridge: Cambridge University Press, 1992.

Riordan, Michael, and Lillian Hoddeson. Crystal Fire: The Invention of the Transistor
and the Birth of the Information Age. New York: W.W. Norton, 1999.

Roberts, David. The Total Work of Art in European Modernism. Ithaca, N.Y: Cornell
University Press, 2011.

Rojas, Raul, and Ulf Hashagen. The First Computers: History and Architectures.
Cambridge, MA: MIT Press, 2002.

Rowles, Daniel. Building Digital Culture: A Practical Guide to Business Success in a
Constantly Changing World. N.p.: Kogan Page Stylus, 2017.

Rushkoff, Douglas. "Signs of the times." The Guardian. July 25, 2002. Accessed June
01, 2017.
https://www.theguardian.com/technology/2002/jul/25/onlinesupplement.newm
edia.

Salter, Chris. Entangled: Technology and the Transformation of Performance.
Cambridge, MA: MIT Press, 2010.

Scherer, Marge. Challenging the Whole Child: Reflections on Best Practices in
Learning, Teaching, and Leadership. Alexandria: Association for
Supervision and Curriculum Development, 2009.

Schiffer, Michael B. The Portable Radio in American Life. Tucson: University of
Arizona Press, 1991.

Schiffrin, Anya. Global Muckraking: 100 Years of Investigative Journalism from
around the World. New York: Perseus Distribution Services, 2014.

Schill, Dan, Rita Kirk, and Amy E. Jasperson. Political Communication in Real Time:
Theoretical and Applied Research Approaches. New York, NY: Routledge,
2017.

Schumpeter, Joseph A. Capitalism, Socialism, and Democracy. Mansfield Centre, CT:
Martino, 2011.

Schwartz, A. Brad. Broadcast Hysteria: Orson Welles's War of the Worlds and the Art
of Fake News. N.p.: Hill & Wang, 2016.

Schwartz, Evan I. The Last Lone Inventor: A Tale of Genius, Deceit, and the Birth of
Television. New York: Perennial, 2003.

Scully, Michael. "10 Rules for Digital Journalism." Scribd. January 26, 2013.
Accessed July 13, 2017. https://www.scribd.com/document/147681342/10-
Rules-for-Digital-Journalism.

"The Second Life Economy in Q3 2010." SecondLife Community. October 28, 2010.
Accessed June 26, 2017. https://community.secondlife.com/blogs/entry/46-the-
second-life-economy-in-q3-2010/.

"The Secret to Rolling Stone's Success." Columbia Journalism Review. Accessed


March 12, 2017.
http://archives.cjr.org/behind_the_news/the_secret_to_rolling_stones_s.php.

Shakespeare, William, and Mark McMurray. The Tempest. Canton, NY: Caliban
Press, 2001.

Shakespeare, William. Richard II. Hamburg, Germany: Tredition GmbH, 2015.

Shenefelt, Michael, and Heidi White. If A, Then B: How the World Discovered Logic.
New York: Columbia University Press, 2013.

Sheppard, Si. The Partisan Press: A History of Media Bias in the United States.
Jefferson, NC: McFarland & Company, 2007.

Simone, Daniel De. A Heavenly Craft: The Woodcut in Early Printed Books:
Illustrated Books Purchased by Lessing J. Rosenwald at the Sale of the Library
of C.W. Dyson Perrins. New York: George Braziller, 2004.

Slingerland, Janet. Nanotechnology. Minneapolis, MN: Essential Library, an Imprint
of Abdo Publishing, 2016.

Smith, Dave. "GOOGLE CHAIRMAN: 'The Internet Will Disappear'." Business
Insider. January 25, 2015. Accessed March 05, 2017.
http://www2.businessinsider.com/google-chief-eric-schmidt-the-internet-will-
disappear-2015-1.

Smith, Jeffery Alan. Printers and Press Freedom: The Ideology of Early American
Journalism. New York: Oxford University Press, 1990.

Smith, William Anton. The Reading Process. New York: Macmillan, 1923.

Standage, Tom. The Victorian Internet. London: Weidenfeld & Nicolson, 1998.

Statt, Nick. "Elon Musk Launches Neuralink, a Venture to Merge the Human Brain
with AI." The Verge. March 27, 2017. Accessed April 21, 2017.
http://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-
computer-interface-ai-cyborgs.

Steinberg, S. H., and John Trevitt. Five Hundred Years of Printing. London: British
Library, 1996.

Swedin, Eric Gottfrid, and David L. Ferro. Computers: The Life Story of a
Technology. Baltimore: Johns Hopkins University Press, 2007.

Imaging Resource Team. "Canon 7D Review: Full Review." Imaging Resource.
December 16, 2016. Accessed March 08, 2017. http://www.imaging-
resource.com/PRODS/E7D/E7DA.HTM.

Tennis, Cary. "Tom Wolfe." Salon. February 1, 2000. Accessed April 19, 2017.
http://www.salon.com/2000/02/01/wolfe_3/.

Thompson, Anne. "Why Jonathan Glazer’s ‘Under the Skin’ Took a Decade to Make
(VIDEOS)." IndieWire. October 23, 2014. Accessed March 10, 2017.
http://www.indiewire.com/2014/10/why-jonathan-glazers-under-the-skin-took-
a-decade-to-make-videos-190464/.

"Top 20 Facebook Statistics - Updated May 2017." Zephoria Inc. May 08, 2017.
Accessed May 31, 2017. https://zephoria.com/top-15-valuable-facebook-
statistics/.

Treglown, Jeremy, and Bridget Bennett. Grub Street and the Ivory Tower: Literary
Journalism and Literary Scholarship from Fielding to the Internet. Oxford:
Clarendon Press, 1998.

"Under the Skin (2014) - Financial Information." The Numbers. Accessed March 10,
2017. http://www.the-numbers.com/movie/Under-the-Skin#tab=summary.

"Updates to LindeX and Credit Processing Fees." SecondLife Community. June 13,
2017. Accessed June 26, 2017.
https://community.secondlife.com/blogs/entry/2187-updates-to-lindex-and-
credit-processing-fees/.

Urry, John, and Jonas Larsen. The Tourist Gaze 3.0. London: SAGE Publications,
2011.

Vaughan, Kevin. "The Crossing Story." The Crossing Story. 2007. Accessed March
04, 2017. http://thecrossingstory.com/chapters/1.html.

Virilio, Paul, and Michael Degener. Negative Horizon: An Essay in Dromoscopy.
London: Continuum, 2008.

Wagner, Bettina, and Marcia Reed. Early Printed Books as Material Objects:
Proceedings of the Conference Organized by the IFLA Rare Books and
Manuscripts Section, Munich, 19-21 August 2009. Berlin: De Gruyter Saur,
2010.

Walker, Greg. Writing under Tyranny: English Literature and the Henrician
Reformation. Oxford: Oxford University Press, 2005.

Walker, Rob. "The Guts of a New Machine." The New York Times. November 29,
2003. Accessed February 26, 2017.
http://www.nytimes.com/2003/11/30/magazine/the-guts-of-a-new-
machine.html.

Webster, Andrew. "The Daily Reportedly Put 'on Watch' as News Corp. Looks to Cut
Costs." The Verge. July 12, 2012. Accessed March 08, 2017.
http://www.theverge.com/2012/7/12/3155678/the-daily-on-watch-cost-cutting.

"Facebook Tops 1.9 Billion Monthly Users." CNNMoney. Accessed May 31, 2017.
http://money.cnn.com/2017/05/03/technology/facebook-
earnings/.

Weinberger, David. Everything Is Miscellaneous: The Power of the New Digital
Disorder. New York: Holt, 2008.

White, Hayden. The Content of the Form. Baltimore, MD: Johns Hopkins University
Press, 1987.

Whitlock, Keith. The Renaissance in Europe: A Reader. New Haven: Yale University
Press, 2000.

Wight, C. "Gutenberg Bible: View the British Library's Digital Versions Online."
British Library. September 07, 2004. Accessed March 12, 2017.
http://www.bl.uk/treasures/gutenberg/homepage.html.

Wilson, Edward O. Consilience: The Unity of Knowledge. New York: Knopf, 1998.

Winfield, Betty Houchin. Journalism, 1908: Birth of a Profession. Columbia:
University of Missouri Press, 2008.

Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High
Technology. Chicago: University of Chicago Press, 1986.

Witt, Stephen. How Music Got Free: The Inventor, the Mogul, and the Thief. London:
Vintage, 2016.

Wolfe, Tom. The Kandy-kolored Tangerine-flake Streamline Baby. New York: Farrar,
Straus and Giroux, 1965.

Womack, Kenneth, and Todd F. Davis. Reading the Beatles: Cultural Studies,
Literary Criticism, and the Fab Four. Albany: State University of New York
Press, 2006.

Woolf, Virginia. "The Cinema." Full Text | Woolf Online. Accessed June 03, 2017.
http://www.woolfonline.com/timepasses/?q=essays%2Fcinema%2Ffull.

"Words Onscreen, by Naomi S. Baron." Oxford University Press. June 18, 2017.
Accessed June 23, 2017.
https://global.oup.com/academic/product/words-onscreen-9780199315765.

"World Internet Users Statistics and 2017 World Population Stats." Internet World
Stats. Accessed May 24, 2017. http://www.internetworldstats.com/stats.htm.

Wright, Alex. Glut: Mastering Information through the Ages. Ithaca, NY: Cornell
University Press, 2008.

The New Yorker. "Jason Schwartzman Introduces The New Yorker iPad App." The
New Yorker. August 13, 2014. Accessed March 08, 2017.
http://www.newyorker.com/news/news-desk/jason-schwartzman-introduces-
the-new-yorker-ipad-app.

Zinsser, William. On Writing Well. New York: Harper Paperbacks, 2013.
