Fall 2015 Reading
Getting Started
Review course description, objectives and expectations
Distribute classroom leadership assignment schedule and reading binder
Distribute iPads and begin setup
1.
2.
3. Settings > iCloud > Find My iPad: toggle to the ON (green) position
4. Log on to Canvas. Open the Canvas app. Type "Free Canvas Accounts"; this will take you to canvas.instructure.com. Log on using your email and password. (If you do not already have a Canvas email/password from another BHSEC class, follow the create-new-account instructions in your email.)
5.
6.
7.
8.
9.
Password: cutandpaste
Current Events
Video
We will watch and discuss this in class.
Internet Rising (53:03; watch first 3:12)
Look for: David Weinberger, Douglas Rushkoff, Andrew Keen, Howard Rheingold, Kevin Kelly
http://internetrising.net
Reading
We will read and discuss this in class.
Time for a Pause
(Thomas Friedman, NYT op-ed, 2015)
Preview Sunday Story 1 (due Sun Sep 20) and Homework for Class 2 (Tue Sep 22)
Assign classroom leaders for Class 2 (Tue Sep 22)
Course Description
Each semester, we gather to consider the internet's profound impact on
contemporary society and its wide range of effects, from privacy and
intellectual property to technology's evolving effect on socialization and the
changing neuro-circuitry of our brains. Within our learning communities we give
critical and creative thought to the cultural, economic, political and social
implications of evolving internet technologies, borrowing from our everyday
experiences and making connections to ideas and arguments presented by external
audiences. For each class, we will examine ideas and arguments excerpted from
books, blogs, news articles and scholarly journals and will demonstrate active
engagement with this material by annotating the texts with marginalia. We will also
explore a wide range of video material, including the movies Her and Citizenfour. We
will share ruminations via classroom seminar, online discussion, Twitter, original
video (using iMovie and Vimeo) and Medium. My goal is to be a facilitator through
texts that we examine together. Your contributions should be thoughtful, on topic,
and respectful of others. You will do well to remember that this is not my class, but
ours, and its success depends on the contributions of every class member.
Course Objectives
Consider the internet's impact on society and our daily lives
Sharpen skills in using the internet as a tool for inquiry (reading, listening and watching)
Practice higher order thinking in writing, discussion and media production
Develop a personal plan for living in the digital age with focus and integrity
Expected Time Commitment
As a general guideline, you should invest at least two hours of outside work for every
class meeting. If you have questions as you prepare for class, I am invariably available
via email:
rgreenberg@bhsec.bard.edu
Grading
Connections among course texts, your own experience and the external world
Interactive participation and leadership within our learning community
Innovation and originality
Thoughtful and honest self-assessment
Sunday Stories
Share a creative and course-related reflection via Vimeo or Medium.
Timeliness and Academic Integrity
You will be penalized for submitting late work and/or unexcused class absence.
Plagiarism or other acts that violate the integrity of our learning community will also
incur consequences.
You will earn 40 points each week for on-time completion of work and class attendance.
These points are yours to lose. Five points will be subtracted for any of the following:
Not completing the online discussion assignment before class
Not completing the lesson link assignment before class
Evidence of not completing the homework reading and video
(e.g., no login on Zaption, not marking up the reading and/or failing a reading quiz)
Not bringing your reading to class
Cutting class
Contribution to online discussion
(10 points/discussion, 2 discussions/week)
There will be an online discussion assignment before each class. We will begin this
online discussion work on Canvas and, over the course of the semester, will rotate
the work to other platforms including Vimeo and Medium.
Completion (6 points): multiple paragraphs, but writing in a silo; no specific effort to engage; shooting from the hip; no connections to course texts; no evidence of outside research; no interaction with others
Good Work: overall effort
Exemplary Work (10 points): multiple contributions; helping others

Completion / Exemplary Work: completion (10 points)
Purpose:
To develop your ability to communicate using recorded sounds and images
To deepen your personal understanding and synthesis of a course theme
To engage and entertain your peers in a way that enhances learning
Completion (40 points)
Technical (effort and complexity) (up to 20 points)
Entertainment value (up to 20 points)
Evidence of higher order thinking (up to 20 points)
Rubric (columns: Completion = 12, Good Work = 16, Exemplary Work = 20 points)

Technical (effort and complexity), 12 / 16 / 20: one-dimensional → overall complexity → multi-dimensional (combines a variety of modalities); no graphics → complex and thoughtful use of still images → moving images / original video with script and actors; thoughtful and innovative use of formatting and graphics tools

Entertainment value, 12 / 16 / 20: sound easy to hear and see → multi-dimensional (e.g., live action, voiceover, music)

Evidence of higher order thinking, 12 / 16 / 20: not really → regurgitation of course texts → demonstration of thoughtful engagement with course themes → higher order thinking

Overall: 36 / 48 / 60
Earnback (optional)
Earn back up to half of all missed points based on feedback
Submit new draft of your work within 7 days of receiving feedback
http://nyti.ms/1xCQUuL
Thomas L. Friedman
You could easily write a book, or, better yet, make a movie about the drama that
engulfed Sony Pictures and The Interview, Sony's own movie about the
fictionalized assassination of North Korea's real-life dictator. The whole saga
reflects so many of the changes that are roiling and reshaping today's world before
we've learned to adjust to them.
Think about this: In November 2013, hackers stole 40 million credit and debit
card numbers from Target's point-of-sale systems. Beginning in late August 2014,
nude photos believed to have been stored by celebrities on Apple's iCloud were
spilled onto the sidewalk. Thanksgiving brought us the Sony hack, when, as The
Times reported: "Everything and anything had been taken. Contracts. Salary lists.
Film budgets. Medical records. Social Security numbers. Personal emails. Five
entire movies." And, on Christmas, gaming networks for both the Sony PlayStation
and the Microsoft Xbox were shut down by hackers. But rising cybercrime is only
part of the story. Every day a public figure is apologizing for something crazy or
foul that he or she muttered, uttered, tweeted or shouted that went viral,
including the rantings of an N.B.A. owner in his girlfriend's living room.
What's going on? We're in the midst of a Gutenberg-scale change in how
information is generated, stored, shared, protected and turned into products and
services. We are seeing individuals become superempowered to challenge
governments and corporations. And we are seeing the rise of apps that are putting
strangers into intimate proximity in each other's homes (think Airbnb) and into
each other's cars (think Uber) and into each other's heads (think Facebook, Twitter
and Instagram). Thanks to the integration of networks, smartphones, banks and
markets, the world has never been more tightly wired. As they say: lost there, felt
here. [...] his fiancée; and private photos of movie stars. They all have different
moral and societal significance. We need to deal with them differently.
"We need to pause more to make sense of all the M.R.I.s we're being exposed
to," argued Seidman. In the pause, we reflect and imagine a better way. In some
cases, that could mean showing empathy for the fact that humans are imperfect. In
others, it could mean taking principled stands toward those whose behaviors
make this interdependent world unsafe, unstable or unfree.
In short, there's never been a time when we need more people living by the
Golden Rule: Do unto others as you would have them do unto you. Because, in
today's world, more people can see into you and do unto you than ever before.
Otherwise, we're going to end up with a "gotcha" society, lurching from outrage to
outrage, where in order to survive you'll either have to disconnect or constantly
censor yourself because every careless act or utterance could ruin your life. Who
wants to live that way?
(For 2015, I will just be writing on Wednesdays while I work on a book.)
A version of this op-ed appears in print on January 7, 2015, on page A23 of the New York edition with the
headline: Time for a Pause.
Sunday Story #1
Due Sun Sep 20 (before midnight)
fixed linear path. You can't control their narratives in any fashion; you simply sit back and have
the story dictated to you. This risks instilling a general passivity in our children, making them feel
as though they're powerless to change their circumstances. Reading is not an active, participatory
process; it's a submissive one. The book readers of the younger generation are learning to follow
the plot instead of learning to lead.
If you're a pessimist, and chances are you are, you should read "Future Perfect" by the
technophilic science writer Steven Johnson. In fact, read it even if you're an optimist, because Mr.
Johnson's book will give you lots of material to brighten the outlook of your gloomy friends.
Mr. Johnson notes that, contrary to popular perception, humanity has achieved enormous
progress over the past century. Life spans have almost doubled since 1900, and over the past
half-century the percentage of humanity living in extreme poverty has been cut in half. In the
U.S. over the past 20 years, crime, traffic fatalities, air pollution and infant mortality have
dropped. Yes, Mr. Johnson concedes, we still face daunting problems, but we will surely solve
them, given how far we have come.
Mr. Johnson traces much of our progress to what he calls "peer networks." Conventional
organizations, he says, whether corporations or governments, tend to be hierarchical and
centralized, with information tightly controlled by the people in charge. Peer networks, by
contrast, consist of individuals of roughly equal status achieving goals by sharing, criticizing and
revising information and ideas.
The Internet is both the product of peer networks and an astonishingly effective enabler of them.
Peer networks helped to spawn Wikipedia and launch a lot of other things as well: crowd-sourced
fundraisers such as Kickstarter; the Arab Spring and Occupy Wall Street protests; and the 311
systems of New York and other cities, by means of which citizens alert authorities to potholes,
noisy bars and other problems.
Future Perfect: The case for progress in a networked age (review)
Steven Johnson (2013)
So what does the Internet want? It wants to lower the cost for creating and sharing information.
The notion sounds unimpeachable when you phrase it like that, until you realize all the strange
places that kind of affordance ultimately leads to. The Internet wants to breed algorithms that
can execute thousands of financial transactions per minute, and it wants to disseminate the
#occupywallstreet meme across the planet. The Internet wants both the Wall Street tycoons
and the popular insurrection at its feet.
Can that strange, contradictory cocktail drive progress on its own? Perhaps for the simple
reason that it democratizes the control of information. When information is expensive and
scarce, powerful or wealthy individuals or groups have a disproportionate impact on how that
information circulates. But as it gets cheaper and more abundant, the barriers to entry are
lowered. This is hardly a new observation, but everything that has happened over the last twenty
years has confirmed the basic insight. That democratization has not always led to positive
outcomes (think of those spam artists), but there is no contesting the tremendous,
orders-of-magnitude increase in the number of people creating and sharing, thanks to the mass
adoption of the Internet.
The peer progressives' faith in the positive effects of the Internet rests on this democratic
principle: When you give people more control over the flow of information and decision making
in their communities, their social health improves incrementally, in fits and starts, but also
inexorably. Yes, when you push the intelligence out to the edges of the network, sometimes
individuals or groups abuse those newfound privileges; a world without gatekeepers or planners
is noisier and more chaotic. But the same is true of other institutions that have stood the test of
time. Democracies on occasion elect charlatans or bigots or imbeciles; markets on occasion
erupt in catastrophic bubbles, or choose to direct resources to trivial problems while ignoring the
more pressing ones. We accept these imperfections because the alternatives are so much worse.
The same is true of the Internet and the peer networks it has inspired. They are not perfect, far
from it. But over the long haul, they produce better results than the Legrand Stars that came
before them. They're not utopias. They're just leaning that way.
Sunday Story #2
Due Sun Sep 27 (before midnight)
Prompt
environments is of immense importance, and a steady hand of discipline should they ever
start to question it. Alfred North Whitehead called it "soul murder."
The "getting by" game.
Reports from my teaching assistants sitting in the back of the room tell a different story.
Apparently, several students standing in the back cranked up their iPods as I started to
lecture and never turned them off, sometimes even breaking out into dance. My lecture
could barely be heard nearby as the sound-absorbing panels and state of the art speakers
were apparently no match for those blaring iPods. Scanning the room, my assistants also saw
students cruising Facebook, instant messaging, and texting their friends. The students were
undoubtedly engaged, just not with me.
My teaching assistants consoled me by noting that students have learned that they can get
by without paying attention in their classes. Perhaps feeling a bit encouraged by my look of
incredulity, my TAs continued with a long list of other activities students have learned that
they can get by without doing. Studying, taking notes, reading the textbook, and coming
to class topped the list. It wasn't the list that impressed me. It was the unquestioned
assumption that "getting by" is the name of the game. Our students are so alienated by
education that they are trying to sneak right past it.
If you think this little game is unfair to those students who have been duped into playing,
consider those who have somehow managed to maintain their inherent desire to learn. One
of the most thoughtful and engaged students I have ever met recently confronted a
professor about the nuances of some questions on a multiple choice exam. The professor
politely explained to the student that he was overthinking the questions. What kind of
environment is this in which overthinking is a problem? Apparently he would have been
better off just playing along with the getting by game.
Last spring I asked my students how many of them did not like school. Over half of them
raised their hands. When I asked how many of them did not like learning, no hands were
raised. I have tried this with faculty and get similar results. Last year's U.S. Professor of the
Year, Chris Sorensen, began his acceptance speech by announcing, "I hate school." The
crowd, made up largely of other outstanding faculty, overwhelmingly agreed. And yet he
went on to speak with passionate conviction about his love of learning and the desire to
spread that love. And there's the rub. We love learning. We hate school. What's worse is that
many of us hate school because we love learning.
What went wrong?
How did institutions designed for learning become so widely hated by people who love
learning?
The video seemed to represent what so many were already feeling, and it became the focal
point for many theories. While some simply blamed the problems on the students
themselves, others recognized a broader pattern. Most blamed technology, though for very
different reasons. Some simply suggested that new technologies are too distracting and
superficial and that they should be banned from the classroom. Others suggested that
students are now wired differently, created in the image of these technologies: luddites
imagine them to be distracted and superficial, while techno-optimists see a new
generation of hyper-thinkers bored with old-school ways.
But the problems are not new. They are the same as those identified by Neil Postman and
Charles Weingartner nearly 40 years ago when they described the plight of totally alienated
students involved in a cheating scandal (a true art form in the "getting by" game) and
asked, "What kind of vicious game is being played here, and who are the sinners and who
the sinned against?" (1969:51).
Texting, web-surfing, and iPods are just new versions of passing notes in class, reading
novels under the desk, and surreptitiously listening to Walkmans. They are not the problem.
They are just the new forms in which we see it. Fortunately, they allow us to see the problem
in a new way, and more clearly than ever, if we are willing to pay attention to what they are
really saying.
They tell us, first of all, that despite appearances, our classrooms have been fundamentally
changed. There is literally something in the air, and it is nothing less than the digital
artifacts of over one billion people and computers networked together collectively producing
over 2,000 gigabytes of new information per second. While most of our classrooms were
built under the assumption that information is scarce and hard to find, nearly the entire body
of human knowledge now flows through and around these rooms in one form or another,
ready to be accessed by laptops, cellphones, and iPods. Classrooms built to reinforce the
top-down authoritative knowledge of the teacher are now enveloped by a cloud of
ubiquitous digital information where knowledge is made, not found, and authority is
continuously negotiated through discussion and participation. In short, they tell us that our
walls no longer mark the boundaries of our classrooms.
And that's what has been wrong all along. Some time ago we started taking our walls too
seriously, not just the walls of our classrooms, but also the metaphorical walls that we have
constructed around our subjects, disciplines, and courses. McLuhan's statement about
the bewildered child confronting the education establishment, where information is scarce
but ordered and structured by fragmented, classified patterns, subjects, and schedules, still
holds true in most classrooms today. The walls have become so prominent that they are
even reflected in our language, so that today there is something called "the real world"
which is foreign and set apart from our schools. When somebody asks a question that seems
irrelevant to this "real world," we say that it is "merely academic."
Not surprisingly, our students struggle to find meaning and significance inside these walls.
They tune out of class, and log on to Facebook.
The solution.
Fortunately, the solution is simple. We don't have to tear the walls down. We just have to
stop pretending that the walls separate us from the world, and begin working with students
in the pursuit of answers to real and relevant questions.
When we do that we can stop denying the fact that we are enveloped in a cloud of
ubiquitous digital information where the nature and dynamics of knowledge have shifted. We
can acknowledge that most of our students have powerful devices on them that give them
instant and constant access to this cloud (including almost any answer to almost any
multiple choice question you can imagine). We can welcome laptops, cell phones, and iPods
into our classrooms, not as distractions, but as powerful learning technologies. We can use
them in ways that empower and engage students in real world problems and activities,
leveraging the enormous potentials of the digital media environment that now surrounds us.
In the process, we allow students to develop much-needed skills in navigating and
harnessing this new media environment, including the wisdom to know when to turn it off.
When students are engaged in projects that are meaningful and important to them, and that
make them feel meaningful and important, they will enthusiastically turn off their cellphones
and laptops to grapple with the most difficult texts and take on the most rigorous tasks.
Activity
Select a Christine Rosen TechnoSapiens podcast:

Episode 1: As Twitter, Facebook, and all the other tech companies hoover up our information, learning more and more about us, is it time to ask: if we have nothing to hide, do we have nothing to fear?
Episode 2: Do fitness and diet tracking technologies help to improve our physical selves, or should we be more hesitant about uploading such personal information to the cloud?
Episode 3: Will massive open online courses (or MOOCs) give us all greater access to a first-rate college education, or will they be the death knell for higher learning as we know it?
Episode 4: Has our obsession with grading everything from restaurants to books on sites like Yelp, TripAdvisor and Amazon undermined expertise and serendipity, or are we finally getting the facts rather than the overrated opinions of critics?
Episode 5: Are we removing human error by letting algorithms take over everything from the stock market to driving, or are we ceding too much control to calculations that may have serious flaws?
Episode 6: Will civilian and commercial drones soon be as ubiquitous as the Internet, or will we be compelled by privacy fears to curtail their use?
Homework video and reading
(discussion leader: y)
Current Events / iPad activity
(discussion leader: z)
Links:
(on Canvas lesson page)
Michael Wesch
Christine Rosen
Smarter Than You Think: How technology is changing our minds for the better
(Clive Thompson, 2014)
13 reasons the web makes us smarter
David Weinberger, Huffington Post (2014)
Excerpts: How the internet makes us smarter
Video
Watch the assigned video on Zaption and post a one-sentence response.
Marshall Davis Jones: Touchscreen (3:12)
http://zapt.io/trdtmge6
Select an article or video from the lesson page links and Tweet its gist or
guiding question
Please include this address in your Tweet: @bhsecinternet
Online discussion
Respond to this prompt on Canvas:
How do you find cool things online?
https://canvas.instructure.com/courses/936923/discussion_topics/4017327
You've written a book about how technology is changing our minds for the better. Have
readers been agreeing or disagreeing with you?
For many of the more avid users, it provides a lot of new, useful things to think about:
serendipitous stories, insights from others. It's a sort of global watercooler, with all the good
and bad that suggests. It's good because many folks feel like they're immersed in an
interesting conversation that's going on, and even if they're just lurking, not actually talking
(studies show the majority of folks using Twitter are listening but not contributing that often),
they get exposed to all sorts of material they'd never see otherwise. And bad, because, well,
you can really get swept up in it and distracted from work that you're supposed to be doing.

The one complaint about the Internet that I wholeheartedly endorse is that most of these tools
have been designed to peck at us like ducks: "Hey, there's a new reply to your comment!
Come look at it!" And if you don't develop good skills of mindfulness, paying attention to your
attention, it can really wind up colonizing much of your day. This is precisely why I suspect
Twitter going public will be bad for its users. The whole reason these services need to peck at
us like ducks is that their business models are built on advertising, and advertising wants as
many minutes of your day as possible. As the pressure builds for Twitter to make more money,
it will be pushed to redesign itself to interrupt us even more.
What do you think of the idea that the Internet (as a virtual space where we can trade ideas,
art projects, and videos, and become enormously popular doing so) has replaced cities as the
centers of creative cultural ferment?
I don't think the Internet has replaced cities in any significant way, nor really could it. Cities
are dynamic, and deeply seductive for the people who flock there, because they broker all
sorts of fantastic and useful connections, cultural and economic and social. And the types of
face-to-face connections and serendipity that you get in a city are quite different from the
ones you get online. That said, there are deep similarities in the things we enjoy about cities
and the things we enjoy about the Internet! In both cases, the density of connections is what
brings the real intellectual fun and joy. Edward Glaeser famously argued that cities increase
the productivity and creativity of people within them. I suspect the Internet has a very similar
effect for folks who use it mindfully.
Nicholas Carr has written about how book-based learning taught us certain habits of mind, a
more empathetic way of thinking that we are rapidly losing with screens and screen-reading.
Do you agree?
I quite agree with Carr that tools affect how we think, and considered as a tool, books have
many absolutely fantastic and magical effects on the way we think. They encourage us to slow
down, which is good; they synthesize large volumes of knowledge. But what Carr sells short
are the enormous benefits that come from social thinking, and social thinking is where the
Internet really shines. There's an idea, popular with many text-based folks (like myself, and
many journalists and academics), that reading books is thinking; that if you're not sitting for
hours reading a tome, you're not, in some essential way, thinking. This is completely false. A
huge amount of our everyday thinking, powerful, creative, and resonant stuff, is done
socially: talking to other people, arguing with them, relying on them to recall information for
us. This has been true for aeons in the offline world. But now we have new ways to think
socially online, and to do so with likeminded folks around the world, which is still insanely
mind-blowing. It never stops being lovely for me. I was in a radio station the other day, and
while I was waiting to go on the air I watched the staff work. There were six or seven of them,
and they were all engaged in this incredibly complex activity that's behind the scenes of the
show: they're talking about the next segment, writing down ideas, looking things up,
organizing the next batch of things the host is going to talk about. This is what thinking looks
like in the real world. A lot of it is incredibly, deeply social. And it has the effect of making the
host put on a much smarter, richer show than he or she could do alone.
When people get into discussions and arguments online, whether it's on Twitter or in a forum
about their favorite TV show or even in a thread underneath an Instagram photo, this is the
same thing transpiring. In the Phaedrus, Socrates worried that this dialogic nature of
knowledge would die out with text, because text was inert: you asked it a question, and it
couldn't answer back. What I love about the online world is that it's pitched neatly between
those two poles.
Everyone is staring at their phones all the time. Are we a generation that has been
overwhelmed by a device we can't handle?
No, I don't think we're doomed to be overwhelmed. We have a long track record of adapting to
the challenges of new technologies and new media, and of figuring out self-control. Cities,
considered as a sort of technology in themselves, were enormously overstimulating and
baffling for new residents when the West began urbanizing, in the nineteenth century.
More recently, we tamed our addiction to talking incessantly on mobile phones. People forget
this, but when mobile phones came along, in the nineties, people were so captivated by the
idea that you could talk to someone else, anywhere (on the sidewalk, on a mountaintop),
that they answered them every single time they rang. It took ten years, and a ton of quite
useful scrutiny, and mockery of our own poor behavior, to pull back.
What are the downsides of ambient contact? Besides knowing way too much about sports
scores and who has eaten a cronut?
I think the big downside of today's ambient contact is that it makes us too present-focussed.
Psychologists talk about something called "recency": our tendency to assume that whatever
is happening to us right now is the most important thing going on. It's a long-standing bias in
our psychology, long predating the Internet. But modern media have made it worse. By
"modern" I'm beginning with, probably, the telegraph, and certainly the newspaper. When you
read the novels of the late nineteenth century and the early twentieth century, they're already
complaining about people being far too fascinated with the events of the day instead of
paying attention to history.

And this got seismically worse once cable TV realized that you could keep everyone riveted to
their seat with live coverage of basically anything. Today's self-publishing tools, almost from
the get-go, were designed to privilege the present and ignore the past. When blogs first
became popular, they were all organized in reverse chronology, with the most recent post at
the top, the older ones fading into the background, and the clear implication of that design is
that what's written today is more important than what was written last week or last year. That
design has carried over into basically every tool of social media. And, again, because most of
the big social-media tools are paid for by advertising, they have even more economic impetus
to reinforce recency in their design. They want us to be constantly refreshing the feed over
and over again, because that'll give them more eyeballs to which to sell ads. What this
suggests, though, is that one could design all sorts of quite delightful tools for expression and
contact that didn't prize recency. If you founded a social network that charged a minimal
amount of money, for example, you wouldn't need ads at all, and suddenly the economic need
to reinforce recency is gone. Facebook only makes five dollars a year off of each user. That's
actually an amazingly piddling amount, when you think about it.
Sunday Story #3
Due Sun Oct 4 (before midnight)
Prompt
Is Google Making Us Stupid?
theatlantic.com
also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently
confessed that he has stopped reading books altogether. "I was a lit major in college, and used to be [a]
voracious book reader," he wrote. "What happened?" He speculates on the answer: "What if I do all my
reading on the web not so much because the way I read has changed, i.e. I'm just seeking
convenience, but because the way I THINK has changed?"

Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how
the Internet has altered his mental habits. "I now have almost totally lost the ability to read and absorb a
longish article on the web or in print," he wrote earlier this year. A pathologist who has long been on the
faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a
telephone conversation with me. His thinking, he said, has taken on a "staccato" quality, reflecting the
way he quickly scans short passages of text from many sources online. "I can't read War and Peace
anymore," he admitted. "I've lost the ability to do that. Even a blog post of more than three or four
paragraphs is too much to absorb. I skim it."
Anecdotes alone don't prove much. And we still await the long-term neurological and psychological
experiments that will provide a definitive picture of how Internet use affects cognition. But a recently
published study of online research habits, conducted by scholars from University College London,
suggests that we may well be in the midst of a sea change in the way we read and think. As part of the
five-year research program, the scholars examined computer logs documenting the behavior of visitors
to two popular research sites, one operated by the British Library and one by a U.K. educational
consortium, that provide access to journal articles, e-books, and other sources of written information.
They found that people using the sites exhibited "a form of skimming activity," hopping from one source
to another and rarely returning to any source they'd already visited. They typically read no more than
one or two pages of an article or book before they would "bounce" out to another site. Sometimes they'd
save a long article, but there's no evidence that they ever went back and actually read it. The authors of
the study report:
It is clear that users are not reading online in the traditional sense; indeed there are signs
that new forms of "reading" are emerging as users "power browse" horizontally through
titles, contents pages and abstracts going for quick wins. It almost seems that they go
online to avoid reading in the traditional sense.
Thanks to the ubiquity of text on the Internet, not to mention the popularity of text messaging on cell
phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our
medium of choice. But it's a different kind of reading, and behind it lies a different kind of thinking,
perhaps even a new sense of the self. "We are not only what we read," says Maryanne Wolf, a
developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and
Science of the Reading Brain. "We are how we read." Wolf worries that the style of reading promoted by
the Net, a style that puts "efficiency" and "immediacy" above all else, may be weakening our capacity for
the kind of deep reading that emerged when an earlier technology, the printing press, made long and
complex works of prose commonplace. When we read online, she says, we tend to become "mere
decoders of information." Our ability to interpret text, to make the rich mental connections that form
when we read deeply and without distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It's not etched into our genes the
way speech is. We have to teach our minds how to translate the symbolic characters we see into the
language we understand. And the media or other technologies we use in learning and practicing the
craft of reading play an important part in shaping the neural circuits inside our brains. Experiments
demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that
is very different from the circuitry found in those of us whose written language employs an alphabet. The
variations extend across many regions of the brain, including those that govern such essential cognitive
functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the
circuits woven by our use of the Net will be different from those woven by our reading of books and
other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter, a Malling-Hansen Writing Ball, to be
precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and
painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared
that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had
mastered touch typing, he was able to write with his eyes closed, using only the tips of his fingers.
Words could once again flow from his mind to the page.
But the machine had a subtler effect on his work. One of Nietzsche's friends, a composer, noticed a
change in the style of his writing. His already terse prose had become even tighter, more telegraphic.
"Perhaps you will through this instrument even take to a new idiom," the friend wrote in a letter, noting
that, in his own work, his "'thoughts' in music and language often depend on the quality of pen and
paper."
"You are right," Nietzsche replied, "our writing equipment takes part in the forming of our thoughts."
Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche's prose
"changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style."
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the
dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by
the time we reached adulthood. But brain researchers have discovered that that's not the case. James
Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George
Mason University, says that even the adult mind "is very plastic." Nerve cells routinely break old
connections and form new ones. "The brain," according to Olds, "has the ability to reprogram itself on
the fly, altering the way it functions."
As we use what the sociologist Daniel Bell has called our "intellectual technologies" (the tools that
extend our mental rather than our physical capacities), we inevitably begin to take on the qualities of
those technologies. The mechanical clock, which came into common use in the 14th century, provides a
compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford
described how the clock "disassociated time from human events and helped create the belief in an
independent world of mathematically measurable sequences." The "abstract framework of divided time"
became "the point of reference for both action and thought."
The clock's methodical ticking helped bring into being the scientific mind and the scientific man. But it
also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his
1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the
world that emerged from the widespread use of timekeeping instruments "remains an impoverished
version of the older one, for it rests on a rejection of those direct experiences that formed the basis for,
and indeed constituted, the old reality." In deciding when to eat, to work, to sleep, to rise, we stopped
listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use
to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their
brains as operating like clockwork. Today, in the age of software, we have come to think of them as
operating like computers. But the changes, neuroscience tells us, go much deeper than metaphor.
Thanks to our brain's plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in
1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed
only as a theoretical machine, could be programmed to perform the function of any other information-
processing device. And that's what we're seeing today. The Internet, an immeasurably powerful
computing system, is subsuming most of our other intellectual technologies. It's becoming our map and
our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.
When the Net absorbs a medium, that medium is re-created in the Net's image. It injects the medium's
content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the
content of all the other media it has absorbed. A new e-mail message, for instance, may announce its
arrival as we're glancing over the latest headlines at a newspaper's site. The result is to scatter our
attention and diffuse our concentration.
The Net's influence doesn't end at the edges of a computer screen, either. As people's minds become
attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience's new
expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers
shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-
snippets. When, in March of this year, The New York Times decided to devote the second and third
pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the
"shortcuts" would give harried readers a quick "taste" of the day's news, sparing them the "less efficient"
method of actually turning the pages and reading the articles. Old media have little choice but to play by
the new-media rules.
Never has a communications system played so many roles in our lives, or exerted such broad
influence over our thoughts, as the Internet does today. Yet, for all that's been written about the Net,
there's been little consideration of how, exactly, it's reprogramming us. The Net's intellectual ethic
remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named
Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a
historic series of experiments aimed at improving the efficiency of the plant's machinists. With the
approval of Midvale's owners, he recruited a group of factory hands, set them to work on various
metalworking machines, and recorded and timed their every movement as well as the operations of the
machines. By breaking down every job into a sequence of small, discrete steps and then testing
different ways of performing each one, Taylor created a set of precise instructions (an "algorithm," we
might say today) for how each worker should work. Midvale's employees grumbled about the strict new
regime, claiming that it turned them into little more than automatons, but the factory's productivity
soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last
found its philosophy and its philosopher. Taylor's tight industrial choreography, his "system," as he
liked to call it, was embraced by manufacturers throughout the country and, in time, around the world.
Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-
motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor
defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and
adopt, for every job, the "one best method" of work and thereby to effect "the gradual substitution of
science for rule of thumb throughout the mechanic arts." Once his system was applied to all acts of
manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but
of society, creating a utopia of perfect efficiency. "In the past the man has been first," he declared;
"in the future the system must be first."
Taylor's system is still very much with us; it remains the ethic of industrial manufacturing. And now,
thanks to the growing power that computer engineers and software coders wield over our intellectual
lives, Taylor's ethic is beginning to govern the realm of the mind as well. The Internet is a machine
designed for the efficient and automated collection, transmission, and manipulation of information, and
its legions of programmers are intent on finding the "one best method," the perfect algorithm, to carry
out every mental movement of what we've come to describe as "knowledge work."
Google's headquarters, in Mountain View, California (the Googleplex), is the Internet's high church,
and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is
"a company that's founded around the science of measurement," and it is striving to "systematize
everything" it does. Drawing on the terabytes of behavioral data it collects through its search engine and
other sites, it carries out thousands of experiments a day, according to the Harvard Business Review,
and it uses the results to refine the algorithms that increasingly control how people find information and
extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the
mind.
The company has declared that its mission is "to organize the world's information and make it
universally accessible and useful." It seeks to develop "the perfect search engine," which it defines as
something that "understands exactly what you mean and gives you back exactly what you want." In
Google's view, information is a kind of commodity, a utilitarian resource that can be mined and
processed with industrial efficiency. The more pieces of information we can "access" and the faster we
can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while
pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their
search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our
brains. "The ultimate search engine is something as smart as people, or smarter," Page said in a
speech a few years back. "For us, working on search is a way to work on artificial intelligence." In a
2004 interview with Newsweek, Brin said, "Certainly if you had all the world's information directly
attached to your brain, or an artificial brain that was smarter than your brain, you'd be better off." Last
year, Page told a convention of scientists that Google is "really trying to build artificial intelligence and to
do it on a large scale."
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast
quantities of cash at their disposal and a small army of computer scientists in their employ. A
fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt's
words, "to solve problems that have never been solved before," and artificial intelligence is the hardest
problem out there. Why wouldn't Brin and Page want to be the ones to crack it?
Still, their easy assumption that we'd all "be better off" if our brains were supplemented, or even
replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a
mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In
Google's world, the world we enter when we go online, there's little place for the fuzziness of
contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an
outdated computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into
the workings of the Internet, it is the network's reigning business model as well. The faster we surf
across the Web (the more links we click and pages we view), the more opportunities Google and other
companies gain to collect information about us and to feed us advertisements. Most of the proprietors of
the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we
flit from link to link; the more crumbs, the better. The last thing these companies want is to encourage
leisurely reading or slow, concentrated thought. It's in their economic interest to drive us to distraction.
Maybe I'm just a worrywart. Just as there's a tendency to glorify technological progress, there's a
countertendency to expect the worst of every new tool or machine. In Plato's Phaedrus, Socrates
bemoaned the development of writing. He feared that, as people came to rely on the written word as a
substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the
dialogue's characters, "cease to exercise their memory and become forgetful." And because they would
be able to "receive a quantity of information without proper instruction," they would "be thought very
knowledgeable when they are for the most part quite ignorant." They would be "filled with the conceit of
wisdom instead of real wisdom." Socrates wasn't wrong (the new technology did often have the effects
he feared), but he was shortsighted. He couldn't foresee the many ways that writing and reading would
serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg's printing press, in the 15th century, set off another round of teeth gnashing.
The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to
intellectual laziness, making men "less studious" and weakening their minds. Others argued that
cheaply printed books and broadsheets would undermine religious authority, demean the work of
scholars and scribes, and spread sedition and debauchery. As New York University professor Clay
Shirky notes, "Most of the arguments made against the printing press were correct, even prescient."
But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would
deliver.
So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as
Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a
golden age of intellectual discovery and universal wisdom. Then again, the Net isn't the alphabet, and
although it may replace the printing press, it produces something altogether different. The kind of deep
reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire
from the author's words but for the intellectual vibrations those words set off within our own minds. In
the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of
contemplation, for that matter, we make our own associations, draw our own inferences and analogies,
foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
If we lose those quiet spaces, or fill them up with "content," we will sacrifice something important not
only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently
described what's at stake:
I come from a tradition of Western culture, in which the ideal (my ideal) was the complex,
dense and "cathedral-like" structure of the highly educated and articulate personality, a
man or woman who carried inside themselves a personally constructed and unique version
of the entire heritage of the West. [But now] I see within us all (myself included) the
replacement of complex inner density with a new kind of self, evolving under the pressure
of information overload and the technology of the "instantly available."
As we are "drained of our inner repertory of dense cultural inheritance," Foreman concluded, we risk
turning into "pancake people," spread wide and thin as we connect with that vast network of
information accessed by the mere touch of a button.
I'm haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer's
emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its
childlike pleading with the astronaut ("I can feel it. I can feel it. I'm afraid"), and its final reversion to
what can only be called a state of innocence. HAL's outpouring of feeling contrasts with the
emotionlessness that characterizes the human figures in the film, who go about their business with an
almost robotic efficiency. Their thoughts and actions feel scripted, as if they're following the steps of an
algorithm. In the world of 2001, people have become so machinelike that the most human character
turns out to be a machine. That's the essence of Kubrick's dark prophecy: as we come to rely on
computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial
intelligence.
seems to create a persistent memory. In that vein, recent imaging studies of people have found
that major cross sections of the brain become surprisingly active during downtime. These brain
studies suggest to researchers that periods of rest are critical in allowing the brain to
synthesize information, make connections between ideas and even develop the sense
of self. Researchers say these studies have particular implications for young people, whose
brains have more trouble focusing and setting priorities. "Downtime is to the brain what sleep is
to the body," said Dr. Rich of Harvard Medical School. "But kids are in a constant mode of
stimulation."
White moths
David Shenk, Data Smog (1997)
Physically, we are what we are. So while we like to think of humans as adaptable creatures, the
plain truth is that because of our complexity and longevity, we aren't nearly as quick to
physically adapt as are many other species. In the nineteenth century, the thick smoke from
factories in England annihilated white lichens covering tree bark, rendering the previously well-camouflaged white Peppered Moth extremely vulnerable to bird predators. But in just a few
years, the previously rare black moths from the same species became dominant, and the
Peppered Moth was saved. In recent years, as factory soot has waned in that area and the tree
bark has subsequently become light again, evolution has pulled a quick about-face: The moths,
too, are white again. Human evolution, for better or worse, is not so swift; because of this, we
may not be able to keep pace with our own technology. Our brains have remained structurally
consistent for over 50,000 years, yet exposure to processed information in this century has
increased by a factor of thousands (lately, the volume and speed of information has been
increasing as much as 100 percent each year). Something has to give.
Automation is making us dumb
Nicholas Carr, WSJ (2014)
The first wave of automation rolled through U.S. industry after World War II, when manufacturers
began installing electronically controlled equipment in their plants. The new machines made
factories more efficient and companies more profitable. They were also heralded as
emancipators. By relieving factory hands of routine chores, they would do more than boost
productivity. They would elevate laborers, giving them more invigorating jobs and more valuable
talents. The new technology would be ennobling. Then, in the 1950s, a Harvard Business School
professor named James Bright went into the field to study automation's actual effects on a
variety of industries, from heavy manufacturing to oil refining to bread baking. Factory
conditions, he discovered, were anything but uplifting. More often than not, the new machines
were leaving workers with drabber, less demanding jobs. An automated milling machine, for
example, didn't transform the metalworker into a more creative artisan; it turned him into a
pusher of buttons. Bright concluded that the overriding effect of automation was (in the jargon of
labor economists) to "de-skill" workers rather than to "up-skill" them. "The lesson should be
increasingly clear," he wrote in 1966. "Highly complex equipment did not require skilled
operators. The skill can be built into the machine." We are learning that lesson again today on a
much broader scale. As software has become capable of analysis and decision-making,
automation has leapt out of the factory and into the white-collar world. Computers are taking
over the kinds of knowledge work long considered the preserve of well-educated, well-trained
professionals: Pilots rely on computers to fly planes; doctors consult them in diagnosing
ailments; architects use them to design buildings. Worrisome evidence suggests that our own
intelligence is withering as we become more dependent on the artificial variety. Rather than
lifting us up, smart software seems to be dumbing us down. Automation traps people in a vicious
cycle of de-skilling. By isolating them from hard work, it dulls their skills and increases the odds
that they will make mistakes. When those mistakes happen, designers respond by seeking to
further restrict people's responsibilities, spurring a new round of de-skilling.
Is Google making students stupid?
Nick Romeo, The Atlantic (2014)
When do technologies free students to think about more interesting and complex questions, and
when do they erode the very cognitive capacities they are meant to enhance? The effect of
ubiquitous spell check and AutoCorrect software is a revealing example. Psychologists studying
the formation of memories have found that the act of generating a word in your mind
strengthens your capacity to remember it. When a computer automatically corrects a spelling
mistake or offers a drop-down menu of options, we're no longer forced to generate the correct
spelling in our minds. This might not seem very important. If writers don't clutter their minds
with often-bizarre English spelling conventions, this might give them more energy to consider
interesting questions of style and structure. But the process of word generation is not just
supplementing spelling skills; it's also eroding them. When students find themselves without
automated spelling assistance, they're more likely to make errors. The solution might seem to be
improving battery life and making spelling assistance even more omnipresent, but this creates a
vicious cycle: The more we use the technology, the more we need to use it in all circumstances.
Suddenly, our position as masters of technology starts to seem more precarious.
Is smart making us dumb?
Evgeny Morozov, WSJ (2013)
Many smart technologies are heading in another, more disturbing direction. A number of thinkers
in Silicon Valley see these technologies as a way not just to give consumers new products that
they want but to push them to behave better. Sometimes this will be a nudge; sometimes it will
be a shove. But the central idea is clear: social engineering disguised as product engineering.
There is reason to worry about this approaching revolution. As smart technologies become more
intrusive, they risk undermining our autonomy by suppressing behaviors that someone
somewhere has deemed undesirable. Smart forks inform us that we are eating too fast. Smart
toothbrushes urge us to spend more time brushing our teeth. Smart sensors in our cars can tell if
we drive too fast or brake too suddenly. These devices can give us useful feedback, but they can
also share everything they know about our habits with institutions whose interests are not
identical with our own. Insurance companies already offer significant discounts to drivers who
agree to install smart sensors in order to monitor their driving habits. How long will it be before
customers can't get auto insurance without surrendering to such surveillance? And how long will
it be before the self-tracking of our health (weight, diet, steps taken in a day) graduates from
being a recreational novelty to a virtual requirement?
No matter how creatively we name it, however, the effects of information overload do not add up
to one single debilitating syndrome that we can easily highlight, recoil in horror from, and muster
a simple defense against. A careful review of thirty years of psychological research reveals a
wide variety of effects from information and stimulus overload.
Increased cardiovascular stress. Blood pressure rises, leading to strain on the heart and
other organs. (Ettema)
Weakened vision. Researchers in Japan have documented an alarming decline in visual acuity
as a result of increased exposure to screens. Based on recent trends, a prediction is made that at
some point in the not-too-distant future, virtually everyone in Japan will be nearsighted.
(Ishikawa)
Confusion. "Consumers [are] unable to effectively and efficiently process the information."
(Malhotra, Journal of Consumer Research)
Frustration. "Loud speech [and other background noise] lowers frustration tolerance and
cognitive complexity." (Rotton et al., Journal of Applied Psychology)
Impaired judgment. "As information load increases, integrated decision making first
increases, reaches an optimum, and then decreases." (Streufert et al., Journal of Experimental
Social Psychology)
Decreased benevolence. "A person's response to someone needing assistance decreases in
likelihood as his environment increases its input bombardment." (Korte et al., Journal of
Personality and Social Psychology)
Side Effects
William Davidow, Overconnected: The Promise and Threat of the Internet (2011)
The automobile, which gave us the freedom to go where we wanted when we wanted, and which
created the suburbs, served as the backbone of much of the prosperity boom of the twentieth
century. When it was first invented, no one foresaw its grim side effects: urban sprawl, long
commutes, dependence on unstable political regimes for fossil fuels, pollution, and hollowed-out
cities. And now we have the Internet, whose side effects we are experiencing like
nothing else in the past.
We are becoming mentally obese
Sam Anderson, New York Magazine (2009)
http://nymag.com/news/features/56793/
Our attention crisis is already chewing its hyperactive way through the very foundations of
Western civilization. Google is making us stupid, multitasking is draining our souls, and the
"dumbest generation" is leading us into a dark age of bookless power browsing. Adopting
the Internet as the hub of our work, play, and commerce has been the intellectual
equivalent of adopting corn syrup as the center of our national diet, and we've all
become mentally obese.
Thesis in Tweetform
Nicholas Carr (roughtype.com, 2012)
Virtually you
William Saletan, New York Times (2011)
Humanity is migrating to cyberspace. In the past five years, Americans have doubled the hours
they spend online, exceeding their television time and more than tripling the time they spend
reading newspapers or magazines. Most now play computer or video games regularly, about 13
hours a week on average. By age 21, the average young American has spent at least three times
as many hours playing virtual games as reading. It took humankind eight years to spend 100
million hours building Wikipedia. We now spend at least 200 million hours a week playing World
of Warcraft.
Elias Aboujaoude, a Silicon Valley psychiatrist, finds this alarming. In Virtually You, he argues
that the Internet is unleashing our worst instincts. It connects you to whatever you want:
gambling, overspending, sex with strangers. It speeds transactions, facilitating impulse
purchases and luring you away from the difficulties of real life. It lets you customize your
fantasies and select a date from millions of profiles, sapping your patience for imperfect
partners. It lets you pick congenial news sources and avoid contrary views and information. It
conceals your identity, freeing you to be vitriolic or dishonest. It shields you from detection and
disapproval, emboldening you to download test answers and term papers. It hides the pain of
others, liberating your cruelty in games and forums. It rewards self-promotion on blogs and
Facebook. It teaches you how to induce bulimic vomiting or kill yourself.
In short, everything you thought was good about the Internet (information, access,
personalization) is bad. Aboujaoude isn't shy in his indictment. He links the Internet to
consumer debt, the housing crash, eating disorders, sexually transmitted infections,
psychopathy, racism, terrorism, child sexual abuse, suicide and murder. Everything online
worries him: ads, hyperlinks, even emoticons. The Internet makes us too quarrelsome. It makes
us too like-minded. It makes us work too little. It makes us work too much.
In part, this grim view stems from Aboujaoude's work. He sees patients with online compulsions.
He believes in the Freudian id, a shadowy swirl of infantile impulses, and perceives its
modern incarnation in what he calls the "e-personality," a parallel identity that hijacks your mind
online. In the physical world, your superego restrains your id. But in the virtual world, where you
can instantly fulfill your whims, the narcissism and grandiosity of the e-personality run wild.
To Aboujaoude, the Internet is a mechanical alien, "a new type of machine . . . that can efficiently
prey on our basic instincts." It converts children into bullies almost automatically. It turned
Philip Markoff, the accused Craigslist killer, who committed suicide in jail, into a serial
assailant. Lori Drew, the woman whose online impersonation of a teenage boy supposedly drove
a girl to suicide, seemed normal until the Internet made her fleeting "dark wish . . . take on a life
of its own." Again and again, computers get the blame.
Jane McGonigal, the author of Reality Is Broken, sees the Internet differently. She's a game
designer. To her, the virtual world isn't a foreign contraption. It's our own evolving creation. She
agrees that bad online games can addict people, make them belligerent, distract them from
reality and leave them empty. But this is our fault, not the Internet's. When virtual life brings out
the worst in us, redesign it.
If Aboujaoude is the Internet's Hobbes, McGonigal is its Rousseau. In the rise of multiplayer
games, she sees a happier picture of human nature: a thirst for community, a craving for hard
work and a love of rules. This, she argues, is the essence of games: rules, a challenge and a
shared objective. The trick is to design games that reward good behavior. The Internet's
unprecedented power, its ability to envelop and interact with us, is a blessing, not a threat. We
can build worlds in which nice guys finish first.
The point isn't just to enhance virtual reality. It's to fix the real world, too. McGonigal offers
several examples, some of which she helped create. Chore Wars, an alternate-reality game,
builds positive attitudes toward housework by rewarding virtual housework. Cruel 2 B Kind
invites players to "kill" competitors with smiles or compliments. The Extraordinaries hands out
missions like one in which the player must GPS-tag a defibrillator so its location can be registered
for later use. Groundcrew assigns players to help people with transportation, shopping or
housekeeping.
The premise is that since games motivate us more effectively than real life, making them
altruistic and bringing them into the physical world will promote altruistic behavior. But is this
motivating power transferable? What draws us to virtual worlds, McGonigal notes, is their
carefully designed pleasures and thrilling challenges customized to our strengths. They're
never boring. They let us choose our missions and control our work flow. They make us feel
powerful. They offer a "guarantee of productivity" in every quest. And when we fail, they make
our failure entertaining.
Reality doesnt work this way. Floors need scrubbing. Garbage needs hauling. Invalids need their
bedpans washed. This work isnt designed for your pleasure or stimulation. It just needs to be
done.
McGonigal points to studies suggesting that games that reward socially constructive behavior
promote such behavior in real life. But the only outputs measured by these studies are self-reported
values, self-reported behavior in the real world, and objectively measured behavior in
games. Where's the reliable evidence that this data translates to people's doing more real work?
Projects like Groundcrew, McGonigal concedes, have produced modest if any results so far.
Hundreds of thousands of people play Free Rice, a game designed to feed the hungry, but the
rice comes from advertisers, not players. Thousands sign up every day for Folding@home, a
game to cure diseases, but all these players contribute is processing power on game consoles.
If reality is inherently less attractive than games, then the virtual world won't save the physical
world. It will empty it. Millions of gamers, in McGonigal's words, are "opting out of the bummer
of real life." And they aren't coming back. Halo 3, for example, has become a complete virtual
world, with its own history documented in an online museum and Ken Burns-style videos.
McGonigal calls this war game a model for inspiring mass cooperation. Two years ago, its 15
million players reached a long-sought objective: They killed their 10 billionth alien. "Fresh off one
collective achievement, Halo players were ready to tackle an even more monumental goal,"
McGonigal writes. And what goal did they choose? Feeding the hungry? Clothing the poor? No.
The new goal was to kill 100 billion aliens.
Game designers can't be counted on to arrest this trend. McGonigal says the game industry
wants to help users avoid addiction so that they'll remain functional and keep buying its
products. But we've heard that argument before from the tobacco industry. Addiction, as a
business model, is too addictive to give up. She says Foursquare, a game that rewards you for
going out with friends and checking in at restaurants, promotes sociability. That would be nice,
but the game's Web site devotes a whole section ("Foursquare for Business") to commercial
exploitation.
The Internet isn't heaven. It isn't hell, either. It's just another new world. Like other worlds, it can
be civilized. It will need rules, monitoring and benevolent designers who understand the flaws of
its inhabitants. If Aboujaoude is right about our weakness for virtual vice, we'll need all the
McGonigals we can get.

From: Virtually You: The Dangerous Powers of the e-Personality
(Elias Aboujaoude, 2012)
calories a day; how the research showing that the brain of anorexics shrinks as the disease
progresses was seriously flawed because it didn't include a control group; and how most
religions of the world have incorporated some element of fasting into their rituals, so how bad
can it be.
Liz
Liz, a forty-four-year-old homemaker with no previous psychiatric problems, came to me for help
with out-of-control e-tail expenditures. She had heard a radio ad for a clinical trial we were doing
at Stanford University to test Celexa, a serotonin-based antidepressant, in the treatment of
compulsive shoppers. She hoped she would qualify to be in the study. "For every dress I like, I
have to buy three, each in a different color," she told me at our initial meeting, sounding puzzled
at her own behavior. "Then, for each color I get, I have to buy two additional sizes: what if I gain
weight, what if I lose weight?" Meanwhile, Liz's brick-and-mortar shopping habits remained
relatively reasonable. She made it a point to tell me how she had just missed the annual Labor
Day weekend sale at the big furniture outlet near where she lives, an event she looked forward
to year after year. "I didn't think I could afford it. Not this year. Not with my online spending," she
explained. Then, sounding almost nostalgic for her days of responsible shopping, she added, "I
never lost control at a real store the way I lose control online. Until I discovered Buy.com, I was
actually putting money aside for retirement. What is it about me?"
Or what is it about the Internet? I encouraged Liz to ask herself this question and to explore how
the medium itself might be contributing to her behavior. The idea seemed to resonate. "I guess if
it's online, somehow it doesn't feel real," she said. "Or not as real. It's innocent and fun. Almost
guilt-free, just like a computer game. And how bad can a computer game be?" Quite bad,
unfortunately, as Liz discovered. Her online shopping sprees had already caused her to file for
bankruptcy, which is what finally prompted her to seek help in my clinic. Although her problem
was clearly shopping-based, it wasn't the traditional compulsive buying disorder our study was
recruiting for. Instead of those crowded modern-day cathedrals we call malls, Liz's problem
manifested itself solely in virtual joints that are always open and where parking is never a
problem. Most of our experience with compulsive shopping, and most of what had been
described in the psychiatric literature, involved real acquisitions from real stores. Our screening
questionnaire for the study, with its questions about the role of in-store advertising, the
effect of product display, whether she shopped alone or with friends, and how much effort was
involved in getting to her favorite stores, seemed irrelevant and fell woefully short of
capturing her problem.
Richard
For Richard, a thirty-six-year-old married Human Resources specialist and father of two, it was
the threat of divorce that made him make his first appointment. For years, his gambling habits
were what one might call social, not remarkably different from what his coworkers would
describe in conversations around the water cooler on Monday mornings. Like them, Richard
enjoyed the occasional weekend skiing trip to Reno, a three-hour drive from home, where he and
his wife would sometimes play the slot machines in their hotel lobby well into the night. Beyond
that, however, he never sought it out and never stopped to try his luck at the local Indian casino,
despite a couple of memorable wins in Reno that might have served to draw him back to a
gaming institution. Simply put, when it came to gambling, Richard could take it or leave it.
A single spam e-mail in his in-box, however, led Richard to a virtual casino that became his
undoing. It started with an offer for a free trial of an online Texas Hold 'em game that Richard
took advantage of one night when he was having trouble falling asleep. Before long, he was
waiting for his wife to go to bed in order to log on, eventually using her credit card to do so after
exhausting his promotional account and maxing out his personal credit cards.
Intriguingly, on his morning drive to work, Richard would speed past the Indian casino, counting
the minutes till he could get to his desk and log on to his favorite gambling Web site. "It's a very
different experience than being in a real casino," he explained to me, opening his wallet to show
unused coupons he had received from the Reno resort where he had won big. "See, they're
always mailing me offers for a free stay, free meals, free shows," he said. "None of it seems to
move me. Somehow it feels better online. You're free of inhibitions, whether they're your own or
imposed on you by other people. It's just you and your computer screen, with no one to
disapprove of you or give you dirty looks, and no one to remind you of your responsibilities and
your credit card debt." Richard's behavior caught up with him when reality, in the form of his
wife leaving him with their two kids, encroached on his virtual life.
Raffi
Raffi was a forty-year-old married man who came to see me for what he described as "self-esteem
issues." As the only son of first-generation Italian immigrants, he grew up in a religious
household feeling pampered and doted on. The altar boy who did well in school and excelled in
sports became a successful civil engineer, marrying his high school sweetheart and forming a
family that also included two beautiful teenage daughters. The couple of years before our first
appointment, however, had been very trying. Two years ago, at the age of thirteen, one daughter
was diagnosed with diabetes. Shortly after that, his beloved mother died unexpectedly of a heart
attack. And when the recession hit, Raffi was laid off as part of a restructuring of his company,
becoming unemployed for the first time since finishing college. Multiple job interviews had led nowhere.
The string of family and professional losses happening in a relatively short period of time led to
clinical depression, which further compounded the situation by taking away Raffi's energy and
his motivation to exercise, eat healthily, and take care of his physical appearance. The once
vibrant forty-year-old with the boyish looks and charmed life started seeing himself as fat,
unemployable, and all-around worthless.
His wife of fifteen years, still working and as striking and active as when they first fell in love
during their senior year, tried to be supportive. In his state of extreme vulnerability and low
self-esteem, however, all Raffi could focus on was how she must be having an affair. Not that he had
any reason to suspect infidelity: Any extramarital activities his wife partook in seemed to focus
on attending diabetes support groups and investigating insulin pumps for her child. "But she has
to be having an affair because I have nothing to offer," went his faulty logic. "Just look at her and
look at me."
About ten years before this, in the old days, Raffi might have worked hard to convince himself
she wasn't. He may have sought a "reality check" from a friend or therapist. He may have tried
to seek reassurance from his wife that she still loved him and that she would stay by his side
until the dark clouds had passed and they were able to go back to their normal lives. If all
alternatives failed, he might, out of desperation and as a last resort, decide to hire a private eye.
But this was today, a time for shortcuts and immediate results, so Raffi decided to cut to the
chase and start with the desperate act right away. Only instead of a costly detective, he
installed keyloggers on his wife's laptop.
Keyloggers are a family of easily downloadable, relatively inexpensive software programs that
track (or log) the keys struck on a computer keyboard without the person using it knowing that
his actions are being monitored. They have some defensible applications: They can be used to
control kids' Internet surfing habits and can help maintain an automatic backup for one's typed
data. Increasingly, however, their main use is as spying tools for people who want to snoop on
one another in personal and business affairs, a way to extend one's knowledge of, and
potentially control over, the other person into the virtual realm.
It is now generally accepted that if there is dirt to be found in someone's life, a good place to
start looking for it is in that person's e-mail or text message in-box. Only a clean in-box is proof
of a clean record, so Raffi prayed for a chaste log of e-mails that would exonerate his wife. He
quickly retrieved her passwords and, for six months, scrutinized her outgoing and incoming
messages. He studied her saved contacts and researched the ones he didn't recognize. He
visited every unfamiliar Web site she visited and joined (under a pseudonym) every community
she signed up for. But the only real secret he uncovered involved a surprise fortieth birthday
party she was planning for him behind his back.
Reassuring himself that no affair was taking place, however, hardly brought him relief. Raffi's
problem may have started with negative-territory self-esteem, but he was now suffering from
overwhelming guilt over the sting operation he had conducted against his loving and faithful
wife. How could he not trust her, and how did it become so easy for him to turn into a spy? For
six whole months, he had spied on a woman who had given him no real reason for doubt. What
he had done did not fit the mental image he had of himself, his wife, and his marriage, and Raffi
could not forgive himself for it.
had enough insight into her psychology to realize how, in many ways, this had been the perfect
relationship for her, providing an outlet for libidinal energy without the unbearable stress of
social performance and interpersonal contact. But why wasn't Tom more persistent? What was
his problem? This question plagued Jill and, more than any other consideration, made her insist
on the fateful meeting. Despite his cold feet and her untruths, she was still intent on trying to
bring her virtual romance to a smooth landing in the real world, and I was to help her navigate
this emotional minefield.
Where to meet, what to wear, and in what order to begin correcting the many fallacies that
separated Jill from Tess? My patient struggled mightily with these questions as she prepared to
meet her online love. As it turned out, however, she needn't have worried at all. Or, perhaps
more accurately, she should have been doubly worried, for Tom had a few secrets of his own to
confess. For starters, he was really Ted, and the doctor Jill thought she had landed was a
pharmacist who had always wanted to go to medical school. The two were playing the same
game: Jill was increasing her appeal by pretending to be less socially inhibited, and Tom was
increasing his by elevating himself on the socioeconomic and alpha male ladder. Somewhere in
cyberspace their trajectories collided, with much potential and heartache resulting.
Yet, far from con artists, Tom and Jill were using the virtual world to overcome limitations that
they felt were unjust, and both were helpless in front of a game that had self-perpetuating,
snowballing tendencies built into it, and real rewards associated with it, too. For, regardless of how we might judge it and
regardless of the outcome, it would be impossible to deny the intensity, even the genuineness,
of the pleasurable, ego-boosting emotions/emoticons that the Internet helped generate between
Tess and Tom, whoever they really were.
For them, the face-to-face meeting must have been a terrifying reminder of all the old anxieties,
inhibitions, and baggage that they were able to ignore for a long while online, a reminder of
what mere mortals they still were. In its ultimate philosophical interpretation, this confrontation
with reality was a brutal reminder of death itself, the devastating death of an ideal personality
that they wished they had: freer, sexier and, in their eyes, more worthy of being loved than the
one they were stuck with in real life. And neither he nor she could tolerate it. According to Jill, the
meeting ended with the couple, having shared a string of disappointing revelations, separately
leaving the café where they had met, having decided not to pursue a relationship that was too
burdened by lies to have a true chance at success. That evening, they logged on from their
home computers, found each other online, chatted briefly, then found a reason to log off. From
what I could glean, it was a perfectly pleasant but superficial IM exchange, devoid of
blame-game-type accusations, but also of any big love pronouncements. They would not, the following
day, obsessively check e-mail to see if the other person had sent a sweet note.
Learning involves playing with identities so that the learner has real choices (in developing
the virtual identity) and opportunity to meditate on the relationship between new identities
and old ones.

Self-Knowledge Principle
The virtual world is constructed in such a way that learners learn not only about the domain
but about themselves and their current and potential capacities.

Achievement Principle
For learners of all levels of skill there are intrinsic rewards from the beginning, customized to
each learner's level, effort, and growing mastery and signaling the learner's ongoing
achievements.
Sunday Story #4
Due Sun Oct 11 (before midnight)
Prompt
medium-gee
In a recent issue of the Harvard Business Review, two major thinkers have even argued that the
teenage gamer might signal the skills needed for potential leadership in politics and commerce
for our age. John Seely Brown, one of the earliest and perennial visionaries of the information
age, and Douglas Thomas, a communications professor at the University of Southern California
and himself a champion gamer, have noted that multiplayer online games are "large, complex,
constantly evolving social systems." These researchers have defined five key attributes of what
they term the "gamer disposition":
they are bottom-line oriented (because games have embedded systems of measurement and
assessment);
they understand the power of diversity (because success requires teamwork among those with a
rich mix of talents and abilities);
they thrive on change (because nothing is constant in a game);
they see learning as fun (because the fun of the game lies in learning how to overcome
obstacles); and
they marinate on the edge (because, to succeed, they must absorb radical alternatives and
imbibe innovative strategies for completing tasks).
It is hard to think of five better qualities for success in our always-on digital age.
sources like CNN left standing, we may regret having nowhere to read about recent city council
meetings, church picnics, school fundraisers, and other matters of the kind of community
concern that have long been integral to American civic life.
Homework for Thu Oct 15 (class 8)
The rise of digital journalism
Reading
Consider and mark-up the assigned reading, with pen and/or highlighter.
Excerpts: The rise of digital journalism
Video
Watch the assigned videos on Zaption and post a one-sentence response to each.
How is Social Media Changing Journalism? (2:19)
http://zapt.io/txh3hzdq
Social media is (3:00)
http://zapt.io/t3zhx4pr
Select an article or video from the lesson page links and Tweet its gist or
guiding question
Please include this address in your Tweet: @bhsecinternet
Online discussion
Respond to this prompt on Canvas:
What role do newspapers play in local communities? In democratic
society?
https://canvas.instructure.com/courses/936923/discussion_topics/4017322
Excerpt 1:
Distilling the effects on news, we can separate out three irreversible shifts:
First, in the quantity of information available. When journalism began, reliable information was
scarce; despite the inaccuracy of much that you can find nowadays, news is in glut.
Second big change: the instant alteration of information. Cable and satellite gave us rolling
24-hour news. The internet allows that to be updated, nuanced, corrected continuously from many
different directions. Those who enjoy this say that news has become a process or a
conversation. Those who do not enjoy this say that news is losing at least some of its authority,
clarity and coherence.
The third change is the most profound. The production and consumption of news has been
decoupled from advertising and its previous sources of income. First and foremost, that causes an
economic crisis. The readers and watchers of mass media news have never paid the full cost in
subscription or cover price. In print, advertising is somewhere between half and three-quarters of
the income needed to keep a quality newspaper going. Many newspapers, and particularly local
and regional ones, received the majority of their income from classified ads, usually for jobs,
houses and cars. Those small ads have transferred to the web. Not will transfer, but have;
past tense.
Excerpt 4:
Imagine a common enough event in any large city. It is a Friday evening and a young woman on
her way home after a night out is attacked and robbed. The attack is not fatal, but serious
enough to put her in hospital overnight and to involve the police. By the next morning, her social
network will have been alerted. She might have triggered this by tweeting, posting on Facebook
or another social networking site or simply by sending emails or texts from a phone. Before 24
hours have gone by, anything up to several hundred people will have read and discussed the
details. What does the local paper do? A routine police check which might put a few paragraphs
on the website on the Saturday and in Monday evening's edition. It wasn't a murder after all. In
print, more potential readers but less engagement. Conventional news's disadvantage is not
really lack of speed, although it will be slower. It is also that the formal, one-size-fits-all news
arrives in less satisfying form. For this is both a public event (police, hospital and a possible court
case) and a private one belonging to a network. The private version of the news will be swifter,
richer and more detailed and authentic, carried by voices who know each other. As young
consumers of news say nowadays: "If the news is important enough it will find me."
The Exhilaration of Direct Broadcasting
Andrew Sullivan, The Atlantic (2008)
The simple experience of being able to directly broadcast my own words to readers was an
exhilarating literary liberation... It was obvious from the start that (blogging) was revolutionary.
Every writer since the printing press has longed for a means to publish himself and reach,
instantly, any reader on Earth. Every professional writer has paid some dues waiting for an
editor's nod, or enduring a publisher's incompetence, or being ground to literary dust by a legion
of fact-checkers and copy editors. If you added up the time a writer once had to spend finding an
outlet, impressing editors, sucking up to proprietors, and proofreading edits, you'd find another
lifetime buried in the interstices. But with one click of the Publish Now button, all these troubles
evaporated. Reporters and columnists tend to operate in a relative sanctuary, answerable mainly
to their editors, not readers. For a long time, columns were essentially monologues published to
applause, muffled murmurs, silence, or a distant heckle. I've gotten blowback from pieces before
but in an amorphous, time-delayed, distant way. Now feedback is instant, personal, and brutal.
The Cocktail: Social Networks, Live Searching, and Link-Sharing
Steven Johnson, TIME magazine (2009)
Twitter put three elements together (social networks, live searching and link-sharing) and
created a cocktail that poses what may amount to the most interesting alternative to Google's
near monopoly in searching. At its heart, Google's system is built around the slow, anonymous
accumulation of authority: pages rise to the top of Google's search results according to, in part,
how many links point to them, which tends to favor older pages that have had time to build an
audience. That's a fantastic solution for finding high-quality needles in the immense,
spam-plagued haystack that is the contemporary Web. But it's not a particularly useful solution for
finding out what people are saying right now, the in-the-moment conversation that industry
pioneer John Battelle calls the "super fresh" Web. Even in its toddlerhood, Twitter is a more
efficient supplier of the super-fresh Web than Google. If you're looking for interesting articles or
sites devoted to Kobe Bryant, you search Google. If you're looking for interesting comments from
your extended social network about the three-pointer Kobe just made 30 seconds ago, you go to
Twitter.
was ashamed of and family expectations to live up to. His pattern, even before the crash, was to
dissemble in order not to make trouble for those around him. Once the tragedy happened,
Richtel writes, the intensity with which the family undertook the defense had a self-perpetuating
and escalating force: Reggie denied texting, the family backed him up and Reggie,
never someone to let others down, dug deeper.
Richtel locates not one but two Inspector Javert types: the state trooper who responded to the
crash and almost immediately decided Shaw was lying about not texting ("He kind of goes after
people," an attorney says about him), and a victims' advocate named Terryl Warner, whose own
story is every bit as fascinating and redemptive as Shaw's. The prelude to the trial is fascinating:
Should Reggie be charged with negligence or manslaughter, or nothing at all? Even if texting and
driving is wrong, should he have known that? In Richtel's sensitive account, we come face to
face with the horrible Catch-22 of accident litigation that discourages one party from apologizing
to another, for fear of admitting liability. This apparent standoffishness helped persuade the
prosecutor to make Shaw a test case for texting and driving. Which in turn caused Shaw's family
to accuse the prosecutor of waging a witch hunt. Which only appalled the victims' widows and
families and advocates even more.
Richtel displays admirable empathy for everyone involved but reserves a special place in his
heart for Reggie: impassive and forlorn, monosyllabic but tortured, evasive yet sincere. Shaw's
conversion is depicted with revelatory precision, his epiphany realistically subdued and
painstakingly gradual. "The fight seemed to be going out of him bit by bit," Richtel writes, before
the floodgates opened, his private and public selves beginning to reconcile. By the book's end,
Shaw is a raw nerve, unable to stop confessing in speeches around the country. Even the
relatives of those he killed worry he'll never be able to close the floodgates again.
The most powerful question raised by A Deadly Wandering is a simple one: If we know texting
and driving is so bad for us, why do we still do it? Richtel tries out several analogies to describe
the rush we get from a phone: alcohol, drugs, television, video games, junk food, the
fight-or-flight response to a tap on a shoulder. (The television comparison is weakest, perhaps because
so few of the people in the book agree with it.) My favorite analogy of Richtel's is the slot
machine. Our bodies love the little hit of dopamine we get each time we check our phones for
something, anything. And just like a one-armed bandit, our phones rarely offer terribly
exciting results when we check them. Even so, that doesn't stop us from coming
back for more dozens of times a day: during movies, out at dinner, on our way to wherever
we're going, unsafe at any speed.
Sunday Story #5
Due Sun Oct 18 (before midnight)
Prompt
medium-richtel
It's easy to forget that until recently, people thought of Amazon primarily as an online bookseller. Today,
as it nears its 20th anniversary, it's the Everything Store, a company with around $75 billion in annual
revenue, a $140 billion market value, and few if any discernible limits to its growth. In the past few
months alone, it launched a marketplace in India, opened a website to sell high-end art, introduced
another Kindle reading device and three tablet computers, made plans to announce a set-top box for
televisions, and funded the pilot episodes of more than a dozen TV shows. Amazon's marketplace hosts
the storefronts of countless smaller retailers; Amazon Web Services handles the computer infrastructure
of thousands of technology companies, universities, and government agencies.
Bezos, 49, has a boundless faith in the transformative power of technology. He constantly reinvests
Amazon's profits to improve his existing businesses and explore new ones, often to the consternation of
shareholders. He surprised the world in August when he personally bought the Washington Post
newspaper, saying his blend of optimism, innovation, and long-term orientation could revive it. One day
a week, he moonlights as the head of Blue Origin, his private rocket ship company, which is trying to
lower the cost of space travel.
Amazon has a few well-known peculiarities: the desks are repurposed doors; meetings begin with
everyone in the room sitting in silence as they read a six-page document called a "narrative." It's a
famously demanding place to work. And yet just how the company works, and what Bezos is like as a
person, is difficult to know.
Bezos rarely speaks at conferences and gives interviews only to publicize new products, such as the
latest Kindle Fire. He declined to comment on this account, saying that it's "too early" for a reflective
look at Amazon's history, though he approved many interviews with friends, family, and senior Amazon
executives. John Doerr, the venture capitalist who backed Amazon early and was on its board of
directors for a decade, calls Amazon's Berlin Wall approach to public relations "the Bezos Theory of
Communicating." It's really just a disciplined form of editing. Bezos takes a red pen to press releases,
product descriptions, speeches, and shareholder letters, crossing out anything that doesn't convey a
simple message: You won't find a cheaper, friendlier place to get everything you need than Amazon.
The one unguarded thing about Bezos is his laugh: a pulsing, mirthful bray that he leans into while
craning his neck back. He unleashes it often, even when nothing is obviously funny to anyone else. And
it startles people. "You can't misunderstand it," says Rick Dalzell, Amazon's former chief information
officer, who says Bezos often wields his laugh when others fail to meet his lofty standards. "It's
disarming and punishing. He's punishing you."
Intensity is hardly rare among technology CEOs. Steve Jobs was as famous for his volatility with Apple
(AAPL) subordinates as he was for the clarity of his insights about customers. He fired employees in the
elevator and screamed at underperforming executives. Bill Gates used to throw epic tantrums at
Microsoft (MSFT); Steve Ballmer, his successor, had a propensity for throwing chairs. Andy Grove, the
former CEO of Intel (INTC), was so harsh and intimidating that a subordinate once fainted during a
performance review.
Bezos fits comfortably into this mold. His drive and boldness trump other leadership ideals, such as
consensus building and promoting civility. While he can be charming and capable of great humor in
public, in private he explodes into what some of his underlings call "nutters." A colleague failing to meet
Bezos's exacting standards will set off a nutter. If an employee does not have the right answers or tries
to bluff, or takes credit for someone else's work, or exhibits a whiff of internal politics, uncertainty, or
frailty in the heat of battle, a blood vessel in Bezos's forehead bulges and his filter falls away. He's
capable of hyperbole and harshness in these moments and over the years has delivered some
devastating rebukes. Among his greatest hits, collected and relayed by Amazon veterans:
"Are you lazy or just incompetent?"
"I'm sorry, did I take my stupid pills today?"
"Do I need to go down and get the certificate that says I'm CEO of the company to get you to stop
challenging me on this?"
"Are you trying to take credit for something you had nothing to do with?"
the company had expanded into selling baby wipes, infant formula, clothes, strollers, and other survival
gear for new parents. In an October 2010 Bloomberg Businessweek cover story, the Quidsi founders
admitted to studying Amazon closely and idolizing Bezos. In private conversations, they referred to
Bezos as "sensei."
In 2009, Jeff Blackburn, Amazon's senior vice president for business development, ominously informed
the Quidsi co-founders over an introductory lunch that the e-commerce giant was getting ready to
invest in the category and that the startup should think seriously about selling to Amazon. According to
conversations with insiders at both companies, Lore and Bharara replied that they wanted to remain
private and build an independent company. Blackburn told the Quidsi founders that they should call him
if they ever reconsidered.
Soon after, Quidsi noticed Amazon dropping prices up to 30 percent on diapers and other baby
products. As an experiment, Quidsi executives manipulated their prices and then watched as Amazon's
website changed its prices accordingly. Amazon's pricing bots, software that carefully monitors other
companies' prices and adjusts Amazon's to match, were tracking Diapers.com.
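The mechanism described here is simple to sketch. The following is a minimal, hypothetical illustration of a price-matching bot; the function name, the pricing rule, and the cost floor are illustrative assumptions, not Amazon's actual (non-public) systems:

```python
# Hypothetical sketch of a price-matching bot: poll a competitor's listed
# price and reprice our own listing to match, subject to a cost floor.
# All names and the rule itself are illustrative, not Amazon's code.

def match_price(our_price: float, competitor_price: float,
                floor: float, match_ratio: float = 1.0) -> float:
    """Return the new listing price: track the competitor's price
    (optionally scaled by match_ratio), but never drop below the floor."""
    target = round(competitor_price * match_ratio, 2)
    return max(target, floor)

# Example: a competitor cuts a case of diapers from $44.99 to $31.49;
# our assumed cost floor is $25.00, so we match at $31.49.
print(match_price(our_price=44.99, competitor_price=31.49, floor=25.00))  # 31.49

# If the competitor prices below our floor, the bot stops at the floor.
print(match_price(44.99, 19.99, floor=25.00))  # 25.0
```

Run on a schedule against scraped competitor prices, a loop like this produces exactly the behavior Quidsi observed: every manipulation of Diapers.com's prices was mirrored shortly afterward.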
At first, Quidsi fared well despite Amazon's assault. Rather than attempting to match Amazon's low
prices, it capitalized on the strength of its brand and continued to reap the benefits of strong word of
mouth. After a while, the heated competition took a toll on the company. Quidsi had grown from nothing
to $300 million in annual sales in just a few years, but with Amazon focusing on the category, revenue
growth started to slow. Venture capitalists were reluctant to furnish Quidsi with additional capital, and
the company was not yet mature enough for an initial public offering. For the first time, Lore and
Bharara had to think about selling.
Meanwhile, Wal-Mart Stores (WMT) was looking for ways to make up ground it had lost to Amazon and
was shaking up its online division. Wal-Mart's then-vice chairman, Eduardo Castro-Wright, took over
Walmart.com, and one of his first calls was to Lore to initiate acquisition talks. Lore said Quidsi wanted
to get close to "Zappos money": more than $500 million, plus additional bonuses spread out over many
years tied to performance goals. Wal-Mart agreed in principle and started due diligence. Mike Duke, Wal-Mart's CEO, visited a Diapers.com fulfillment center in New Jersey.
The formal offer from Bentonville was around $450 million, nowhere near Zappos money.
So Lore picked up the phone and called Amazon. In September 2010, he and Bharara traveled to Seattle
to discuss the possibility of Amazon acquiring Quidsi. While they were in that early morning meeting
with Bezos, Amazon sent out a press release introducing a service called Amazon Mom. It was a sweet
deal for new parents: They could get up to a year's worth of free two-day Prime shipping (a program
that usually cost $79 a year). Customers also could get an additional 30 percent off the already-discounted diapers if they signed up for regular monthly deliveries as part of a service called Subscribe
and Save. Back in New Jersey, Quidsi employees desperately tried to call their founders to discuss a
public response to Amazon Mom. The pair couldn't be reached: They were still in the meeting at
Amazon's headquarters.
Quidsi could now taste its own blood. At one point, Quidsi executives took what they knew about
shipping rates, factored in Procter & Gamble's (PG) wholesale prices, and calculated that Amazon was
on track to lose $100 million over three months in the diaper category alone.
Inside Amazon, Bezos rationalized these moves as being in the company's long-term interest of
delighting its customers and building its consumables business. He told Peter Krawiec, the business
development vice president, not to spend more than a certain amount to buy Quidsi but to make sure
that Amazon did not, under any circumstance, lose the deal to Wal-Mart.
As a result of Bezos's meeting with Lore and Bharara, Amazon had an exclusive three-week period to
study Quidsi's financial results and come up with an offer. At the end of that period, Krawiec offered
Quidsi $540 million and called the number a "stretch price." Knowing that Wal-Mart hovered on the
sidelines, he gave Quidsi a window of 48 hours to respond and made it clear that if the founders didn't
take the offer, the Amazon Mom onslaught would continue.
Wal-Mart should have had a natural advantage. Jim Breyer, the managing partner at one of Quidsi's
venture capital backers, Accel, was also on the Wal-Mart board. But Wal-Mart was caught flat-footed. By
the time it increased its offer to $600 million, Quidsi had tentatively accepted the Amazon term sheet.
Duke left phone messages for several Quidsi board members, imploring them not to sell to Amazon.
Those messages were then transcribed and sent to Seattle, because Amazon had stipulated in the
preliminary term sheet that Quidsi turn over information about any subsequent offer.
When Bezos's lieutenants learned of Wal-Mart's counterbid, they ratcheted up the pressure, telling the
Quidsi founders that "sensei" was such a furious competitor that he would drive diaper prices to zero if
they sold to Bentonville. The Quidsi board convened to discuss the possibility of letting the Amazon deal
expire and then resuming negotiations with Wal-Mart. But by then, Bezos's Khrushchev-like willingness
to use the thermonuclear option had had its intended effect. The Quidsi executives stuck with Amazon,
largely out of fear. The deal was announced on Nov. 8, 2010.
Blackburn, Amazon's mergers-and-acquisitions chief, said in a 2012 interview that everything the
company did in the diapers market was planned beforehand and was unrelated to competing with
Quidsi. He said that Quidsi was similar to shoe retailer Zappos, which Amazon acquired in 2009: a
stubbornly independent company building an extremely flexible franchise.
The Federal Trade Commission scrutinized the acquisition for four and a half months, going beyond the
standard review to the second-request phase, where companies must provide more information about a
transaction. The deal raised a host of red flags, such as the elimination of a major player in a
competitive category, according to an FTC official familiar with the review. The merger was eventually
approved, in part because it did not result in a monopoly. Costco Wholesale (COST), Target, and plenty
of other companies sold diapers online and off.
Bezos won, neutralizing an incipient competitor and filling another set of shelves in his Everything Store.
Quidsi soon expanded into pet supplies with Wag.com and toys with Yoyo.com. Wal-Mart missed the
chance to acquire a talented team of entrepreneurs who'd gone toe to toe with Amazon in a new
product category. And insiders were once again left marveling at how Bezos had engineered another
acquisition by driving his target off a cliff.
The people who do well at Amazon are often those who thrive in an adversarial atmosphere with almost
constant friction. Bezos abhors what he calls "social cohesion," the natural impulse to seek consensus.
He'd rather his minions battle it out backed by numbers and passion, and he has codified this approach
in one of Amazon's 14 leadership principles, the company's highly prized values that are often
discussed and inculcated into new hires:
Have Backbone; Disagree and Commit
Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is
uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for
the sake of social cohesion. Once a decision is determined, they commit wholly.
Some employees love this confrontational culture and find they can't work effectively anywhere else.
"Everybody knows how hard it is and chooses to be there," says Faisal Masud, who spent five years in
the retail business. "You are learning constantly, and the pace of innovation is thrilling. I filed patents; I
innovated. There is a fierce competitiveness in everything you do." The professional networking site
LinkedIn (LNKD) is full of "boomerangs," Amazon-speak for executives who left the company and then
returned.
But other alumni call Amazon's internal environment a "gladiator culture" and wouldn't think of
returning. Many last less than two years. "It's a weird mix of a startup that is trying to be
supercorporate and a corporation that is trying hard to still be a startup," says Jenny Dibble, who was a
marketing manager there for five months in 2011. She found her bosses were unreceptive to her ideas
about using social media and that the long hours were incompatible with raising a family. "It was not a
friendly environment," she says. Even leaving Amazon can be a combative process: the company is not
above sending letters threatening legal action if an employee takes a similar job at a competitor. Masud,
who left Amazon for EBay (EBAY) in 2010, received such a threat. (EBay resolved the matter privately.)
Employee churn doesn't seem to damage Amazon, though. The company, aided by the appeal of its
steadily increasing stock price, is an accomplished recruiter of talent. In its second-quarter earnings
report in July, Amazon said its ranks had swelled to 97,000 full-time and part-time employees, up 40
percent from the year before. New hires are given an industry-average base salary, a signing bonus
spread over two years, and a grant of restricted stock units spread over four years. Unlike Google
(GOOG) and Microsoft, whose stock grants vest evenly year by year, Amazon backloads the vesting
toward the end of the four-year period. Employees typically get 5 percent of their shares at the end of
their first year, 15 percent their second year, and then 20 percent every six months over the final two
years. Ensuing grants vest over two years and are also backloaded to ensure that employees keep
working hard and are never inclined to coast.
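The backloaded schedule described above is easy to tabulate. The percentages below come directly from the text; the helper function is purely illustrative (individual grants and dates will vary):

```python
# Cumulative vesting under the backloaded four-year schedule described in
# the text: 5% at the end of year one, 15% at the end of year two, then
# 20% every six months over the final two years. Illustrative only.

VESTING_EVENTS = [
    (12, 5),   # (months from hire, percent of grant vesting at that point)
    (24, 15),
    (30, 20),
    (36, 20),
    (42, 20),
    (48, 20),
]

def vested_percent(months_employed: int) -> int:
    """Total percentage of the initial grant vested after a given tenure."""
    return sum(pct for month, pct in VESTING_EVENTS if months_employed >= month)

print(vested_percent(24))  # 20  -> only a fifth of the grant after two years
print(vested_percent(48))  # 100 -> fully vested at four years
```

The contrast with an even schedule is the point: under annual 25% vesting an employee would hold half the grant at the two-year mark, while here they hold only 20 percent, which is exactly what discourages early departures.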
Managers in departments of 50 people or more are often required to "top-grade" their subordinates on
a curve and must dismiss the least effective performers. As a result, many Amazon employees live in
perpetual fear; those who manage to get a positive review are often genuinely surprised.
There are few perks or unexpected performance bonuses at Amazon, though the company is more
generous than it was in the 1990s, when Bezos refused to give employees city bus passes because he
didn't want to give them any reason to rush out of the office to catch the last bus of the day. Employees
now get cards that entitle them to free rides on Seattle's regional transit system. Parking at the
company's offices in South Lake Union costs $220 a month, and Amazon reimburses employees for
$180. Conference room tables are a collection of blond-wood door-desks shoved together side by side.
The vending machines take credit cards, and food in the company cafeterias is not subsidized. New
hires get a backpack with a power adapter, a laptop dock, and orientation materials. When they resign,
they're asked to hand in all that equipment, including the backpack. These practices are also
embedded in the sacrosanct leadership principles:
Frugality
We try not to spend money on things that don't matter to customers. Frugality breeds resourcefulness,
self-sufficiency, and invention. There are no extra points for head count, budget size, or fixed expense.
Bezos molded Amazon's business principles through two decades of surviving in the thin atmosphere of
low profit margins and fierce skepticism from the outside world. In a way, the entire company is built
around his brain: an amplification machine meant to disseminate his ingenuity and drive across the
greatest possible radius. "It's scaffolding to magnify the thinking embodied by Jeff," says Wilke, the
senior vice president for North American retail. "Jeff was learning as he went along. He learned things
from each of us who had expertise and incorporated the best pieces into his mental model. Now
everyone is expected to think as much as they can like Jeff."
Bezos runs the final meetings in the biannual operating reviews, dubbed OP1 (held over the summer)
and OP2 (after the holidays). Teams work intensely for months preparing for their sessions with the CEO,
drawing up six-page narratives that spell out their plans for the year ahead. A few years ago, the
company refined this process further to make the narratives more easily digestible for Bezos and other
members of his senior leadership group, called the S Team, who cycle through many topics during these
reviews. Now every narrative includes at the top of the page a list of a few rules, called "tenets," that
guide the group's hard decisions and allow it to move fast, without constant supervision.
Once a week, usually on Tuesdays, various departments meet with their managers to review
spreadsheets of data important to their business. Customer anecdotes have no place at these meetings;
numbers alone must demonstrate what's working and what's broken, how customers are behaving, and
ultimately how well the company overall is performing. "This is what, for employees, is so absolutely
scary and impressive about the executive team. They force you to look at the numbers and answer
every single question about why specific things happened," says Dave Cotter, who spent four years at
Amazon as a general manager in various divisions. "Because Amazon has so much volume, it's a way to
make very quick decisions and not get into subjective debates. The data doesn't lie."
The metrics meetings culminate every Wednesday with the Weekly Business Review, one of the
company's most important rituals, which is run by Wilke. Sixty managers in the retail business gather in
one room to discuss their departments, share data about defects and inventory turns, and talk about
forecasts and the complex interactions between different parts of the company.
Bezos does not attend these meetings. He spends more time on Amazons newer businesses, such as
Amazon Web Services, the streaming video and music initiatives, and, in particular, the Kindle and
Kindle Fire efforts. (Executives joke darkly that employees can't even pass gas in the Kindle buildings
without the CEO's permission.) But Bezos can always make his presence felt anywhere in the company.
After the lubricant fracas of 2010, for example, e-mail marketing fell squarely under his purview. He
carefully monitored efforts to filter the kinds of messages that could be sent to customers, and he tried
to think about the challenge of e-mail outreach in fresh ways. Then, in late 2011, he had what he
considered to be a significant new idea.
Bezos is a fan of e-mail newsletters such as veryshortlist.com, a daily assortment of cultural tidbits from
the Web, and Cool Tools, a compendium of technology tips and product reviews written by Kevin Kelly, a
co-founder of Wired. Both are short, well-written, and informative. Perhaps, Bezos reasoned, Amazon
should be sending a single well-crafted e-mail every week, a short digital magazine, instead of a
succession of bland, algorithm-generated marketing pitches. He asked Shure, the marketing vice
president, to explore the idea.
From late 2011 through early 2012, Shure's group presented a variety of concepts to Bezos. One version
revolved around celebrity Q&As, another highlighted interesting historical facts about products. The
project never progressed (it fared poorly in tests with customers), and several participants remember
the process as being particularly excruciating. In one meeting, Bezos quietly thumbed through the
mock-ups as everyone waited in silence. "Here's the problem with this," Bezos said, according to people
who were present. "I'm already bored." He liked the last concept the most, which suggested profiling a
selection of products that were suddenly hot, such as Guy Fawkes masks and CDs by the Grammy-winning British singer Adele. "But the headlines need to be punchier," he told the group, which included
the writers of the material. "And some of this is just bad writing. If you were doing this as a blogger, you
would starve."
Finally he turned his attention to Shure, who, like so many other marketing vice presidents throughout
Amazon's history, was an easy target.
"Steve, why haven't I seen anything on this in three months?"
"Well, I had to find an editor and work through mock-ups."
"This is developing too slow. Do you care about this?"
"Yes, Jeff, we care."
"Strip the design down, it's too complicated. Also, it needs to move faster!"
Jeff Bezos grew up in a tight-knit family, with two deeply involved and caring parents, Jackie and Mike,
and two close younger siblings, Christina and Mark. Jackie, who gave birth to Bezos just two weeks after
she turned 17, was a towering figure of authority to Bezos and his friends. Mike, also known as Miguel,
was a Cuban immigrant who arrived in America at age 18, alone and penniless, knowing only one
English word: "hamburger." Through grit and determination, he got a college education and climbed
through the ranks of Exxon (XOM) as a petroleum engineer and manager, in a career that took the
Bezos family to Houston, Pensacola, Fla., Miami, and, after Bezos left for college, cities in Europe and
South America.
Yet for a brief period early in his life, before this ordinary if peripatetic childhood, Bezos lived alone with
his mother and grandparents. And before that, he lived with his mother and his biological father, a man
named Ted Jorgensen. Bezos has said the only time he thinks about Jorgensen is when he's filling out a
medical form that asks for his family history. He told Wired in 1999 that he'd never met the man. Strictly
speaking, that's not true: Bezos last saw him when he was 3 years old. And while Bezos's professional
life has been closely studied and celebrated over the last two decades, this story has never been told.
Jorgensen was a circus performer and one of Albuquerque's best unicyclists in the 1960s. A newspaper
photograph taken in 1961, when he was 16, shows him standing on the pedals of his unicycle facing
backward, one hand on the seat, the other splayed theatrically to the side, his expression tense with
concentration. The caption says he was awarded "most versatile rider" in the local unicycle club.
That year, Jorgensen and a half-dozen other riders traveled the country playing unicycle polo in a team
managed by Lloyd Smith, the owner of a local bike shop. Jorgensen's team was victorious in places such
as Newport Beach, Calif., and Boulder, Colo. The Albuquerque Tribune has an account of the event: Four
hundred people showed up at a shopping center parking lot in freezing weather to watch the teams
swivel around in four inches of snow wielding three-foot-long plastic mallets in pursuit of a six-inch
rubber ball. Jorgensen's team swept a doubleheader, 3 to 2 and 6 to 5.
In 1963, Jorgensens troupe resurfaced as the Unicycle Wranglers, touring county fairs, sporting events,
and circuses. They square-danced, did the jitterbug and the twist, skipped rope, and rode on a high
wire. The group practiced constantly, rehearsing three times a week at Smith's shop and taking dance
classes twice a week. "It's like balancing on greased lightning and dancing all at the same time," one
member told the Tribune. When the Ringling Brothers and Barnum & Bailey Circus came to town, the
Wranglers performed under the big top, and in the spring of 1965 they performed in eight local shows of
the Rude Brothers Circus.
Jorgensen was born in 1944 in Chicago to a family of Baptists. His father moved the family to
Albuquerque when Jorgensen and his younger brother, Gordon, were in elementary school. Ted's father
took a job as a purchasing agent at Sandia Base (today's Sandia National Laboratories), then the largest
nuclear weapons installation in the country, handling the procurement of supplies at the base.
In high school, Jorgensen started dating Jacklyn Gise, a girl two years his junior whose father also
worked at Sandia Base. Their dads knew each other. Her father, Lawrence Preston Gise, known to
friends as Preston and to his family as "Pop," ran the local office of the U.S. Atomic Energy Commission,
the federal agency that managed the nuclear weapons program after Harry S. Truman took it from the
military following World War II.
Jorgensen was 18 and finishing his senior year in high school when Gise became pregnant. She was a
sophomore. They were in love and decided to get married. Her parents gave them money to fly to
Juárez, Mexico, for a ceremony. A few months later, on July 19, 1963, they repeated their vows at the
Gises' house. Because she was underage, both of their mothers signed the application for a marriage
license. The baby was born on Jan. 12, 1964. They named him Jeffrey Preston Jorgensen.
The new parents rented an apartment in Albuquerque's Southeast Heights neighborhood. Jackie finished
high school, and during the day, her mother, Mattie, took care of the baby. Life was difficult. Jorgensen
was perpetually broke, and they had only one car, his cream-colored '55 Chevy. Belonging to a unicycle
troupe didn't pay much. The Wranglers divided their fees among all members, with Smith taking a
generous cut off the top. Eventually, Jorgensen got a $1.25-an-hour job at the Globe Department Store,
part of Walgreens' (WAG) short-lived foray into the promising discount retail market being pioneered at
the time by Kmart (SHLD) and Wal-Mart. Occasionally Jackie brought the baby to the store to visit.
Their marriage was probably doomed from the start. Jorgensen had a habit of drinking too much and
staying out too late. He was an inattentive dad and husband. Jackie's father tried to help him; he paid
his son-in-law's tuition at the University of New Mexico, but Jorgensen dropped out after a few
semesters. Preston Gise then tried to get Jorgensen a job with the New Mexico State Police, but
Jorgensen wasn't interested.
Eventually, Jackie took the child and moved back in with her parents at Sandia. In June 1965, when the
baby was 17 months old, she filed for divorce. The court ordered Ted to pay $40 a month in child
support. Court records indicate that his income at the time was $180 a month. Over the next few years,
he visited his son occasionally but missed many support payments.
Then Jackie took a job working in the bookkeeping department of the Bank of New Mexico and met
Miguel Bezos, who was working the overnight shift while he attended the University of Albuquerque. On
several occasions when Ted was visiting his son, Bezos would be there, and they avoided each other.
But Ted asked around and heard he was a good man.
In 1968, Jackie called Ted and told him she was marrying Miguel and moving to Houston. She told him
he could stop paying child support and asked him not to interfere in their lives. Her father confronted
him and made him promise to stay away. Jackie also wanted to give Jeffrey her new husband's surname
and let Miguel adopt him. Ted's permission was needed for the adoption. After thinking it over and
reasoning that the boy would probably have a better life as the son of Jackie and her new husband, Ted
obliged. After a few years, he lost track of the family and then forgot their last name.
If you were to search the world for the polar opposite of sprawling, secretive, powerful Amazon, you
might arrive at a small bike shop in Glendale, Ariz., just north of Phoenix. It's called the Roadrunner Bike
Center. It sits in a shoebox-shape space in an ordinary shopping center next to the Hot Cutz Salon & Spa
and down a ways from a Walmart grocery store. It offers a small selection of premium BMX and dirt
bikes from companies such as Giant, Haro, and Redline, brands that carefully select their retail partners
and generally do not sell to websites or discount outlets. "The old guy that runs this is always there and
you can tell he loves to fix and sell bikes," writes one customer in a typically favorable online review of
the store. "When you buy from him he will take care of you. He also is the cheapest place I have ever
taken a bike for a service, I think sometimes he runs a special for $30! That's insane!"
A red poster board with the hand-scrawled words "Layaway for the holidays!" leans against the window.
Hanging on a wall next to the front counter, there's a framed newspaper clipping with a photograph of a
16-year-old boy with a flattop haircut, standing up on the pedals of his unicycle, one hand on the seat
and the other flared daringly out to the side.
I found Ted Jorgensen, Jeff Bezos's biological father, behind the counter of his bike shop in late 2012. I'd
considered a number of ways he might react to my unannounced appearance but gave a very low
probability to the likelihood of what actually happened: He had no idea what I was talking about.
Jorgensen said he didn't know who Jeff Bezos was and was baffled by my suggestion that he was the
father of this famous CEO.
I mentioned Jacklyn Gise and Jeffrey, the son they had during their brief teenage marriage. The old
man's face flushed with recognition. "Is he still alive?" he asked, not yet fully comprehending.
"Your son is one of the most successful men on the planet," I told him. I showed him some Internet
photographs on my smartphone, and for the first time in 45 years, Jorgensen saw his biological son. His
eyes filled with sorrow and disbelief.
I took Jorgensen and his wife, Linda, to a steak dinner, and his story tumbled out. When the Bezos
family moved from Albuquerque to Houston in 1968 and Jorgensen promised Jackie and her father that
he would stay out of their lives, he remained in Albuquerque. He performed with his troupe and took
odd jobs. He drove an ambulance and worked as an installer for Western Electric, a local utility.
In his twenties, he moved to Hollywood to help Smith, the Wranglers' manager, start a new bike shop,
and then to Tucson, looking for work. In 1972 he was mugged outside a convenience store after buying
cigarettes. The assailants hit him with a two-by-four and broke his jaw in 10 places.
[Photo: Ted Jorgensen today]
Then he finally started to take control of his life. In 1974 he moved to Phoenix and quit drinking. Six
years later he put together every cent he had and bought the bike shop from its owner. He's run the
store ever since, moving it several times, eventually settling into its current location on the northern
edge of the Phoenix metropolitan area, adjacent to the New River Mountains. He met Linda at the bike
shop. She stood him up on their first date but showed up the second time he asked her out. They've
been married for 25 years. Linda says they've been talking privately about Jeffrey and replaying Ted's
youthful mistakes for years.
Ted has no other children; Linda has four sons from a previous marriage. All are close with their
stepfather, especially the youngest, Darin Fala, who spent the most time with him growing up. But Ted
never told them that he had another child. He says he was sure he would never see or hear anything
about his son again, so what was the point?
Ted is 69 now and has heart problems, emphysema, and an aversion to the idea of retirement. "I don't
want to sit at home and rot in front of the television," he says. He's friendly and, his wife says, deeply
compassionate. The store is less than 30 miles from four Amazon fulfillment centers, but if he ever saw
Bezos on television or read an article about Amazon, he didn't make the connection. "I didn't know
where he was, if he had a good job or not, or if he was alive or dead," he says. The face of his child,
frozen in infancy, has been stuck in his mind for nearly half a century.
He says he always wanted to reconnect with Jeffrey, whatever his occupation or station, and seems
ashamed that he agreed to stay out of his life all those years ago. "I wasn't a good father or a
husband," he says. "It was really all my fault. I don't blame Jackie at all."
When I left Ted and his wife after dinner, they were still in shock and decided that they weren't going to
tell Linda's sons. The story seemed too far-fetched. But a few months later, in early 2013, I got a phone
call from Fala, a senior project manager at Honeywell (HON) who also lives in Phoenix. Ted, Fala said,
had called a family meeting the previous Saturday afternoon. "I bet he's going to tell us he has a son or
daughter out there," Fala's wife had guessed, correctly.
The gathering was wrenching. "My wife calls me unemotional because she has never seen me cry," Fala
says. "Ted is the same way. Saturday was the most emotion I've ever seen out of him, as far as sadness
and regret. It was overwhelming." Ted decided he wanted to reach out to the Bezos family and
reestablish contact and asked Fala to help him craft letters to Bezos and Jackie.
Curious about Bezos, Fala had watched online clips of the Amazon CEO being interviewed, including one
from The Daily Show with Jon Stewart. He was startled to hear Bezos's laugh. He'd heard it before. He
grew up listening to it. "He has Ted's laugh!" Fala said in amazement. "It's almost exact."
Debate
Can social media solve real-world problems? (Evgeny Morozov vs. Steven Johnson, New Republic, 2013)
Full Text: Keen vs. Weinberger (WSJ, 2007)
The Book Club: True Enough, Entry 1 of 4 (Farhad Manjoo vs. Steven Johnson, Slate, 2008)
But enough about Morozov ignoring my words. The most revealing omission in the review revolves
around his words. Future Perfect has a chapter called "What Does the Internet Want?," which Morozov
predictably enough invokes as a telltale sign of Internet-centrism:
The totalizers would happily follow Johnson in seeking answers to questions such as "So what does the
Internet want?," as if the Internet were a living thing with its own agenda and its own rights.
The problem with this diagnosis is that the chapter is explicitly about the difficulty of imagining the
Internet as a unified positive force. It points out that decentralized architectures can be used to build
terrorist networks as readily as crowdfunded charity initiatives. Consider this crucial passage from the
chapter:
Perhaps it was a mistake to treat the Internet as a deterministic one-directional force for either global
liberation or oppression, for cosmopolitanism or xenophobia. The reality is that the Internet will enable
all of these forces, as well as many others, simultaneously. But as far as laws of the Internet go, this is
all we know. Which of the numerous forces unleashed by the Web will prevail in a particular social and
political context is impossible to tell without first getting a thorough theoretical understanding of that
context.
Youd think that Morozov would want to mention that passage from What Does The Internet Want?"if
only because the words were written by Morozov himself, in his earlier book Net Delusion. I quoted them
at a very prominent early place in the chapter, precisely to make it clear that easy generalizations
about the logic of the Internet were prone to failure. The whole chapter is a meditation on avoiding
the pitfalls of naive tech essentialism; its answer to the question what does the Internet want is: a lot
of contradictory things. But Morozov is so keen to denounce Internet-centrism that he doesnt even
seem to notice when his own words are being invoked enthusiastically as a critique of Internet-centrism.
Now, it would be perfectly reasonable to argue that my critique doesn't go far enough, or that I've misinterpreted Morozov's position, or invoked it in bad faith. But instead, Morozov just charges ahead as if I haven't engaged with his argument at all.
The argument that Morozov wants to make here is that we Internet-centrists (a group that apparently also includes Clay Shirky and Yochai Benkler) begin with our one true devotion to TCP/IP, and then conveniently backfill a history of lower-tech antecedents in order to justify our love, as Madonna might say. You wouldn't suspect it from Morozov's review, but the discussion of the Internet makes up only a fraction of Future Perfect's content. He does manage to allude to my section on participatory budgeting in Brazil, but the book also includes long riffs on the prize-backed challenges offered by the Royal Society of the Arts in the mid-1700s; the "democracy vouchers" solution for campaign finance; the extraordinary rise in aviation safety over the past thirty years; the internal organization of corporations; childhood malnutrition in Vietnam, and so on.
These stories hail from very different historical and conceptual frames, but they share two important
qualities: they are all directly related to the peer progressive worldview, and they have nothing to do
with the Internet, or computers in general.
I can understand why Morozov wants to see Internet-centrism in my work: He's built his career around debunking that belief system, after all. And yes, I'm glad the Internet and the Web were invented; I think that the world is, on the whole, better off for their existence. I would be surprised if Morozov doesn't feel that way himself. But Future Perfect goes to great lengths to separate the promise of peer networks from some naive faith in Internet liberation. The main lines of its argument arose in part out of two book-length studies of peer collaboration in the 18th and 19th centuries: The Ghost Map and The Invention of Air. My last book, Where Good Ideas Come From, ended with a survey of hundreds of peer-produced innovations from the Renaissance to today. The deep roots of the idea date back to reading Jane Jacobs on the organized complexity of the city in my twenties, which ultimately led to my arguments for decentralization in my 2001 book Emergence. I'm giving Morozov the benefit of the doubt that he just hasn't bothered to read any of those books, since he doesn't mention them anywhere in the review. But if you added up all the words I've published on peer network architecture, I wager somewhere around 90 percent of them are devoted to pre-digital forms of collaboration: in the commonplace book or the 18th-century coffeehouse, or urban neighborhood formation, or the traditions of academic peer review, or in the guild systems of Renaissance Florence. If Morozov were only a little less obsessed with the Internet himself, he might have some very interesting things to say about that history. Instead, he has decided to reduce that diverse web of influences into a story of single-minded zealotry. He's like a vampire slayer who has to keep planting capes and plastic fangs on his victims to stay in business.
The point I tried to make explicit in Future Perfect is one that I've been implicitly making for more than a decade now: that peer collaboration is an ancient tradition, with a history as rich and illustrious as the more commonly celebrated histories of states or markets. The Internet happens to be the most visible recent achievement in that tradition, but it is hardly the basis of my worldview. And there is nothing in Future Perfect (or any of these other works) that claims that decentralized, peer-network approaches will always outperform top-down approaches. It's simply a question of emphasis. Liberals can still believe in the power and utility of markets, even if they tend to emphasize big government solutions; all but the most radical libertarians think that there are some important roles for government in our lives. Peer progressives are no different. We don't think that everything in modern life should be re-engineered to follow the logic of the Internet. We just think that society has long benefited from non-market forms of open collaboration, and that there aren't enough voices in the current political conversation reminding us of those benefits. For peer progressives, the Internet is a case study and a role model, yes, but hardly a deity. We would be making the same argument had the Internet never been invented.
EVGENY MOROZOV:
In his response, Steven Johnson raises four main objections to my review:
Objection I: Johnson claims that he is not an Internet-centrist because 1) Future Perfect went "out of its
way to avoid ...naive techno-determinism" and 2) one of the book's chapters is about "the difficulty of
imagining the Internet as a unified positive force."
Objection II: Johnson claims that his book is not really about "the Internet," as it also discusses "Royal
Society of the Arts in the mid-1700s; the 'democracy vouchers' solution for campaign finance; the
extraordinary rise in aviation safety over the past thirty years; the internal organization of corporations;
childhood malnutrition in Vietnam"; these stories "have nothing to do with the Internet."
Objection III: Johnson claims that he is, in fact, making an effort to engage with political philosophy—as evidenced by his discussion of the limitations of direct democracy.
Objection IV: Johnson claims that I mischaracterize his position on New York's 311 service.
All four objections lead me to conclude that Johnson doesn't understand the substance of my critique.
Let's begin with his sly conflation of Internet-centrism with what he dubs "naive techno-determinism."
As I state in the review's second paragraph, Internet-centrists have no problem acknowledging that the
"Internet" can be deployed to do bad, evil things. The kind of naive determinism that views the
Internet as a "positive force" and that Johnson seeks to distance himself from has nothing to do with
Internet-centrism; it's a feature more commonly attributed to cyber-utopianism, as I clearly state at the
very beginning of the review. That Johnson is not a starry-eyed techno-determinist doesn't make him
less of an Internet-centrist.
What should we make of Johnson's question—on page 120—of "What does the Internet want?" It's a question that he derives from Kevin Kelly's question, "What does technology want?"; both Kelly and Johnson assume that there is some kind of neat intellectual and practical coherence to these two ideas—a view that I vehemently oppose. This question does allow us to make the utopian/centrist distinction even sharper: An Internet-centrist asks the question "What does the Internet want?" as if that question made sense. An Internet-utopian doesn't even ask that question, assuming that the Internet wants democracy and freedom. I don't know if Johnson is an Internet utopian but he is certainly an Internet-centrist. So while it's clear Objection I doesn't stand, let us still examine Johnson's answer:
So what does the Internet want? It wants to lower the costs for creating and sharing information. The
notion sounds unimpeachable when you phrase it like that, until you realize all the strange places that
kind of affordance ultimately leads to. The Internet wants to breed algorithms that can execute
thousands of financial transactions per minute, and it wants to disseminate the #occupywallstreet
meme across the planet. The Internet wants both the Wall Street tycoons and the popular insurrection
at its feet.
I leave it to the reader to decide if this passage implies that there's a certain logic to the Internet; my reading is that Johnson does believe this while also arguing that the exact manifestations of that logic would be different in each and every context—which, if one closely looks at the Shirky quote about "the logic of the Internet" that I mention in the review, is very much in line with Internet-centric thinking.
But it might be useful to step back and ask whether the very fact of bringing "the Internet" into our explanatory accounts is enhancing or impoverishing our understanding of the technological world that we inhabit. Are we gaining anything by lumping the algorithms used in high-frequency trading with a very different set of algorithms that Twitter uses to decide on its popular trends while using the sexy but highly elusive label of "the Internet" to do all that lumping? I don't think so—which is why I've been calling for a highly particularized approach to studying digital technologies—one that would treat each of them on their own terms without having to smuggle in some abstract, macro-level concept such as "the Internet" to smooth over the rough empirical and theoretical edges. (As I point out in one of the footnotes, game theorist Ian Bogost has a related but clunky term for this method—he calls it "media microecology.")
Second, I'm well aware that Johnson sees the same spirit of peer progressivism that he believes to be at work in the Internet also sweeping through modern-day Vietnam, through Britain of the mid-1700s and through half a dozen other non-Internet contexts. But this is exactly what I have criticized him for! This tendency to travel back in time and rummage through other contexts and eras in search of some imaginary proto-Internet—which can then be repackaged in a sexy ideology like "peer progressivism"—is precisely what I find so problematic about Johnson's work in particular and Internet-centrism in general. (As I put it in my review: "Once the elusive logic of the Internet has been located, it is not uncommon to see Internet-centrists move to deflate its actual novelty.") One cannot refute an accusation of Internet-centrism by proclaiming one's adherence to one of its key principles!
To understand the role that the notion of "the Internet" plays in Johnson's argument, consider a simple thought experiment. Remove the Internet and all its affiliated projects, from Kickstarter to Wikipedia, from a long list of examples of Johnson's peer progressivism—from childhood malnutrition in modern-day Vietnam to the hurdles of navigation in 18th-century Britain—and see how far you'll go in convincing your readers that these examples amount to an original political philosophy—so original, in fact, that it deserves the fancy label of "peer progressivism." If a book does come out of this, my bet is that you'll have to self-publish. Virtually every political idea that Johnson articulates in Future Perfect has been with us for decades—and it's precisely the vague, lazy and innovation-obsessed culture of our Internet debate that lets Johnson get away with inventing an original theory without doing his homework.
What allows Johnson to cut so many intellectual corners is his ability to capitalize on everyone's
excitement about the Internet. He does so by selecting those pieces from its rich and ambiguous
history that fit his overall peer progressive narrative while turning non-Internet history into a fishing
expedition that supplies intellectual gravitas to the carefully selected Internet anecdotes that define
what peer progressivism is all about. No wonder all his historical connections make sense: his
argument is designed that way! On these grounds, I suspect that Objection II doesn't stand either.
As for Johnson's peeve that I unfairly attack him for not engaging with political philosophy, look no further than peer progressivism. What, one might ask, is new about this political ideology? According to Johnson, at least two things. First, its adherents believe that there are some areas of expertise where the public—or the crowd—are more knowledgeable than the experts. Second, peer progressives—unlike all those pre-peer progressives—don't have to choose between the state and the market; the two can co-exist, tapping into networks of crowd expertise along the way.
My problem lies not so much with the thrust of these two propositions; both are quite sensible. Rather,
my problem is with the manner in which Johnson arrives at them, the fuzzy language that he deploys in
the process, the revolutionary novelty that he ascribes to his own insights, and the carelessness with
which he treats decades of serious thinking on this subject. Johnson, comfortably ensconced in his
Internet-centric bubble, seems to sincerely believe that no one had ever thought about ways to make
democratic politics more participatory before the onset of blogs, chats, and social networks. This, of
course, is nonsense. The most unfortunate consequence of Johnson's project might be that, in his half-baked efforts to make a case for peer progressivism, he might undermine public support for more
serious government reforms that are not as excited about the Internet but have developed
sophisticated theories about involving crowds and networks in both deliberative and participatory
processes.
So what does Johnson omit? Quite a few things, in fact. The idea that progressive politics can be combined with market-oriented and decentralized solutions was already in circulation—in the writings of Samuel Bowles and Herbert Gintis but also of Joshua Cohen and Charles Sabel—by the end of the last century (and, more recently, in the work of scholars like Archon Fung). Here are, for example, Cohen and Sabel—writing in 1997—on how governments can become more participatory and profit from more decentralized ways of knowledge aggregation: "Instead of seeking to solve problems, the agencies [would] see their task as reducing the costs of information faced by different problem-solvers: helping them to determine which deliberative bodies are similarly situated, what projects those bodies are pursuing, and what modifications of those projects might be needed under local conditions..." Here is Cohen in another essay written at the time: "The availability of alternative methods of problem-solving imposes on legislatures a greater burden in justifying their own direct efforts: They must explicitly make the case that the benefits of those efforts suffice to overcome the advantages of direct-deliberative solutions."
These two quotes—written a good decade before buzzwords like "open government" hijacked the public conversation—pack more reform wisdom than Johnson's entire book. But Johnson prefers to ignore virtually everything written on this subject—the faux novelty of "the Internet" licenses him to such frivolity. So he completely ignores Josiah Ober, who has made a fascinating use of Hayek, game theory and political philosophy to argue that the democracy of classical Athens was so effective because it deployed highly innovative and decentralized schemes of aggregating the knowledge of its citizens. Nor does he mention a new strand of scholarship on the political implications of cognitive diversity—best exemplified by the work of Jon Elster and Hélène Landemore—which has advanced sophisticated, context-sensitive arguments about ways to bring more diverse voices into democratic policy-making. All these efforts start from where reform proposals ought to start: they acknowledge the complexity of the problem that they are trying to tackle and only then do they work their way to their preferred solution. This is not the case with Johnson, who starts with his preferred solution—the Internet—and then searches for problems that it can help him solve. Yes, it's nice to see him quote the Federalist Papers but I hope he's at least aware that this is hardly the latest word on innovations in participatory governance. So Objection III doesn't stand either.
Finally, did I misread Johnson's treatment of 311? I don't think so. My claim is that the reason why Johnson is fascinated by 311 is that he views it through the Internet-centric lens of Wikipedia and other seminal Internet projects. That lens assumes an argument that goes something like this: encyclopedias used to be centralized and now they are decentralized. Likewise, tip-reporting systems used to be centralized and now they are decentralized, with hierarchies being replaced by peer networks. But is it actually true? Were tip-reporting systems ever centralized? Was there ever a competent Big Brother—perhaps in the form of some omniscient inspector, that proverbial expert hated by Internet-centrists—who was tasked with tracking all of New York's problems and who now, thankfully, has been replaced by the crowds? Perhaps there was such an expert a long time ago, but involving crowds in the process of reporting incidents and crimes has a very long history that is not very relevant to the 311 project. The 311 project was not about replacing centralized experts with decentralized crowds; it was about turning a slew of previously decentralized tip-reporting systems and hotlines—systems that already relied on crowds—into one highly centralized system.
Is this what Johnson means when he writes that 311 is "not a purely decentralized system" and that a "top-down element may be inevitable"? No, it isn't: what he means here is that while under 311, the inputs—the tips—might still be coming from decentralized sources, it still takes a centralized system—some city agency—to deal with the reported problem. But it would be silly to think otherwise—not unless we expect New Yorkers to bypass various city agencies and fix problems on their own. Johnson completely misses what's novel about the story he is discussing—that the 311 hotline works because it centralized many different hotlines under one roof—and focuses on that part of the story which fits his Internet-centric view of peer progressivism—namely, that 311 works because many people report tips to it in much the same way that many people edit Wikipedia. But to believe this is to miss the fact that many people were already reporting tips to New York's various hotline systems even before 311! Thus, I don't think that Objection IV should be allowed to stand either.
Now, there's one point I must concede to Johnson. I fully agree with him when he writes that "the point I tried to make explicit in Future Perfect is one that I've been implicitly making for more than a decade now." This hasn't escaped my attention; the original version of my review ran at 8,000 words and contained a long section situating Future Perfect in Johnson's entire oeuvre—a section that had to be cut for space reasons. (Given that I managed to keep a reference to one of his little-known essays from 2005, it is a bit unfair to accuse me of not being familiar with his work.) But I do agree with the thrust of Johnson's remarks: he has, in fact, managed to write yet another book about the same subject—let's just call it "buzz" this time—invoking the notion of the "Internet" to justify the publication. In fact, a close analysis of the source material for Future Perfect reveals that it's based on many essays and blog posts that Johnson had penned before the idea of peer progressivism took hold of his imagination.
This is the same trick Johnson pulled with his turn to "innovation" in his previous book, Where Good Ideas Come From, and with neuroscience and sociobiology in his earlier works. Fortunately, we know how Johnson goes about deciding what specific intellectual form to give to this "buzz" in his future projects. Emphasizing the useful feedback that Johnson got from speaking to the clients booked through his agency, Bill Leigh, his speaking agent, recently told New York magazine that Johnson wanted to "take his book sales to the next level... Out of those conversations [with clients] came his decision to slant his material with a particular innovation feel to it." Where good ideas come from still remains a mystery; where lucrative ideas come from everybody knows. It's surprising that it has taken Johnson so long to discover one such lucrative idea in the Internet.
Full Text: Keen vs. Weinberger
This is the full text of a "Reply All" debate on Web 2.0 between authors Andrew Keen and David
Weinberger.
Mr. Keen begins: So what, exactly, is Web 2.0? It is the radical democratization of media which is
enabling anyone to publish anything on the Internet. Mainstream media's traditional audience has
become Web 2.0's empowered author. Web 2.0 transforms all of us -- from 90-year-old grandmothers to
eight-year-old third graders -- into digital writers, music artists, movie makers and journalists. Web 2.0 is
YouTube, the blogosphere, Wikipedia, MySpace or Facebook. Web 2.0 is YOU! (Time Magazine's Person of
the Year for 2006).
Is Web 2.0 a dream or a nightmare? Is it a remix of Disney's "Cinderella" or of Kafka's "Metamorphosis"?
Have we -- as empowered conversationalists in the global citizen media community -- woken up with the
golden slipper of our ugly sister (aka: mainstream media) on our dainty little foot? Or have we -- as
authors-formerly-known-as-the-audience -- woken up as giant cockroaches doomed to eternally stare at
our hideous selves in the mirror of Web 2.0?
Silicon Valley, of course, interprets Web 2.0 as Disney rather than Kafka. After all, as the sales and
marketing architects of this great democratization argue, what could be wrong with a radically flattened
media? Isn't it dreamy that we can all now publish ourselves, that we each possess digital versions of
Johannes Gutenberg's printing press, that we are now able to easily create, distribute and sell our
content on the Internet? This is personal liberation with an early 21st Century twist -- a mash-up of the
countercultural Sixties, the free market idealism of the Eighties, and the technological determinism and
consumer-centricity of the Nineties. The people have finally spoken. The media has become their
message and the people are self-broadcasting this message of emancipation on their 70 million blogs,
their hundreds of millions of YouTube videos, their MySpace pages and their Wikipedia entries.
Yes, the people have finally spoken. And spoken. And spoken.
Now they won't shut up. The problem is that YOU! have forgotten how to listen, how to read, how to
watch. Thus, the meteoric rise of Web 2.0's free citizen media is mirrored by the equally steep decline in
paid mainstream media and the mass redundancies amongst journalists, editors, recording engineers,
cameramen and talent agents. Newspapers and the music business are in structural crisis, Hollywood
and the publishing business aren't far behind. We've lost trust and interest in the objectivity of mainstream media because of our self-infatuation with the subjectivity of our own messages. It's what, in "Cult of the Amateur," I call digital narcissism. A flattened media is a personalized, chaotic media without the essential epistemological anchor of truth. The impartiality of the authoritative, accountable expert is replaced by the murkiness of the anonymous amateur. When everyone claims to be
an author, there can be no art, no reliable information, no audience.
Everything becomes miscellaneous. And miscellany is a euphemism for anarchy.
That's the dark side of the Web 2.0 story, more Kafka than Disney. While we are all busy embracing our
inner user-generated-content, the world -- real life rather than Second Life -- is passing us by. This is
infantilized self-stimulation rather than serious media for adults. Web 2.0's democratization of
information and entertainment is creating a generation of media illiterates. That's the nightmare. And
it's easy to see. Just go online and look at YouTube, the blogosphere, Wikipedia, MySpace or Facebook.
Mr. Weinberger responds: You're right. The Web is a problem. It has been from the beginning and it
always will be.
But your dichotomy is false. The Web isn't Cinderella facing Gregor "The Cockroach" Samsa in a
deathmatch. Despite Time -- which, as a pillar of the mainstream press is of course free of the
hyperbole so common on the Web -- the Web isn't even You. It's us. And that is the problem.
Your wildly unflattering picture of life on the Web could also be painted of life before the Web. People
chatter endlessly. They believe the most appalling things. They express prejudices that would peel the
paint off a park bench. They waste their time watching endless hours of TV, wear jerseys as if they were
members of the local sports team, are fooled by politicians who don't even lie convincingly, can't find
Mexico on a map, and don't believe humans once ran with the dinosaurs. So, Andrew, you join a long list
of those who predict the decline of civilization and pin the blame on the latest popular medium, except
this time it's not comic books, TV, or shock jock radio. It's the Web.
This time, of course, you might be right...especially since you and I seem to agree that the Web isn't yet
another medium. Something important and different is going on.
We also agree that the Web is a problem. The problem endemic to the Web even before anyone gave
the Web version numbers -- and the problem that leads to your issue with "cockroaches" -- is that
because anyone can contribute and because there are no centralized gatekeepers, there's too much
stuff and too many voices, most of which any one person has no interest in. But, the Web is also the
continuing struggle to deal with that problem. From the most basic tools of the early Internet, starting
with UseNet discussion threads, through Wikipedia, and sites that enable users to tag online resources,
the Web invents ways to pull together ideas and information, finding the connections and relationships
that keep the "miscellaneous" from staying that way.
But, why should we trust the way "monkeys" (as you refer to Web users in your book) connect the
pieces? We shouldn't trust them blindly. Open up The Britannica at random and you're far more likely to
find reliable knowledge than if you were to open up the Web at random. That's why we don't open up
the Web at random. Instead, we rely upon a wide range of trust mechanisms, appropriate to their
domain, to guide us. Amazon gives you ways of checking to see if a particular reviewer is trustworthy,
but the mechanisms are not particularly rigorous because not all that much is at stake when considering
the 6,001st review of a Harry Potter book. At eBay, where your money is at risk, the trust mechanisms
are more reliable. On a blog, the persistence of previous posts means you can read further to see if you
trust the blogger. More important, the recommendation of other bloggers you already trust is a good
indicator. At Wikipedia, the rather sophisticated governance processes help establish trust, as does the
complete transparency of the discussions behind the articles. On mailing lists, we learn over time who's
a blowhard and who's a source of knowledge even if we don't know what her real name is. These
examples are not exceptions. They are the rule and they have been from the beginning, because from
the beginning the Web has been about inventing ways to make its own massness -- its
miscellaneousness -- useful.
Compare that to the previous generation of media. The traditional media are not Cinderella to the Web's
cockroach, and not just because the traditional media have their own cockroaches. The Web is far
better understood as providing more of everything: More slander, more honor. More porn, more love.
More ideas, more distractions. More lies, more truth. More experts, more professionals. The Web is
abundance, while the old media are premised -- in their model of knowledge as well as in their
economics -- on scarcity.
Amateurs aren't driving out the pros, Andrew. The old media are available on line. If some falter, other
credentialed experts will emerge. But the criteria governing our choice of whom to listen to are
expanding from "Those are the only channels I get" and "I read it in a book" to "I've heard this person
respond intelligently when challenged," "People I respect recommend her," and even "A mob finds this
person amusing." This is the new media literacy, suited to the new abundance.
Will we choose wisely? Compared to what? We are never going to be a species of Solons, moved only by
higher thoughts and the finer emotions. But the history of the Web so far says that we are highly
motivated to come up with ways to make sense of a world richer and more interesting than the
constrained resources of the traditional media let on.
So, Andrew, a question for you. You bemoan the loss of "the essential epistemological anchor of truth"
and the "impartiality of the authoritative, accountable expert." It's easy to agree with that when it
comes to facts, the sort of stuff we consult almanacs for. But when it comes to the more important and
harder issues, where we want to understand our world -- science, politics, the arts -- are you quite as
comfortable with the notion that there are identifiable epistemological anchors? Or is your epistemology
in fact rooted in the scarcity that has silently shaped the traditional media?
Mr. Keen: I agree that the Web is us. It's a mirror rather than a medium. When we go online, we are watching ourselves. So the question is do we want to be looking at ourselves at our best (Cinderella) or our worst (the giant cockroach)? My point is that what appears to the Web 2.0 crowd to be a Disney production is actually a Kafka remix.
You are right that people have always chattered endlessly about the silliest things. But the self-publishing Internet is the greatest of great seductions. Web 2.0 tells us that we all have something interesting to say and that we should broadcast it to the world. As I argue in my book, Web 2.0 transforms us into monkeys. :-) That's the new abundancy, the long tail, if you like. Infinite primates with infinite messages on infinite channels. The only good news is that broadband is still pathetically slow. But what happens when fiber-to-the-home becomes a reality for all of us? What happens when the monkeys have the technology of the Gods at their paw tips? Media will be transformed into ubiquitous
chatter -- into an audio-video version of Twitter.
Yes, the web does represent an abundancy of everything -- "more porn, more love, more ideas, more
distractions." This is fascinating to a philosopher of knowledge like yourself, but for mere mortals who
rely on their media to "understand the world", new digital abundance will lead to intellectual poverty.
The more we know, the less we will know. You see, to use this chaotic media efficaciously, we need to
invent our own taxonomies -- which isn't realistic for the majority of ordinary people (seeking to
understand the world) who think a "taxonomy" is something that drives us to the airport.
Meanwhile, traditional scarcity is getting scarcer. We've always had a scarcity of seriousness, of talent,
of the artist/intellectual able to monetize their expertise. As you know better than most, it's hard work
thinking up, writing, selling and then marketing a good book (both "Cluetrain" and "Everything is
Miscellaneous" are really good, albeit wrong). Traditional media has done a good job in discovering,
polishing and distributing that talent. But once everything is flattened, when books are digitalized, when
libraries become adjuncts of Google, when writers are transformed into sales and marketing reps of
their own brands, then what?
Which brings me back to your question about epistemological anchors. I value people like yourself
who are able to package up interesting arguments in a physical product which has monetary exchange
value. You do a great job helping your readers understand their world and they do a great job buying
your book, thereby allowing you to pay your mortgage and write more books thereby helping more
people understand their world. My concern is that this scarcity, the scarcity of the intellectual authority
able to help people understand the world, is indeed endangered -- particularly if the physical book goes
the way of the physical CD and the physical newspaper. So let me end with a question to your question.
Are you convinced that Web 2.0 is of benefit to traditional intellectuals like yourself? Are you confident
that, in a flattened media in which authors give away their books for free and collect their revenue on
the back-end, the David Weinberger 2.0 of the future will flourish (or even survive)?
Mr. Weinberger: When you claim the Web is a "Kafka remix," you can't mean that everything on the Web
is bad, if only because, well, you have your own blog, which is good but wrong. :) So, you must mean
that the preponderance of what's on the Web is bad, as bad as cockroaches. And, as I said, I suspect
you're right. That'd be a problem if we had no way of locating what's of value. But we do. Lots of ways.
More ways every day, as I described earlier.
Rather than re-treading that ground, let's talk about the nature of talent, and why you see monkeys and
cockroaches everywhere you look on the Web.
You and I agree that genuine talent is scarce and needs nurturing. But your picture of talent is formed
by the binary view the traditional media have forced on us. Because it's been so expensive to produce,
market and distribute cultural products (books, records, films), the lucky few who get published get
access to a mass audience, and the rest trail off the map. So, traditional distribution makes it look like
talent is a you-got-it-or-you-don't proposition -- you're an artist or you're a monkey. That doesn't reflect
the scarcity of talent so much as the scarcity of distribution, a result of the high cost of delivering the
first copy of a mass-produced item.
In fact, we have every reason to believe that talent is distributed in a far smoother (but still steep)
curve. My friend Joe is an amazing guitarist, but he's not the best guitarist around. Neither is my sister-in-law Maria the best singer in the world, but she's good and you would spend an enjoyable, and
sometimes moving, night listening to her in the local chorus. Talent is not either/or. Recording contracts
are.
With the Web, we can still listen to the world's greatest, but we can find others who touch us even
though their technique isn't perfect.
Note the "we can find." We couldn't if finding required creating our own taxonomies, as you say. Instead,
we rely on (1) taxonomies created by experts (newspapers that categorize their stories, stores that
categorize their offerings); (2) computer-assisted ways of locating what's relevant (search engines); and
(3) recommendations made by people we trust. We're getting better at all of these. It's where some of
the most surprising innovation is occurring.
I certainly do agree with your concerns about how we're going to pay talent. I don't have any answers or
predictions, but I suspect every institution whose value rides on the scarcity of information or the
difficulty of distributing it will face this issue eventually. And those are some institutions we both care a
lot about. There are whole classes of professionals who may find themselves without work. That's a
frightening prospect. (On the other hand, delivering this value on the Web is a business opportunity, so
it would be premature to declare defeat.)
We will lose some talent. We'll gain some that otherwise would have been left behind by the binary
selection process in the real world. Of those, a few will be world-class. Many will make the world only
somewhat better. And some will be screeching, violin-playing monkeys whose efforts we will flee from.
But that raises one other myth that I think runs under your comments. You say "the intellectual
authority able to help people understand the world is indeed endangered." Then you ask if I'm
convinced that the Web benefits intellectuals. Yes, I am. And that's because, while some talent is indeed
solitary, many types of talent prosper in connection with others. That is especially true for the
development of ideas. Knowledge is generally not a game for one. It is and always has been a
collaborative process. And it is a process, not as settled, sure, and knowable by authorities as it would
be comforting to believe. So forget my homey examples of Joe and Maria. Consider how much more we
know about the world because we have bloggers everywhere. They may not be journalists, but they are
sources, and sometimes they are witnesses in the best sense. We know and understand more because
of these voices than we did when we had to rely on a single professional reporting live at 7.
I was an academic a long time ago, Andrew, but I haven't forgotten how isolated I felt in the
philosophical community before the Web. Ideas were scarce back then because space, time and the
limitations of paper made it hard to hear what others were saying and well nigh impossible to talk with
them about it. Today I am in contact with people who come up with ideas I'd never have encountered,
who are sources of wide expertise, who squirrel away in public on tiny topics, who spew a long tail of
speculations with occasional insights that are worth the wait, who take me apart because my logic is
wrong or my biases are showing or my grammar has gone screwy, who support my good ideas and just
let the bad ones pass. Without any doubt, I am in the richest, most stimulating, most fruitful swirl of
thought, knowledge, ideas and feeling ever in my life...far more productive than when I was consigned
to talking only with professionals and credentialed experts. This is fundamental to my experience of the
Web, just as monkeys and cockroaches are to yours.
Andrew, maybe you just ought to find some better blogs to read. :)
Now a question: For academics, scientists and serious intellectuals, do you think the Web is nothing but
a disaster? In fact, since businesses learned long ago that knowledge is social, do you seriously
maintain that the work of business -- I'm not here thinking of ecommerce -- can only be degraded by
being done on the Web?
(As for "David Weinberger 2.0," I appreciate your confidence, but I'm still in beta.)
Mr. Keen: I can be as cocky a cockroach as anyone, thus my blog has gigantic insect footprints all over
it. :-)
I agree wholeheartedly with your comments about the online academic community. Any medium which
brings experts and professional authorities together is healthy. I am thrilled that you've discovered such
a rich intellectual community online. If this is Web 2.0, then I love Web 2.0. I'm a Cluetrainer when it
comes to serious people conversing fruitfully on the Internet. The problem, however, with Web 2.0 is
that most of the conversation seems to be taking place anonymously, conducted -- in a manner of
speaking -- by people who are more interested in vulgar insult than respectful intellectual intercourse.
The comments sections of most major websites are littered with this trash. As is the blogosphere. So,
yes, the Internet is great for experts to discover one another and conduct responsible conversation. It's
the monkey chorus on the democratized web that bothers me.
The issue of talent is the heart of the matter. How do we traditionally constitute/nurture/sell talent and
how is Web 2.0 altering this? My biggest concern with Web 2.0 is the critique of mainstream media that,
implicitly or otherwise, drives its agenda. It's the idea that mainstream media is a racket run by
gatekeepers protecting the interests of a small, privileged group of people. Thus, by flattening media,
by doing away with the gatekeepers, Web 2.0 is righting cultural injustice and offering people like your
friends Joe and Maria an opportunity to monetize their talent. But the problem is that gatekeepers -- the
agents, editors, recording engineers -- these are the very engineers of talent. Web 2.0's
disintermediated media unstitches the ecosystem that has historically nurtured talent. Web 2.0
misunderstands and romanticizes talent. It's not about the individual -- it's about the media ecosystem.
Writers are only as good as their agents and editors. Movie directors are only as good as their studios
and producers.
These professional intermediaries are the arbiters of good taste and critical judgment. If we flatten
media and allow it to be determined exclusively by the market, then your friends Joe and Maria have even
less chance of being rewarded for their talent. Not only will they be expected to produce high quality
music, but -- in the Web 2.0 long tail economy -- they'll be responsible for the distribution of their
content. No, if Joe and Maria want to be professional musicians paid for their work, they need a label to
make an either/or call about their talent. That's the binary logic that informs any market decision -- from
music to any other consumer product. Either they can produce music which has commercial value or
they can't. If they can't, they should keep their day jobs. If they can produce commercially viable music,
Joe and Maria need the management of professionals trained in the development of musical talent.
I respect your attempt to escape from the either/or realities of market economics. But I'm afraid this is
the binary logic of life. The culture business is ugly. It rewards talent and punishes those who don't have
it. The democratization of talent is a contradiction in terms. Even part-time cockroaches like me know
that.
printing press. That is, the gate keeping goes from dictating what we can read to telling us what we
ought to read.
Now let me address the other sort of nurturing to which you refer: the sort that money brings. Money is
important.
Allow me to switch to PowerPoint mode, for brevity:
- You overstate the rosiness of the current situation for artists, scholars and other creators. Very few make a living through the traditional media.
- Lots of creative people are making money on the Web, including in traditional, edited, gate-kept media.
- It's way too early to declare that artists will not be financially supported on the Web. We are at the beginning of a painful transition. We're not yet done inventing.
- It may well be that the Web results in fewer mega-stars. But it may also become an important addition to the real "business model" of most artists and creators, providing more listeners who will not only download their creations but perhaps come to their performances. The Web is actually additive for most creators.
We will also have more terrible "artworks." So what? We should ignore them just as we skip over most
channels on TV. Except we're far more sophisticated in how we travel the Web than we are when using
the sequential clicking of a TV remote. On the Web we'll continue to invent ways to find what matters to
us.
Of course we will, because "mattering" is the real driver of the Web.
Mr. Keen: So I did what you suggested. I took a look at the New York Times best-seller list. The top six
non-fiction hardback books for the week of June 10 were:
1. "The Assault on Reason" by Al Gore
2. "The Reagan Diaries" by Ronald Reagan
3. "Einstein" by Walter Isaacson
4. "God is not Great" by Christopher Hitchens
5. "Presidential Courage" by Michael Beschloss
6. "A Long Way Gone" by Ishmael Beah
None of these books seem to be "engineered" hits. Gore as #1 and Reagan as #2 collectively disprove
the right/left wing critique of big media as a right/left wing racket. A strong marketing and sales effort
on behalf of a 700-page, $32 biography of Albert Einstein seems to me like a noble achievement on the
part of big media to bring science to the people. Equally noble is their commitment to Beah's book, a
searing narrative about the contemporary African tragedy and the power of personal redemption. As a
wannabe Hitchens myself, I'm a big fan of an anti-populist polemic which goes against the beliefs of the
majority of God-fearing Americans. Gore/Reagan/Isaacson/Hitchens/Beschloss/Beah are all talented
authors who have written original and important books that require the marketing and sales muscle of
mainstream media to be broadly distributed. And even if these hits are "engineered" by big media -- so
what? Indeed, I applaud the engineering of books about critically important subjects in politics, history
and theology. I want my kids reading the awful truth about life in Africa. I want them to get mugged by
Hitchens on the question of God's (non)existence. I want them to attempt to digest a 675-page
biography about Einstein.
You say that mainstream media's only goal is "moving units" which "pander to the market." But surely
those supposedly nefarious fellows who run our publishing houses could come up with an easier way of
moving units than 675-page biographies of a German physicist or a 308-page civics lecture by Al Gore?
No. The truth is that the editors in charge of America's publishing industry value high quality books. And
the reading public obviously values these texts too. The wisdom of the literate crowd is reflected in the
New York Times list. Book readers are smarter than you think. We the audience don't want to read crap.
Then I went to Technorati to look at the six most popular blogs for the same week. This, to borrow your
language, is what "matters" in the world of Web 2.0:
1. Engadget
2. Boing Boing: A Directory of Wonderful Things
3. TechCrunch
4. Gizmodo
5. The Huffington Post
6. Lifehacker, the Productivity and Software Guide
David, you say we have "lots of different tastes," but it seems like the hits on the blogosphere are much less
intellectually diverse than the hits on the New York Times book list. Engadget and Gizmodo are blogs
about new technology gear -- iPods, BlackBerries, iPhones etc. TechCrunch and Lifehacker are geeky
technological blogs for technology geeks. The Huffington Post is, I admit, a valuable read -- although it
seems to me to be becoming more like a traditionally authoritative newspaper than an unedited blog.
Meanwhile, Boing Boing is a surreal and supremely inane compendium of miscellaneous knowledge -- listing stories about kidney donor hoaxes, a pedagogical tract on "How to Kiss" and a game-theory
piece entitled "an economic analysis of leaving the toilet seat down."
I respect your faith in the miscellany of knowledge, but I worry that it's you, in fact, who have sipped the
Kool-Aid. I want my kids reading Reagan and Gore rather than how-to articles about kissing. I fear that
the overall consequence of the democratized blogosphere is akin to leaving the toilet seat down. But
this isn't a game and it isn't theory. Sites like Boing Boing are flushing away valuable culture. Rather
than a directory of wonderful things, Web 2.0 is a miasma of trivia and irrelevance. It doesn't matter.
Mr. Weinberger concludes: Actually, I'd suggested you take a look at the Top 40 songs. Of course you're
within your rights to cite the New York Times best-sellers list instead, but that's indicative of the
problem with your method. Are you seriously maintaining that pop culture off line is represented by six
good books on the New York Times hardcover non-fiction list? Why do you find it so awkward to
acknowledge the obvious point that the gatekeepers of commercial publishing and production -- the
producers of TV shows, magazines, pop music, movies, books -- are usually driven not by high cultural
standards, but by the need to reach a broad audience? Do I need to remind you that "The Secret" is
likely ultimately to outsell all six of those worthy books combined?
We could argue over the value of the six top blogs versus their analogues in the traditional media. I
could point out that those blogs are not the work of amateurs, but are profitable businesses run by
experts. I could even rise to your Boing Boing bait, as if that site needed my defense against your
selective reading. But, all that would miss the real point: The Web is not mass culture, so we can't just
look at the most popular sites to see what's going on. Most of the action is in the long tail of users, sites
with just a handful of links going to them. So, pointing to the "short head" of highly popular sites not
only tells us little, it views the Web through a distorting lens, as if sites were read-only publications
rather than part of a web of conversations.
Andrew, the mud you throw obscures the issues you raise. Porn sites, silly posts, monkeys, cockroaches,
toilet seats. This rhetoric isn't helpful. In fact, in your attempt to be controversial, you're playing into the
hands of political and economic forces that would like the Internet to be nothing more than an extension
of the mass media. If your book succeeds on the best-seller lists, but contributes to the Web becoming
as safe, narrow, controlled and professional as the mainstream media, I believe you would be almost as
unhappy as I would. It's a shame because we need to be taking seriously the issues you raise. But to
talk about them, we need to get past the notion that the Web is all dreck all the time and that it is
nothing but a great "seducer" of taste.
For example, you're right that we're in the middle of a disruption of the professional media "ecosystem,"
as you aptly call it. Some of our professional media are faltering before we have built their online
replacements. It's frightening, especially if you're delighted with the existing mass media. But, the
transition is hardly over. If these institutions have value, then providing that value on line is an
opportunity that may well be addressed by the market (have faith, Andrew!) or by the new economics of
cooperative social production expounded in Yochai Benkler's seminal "The Wealth of Networks" (which is
available, of course, in its entirety for free online). Further, these newly fashioned mechanisms for
delivering old-fashioned value will have their own advantages, as well as the weaknesses you note.
Wikipedia, if nothing else, is more complete and current than printed encyclopedias -- and we can quote
it at length without getting sued. iTunes enables some worthy musicians to find their own small
audiences. Open access scientific journals have made far more research (including peer reviewed
papers) available to scientists than ever before -- a good example of what I think of as the power of
making information miscellaneous. In fact, amateurs and professionals are getting "miscellanized" so
that their influence is proportional not to their status but to the value they contribute...and our
understanding of the professionals is being enhanced by their revealing more of their amateur, personal
side in their blogs.
Most of all, a serious discussion of amateurism has to be able to admit that it may have some benefits.
For example:
(1) Some amateurs are uncredentialed experts from whom we can learn.
(2) Amateurs often bring points of view to the table that the orthodoxy has missed, sometimes even
challenging the authority of institutions whose belief systems have been corrupted by power.
(3) Professional and expert ideas are often refined by being brought into conversation with amateurs.
(4) There can be value in amateur work despite its lack of professionalism: A local blogger's description
of a news story happening around her may lack grammar but provide facts and feelings that add to -- or
reveal -- the truth.
(5) The rise of amateurism creates a new ecology in which personal relationships can add value to the
experience: That a sister-in-law is singing in the local chorus may make the performance thoroughly
enjoyable, and that I've gotten to know a blogger through her blog makes her posts more meaningful to
me.
(6) Collections of amateurs can do things that professionals cannot. Jay Rosen, for example, has
amateur citizens out gathering distributed data beyond the scope of any professional news
organization.
(7) Amateur work helps us get over the alienation built into the mainstream media. The mainstream is
theirs. The Web is ours.
(8) That amateur work is refreshingly human -- flawed and fallible -- can inspire us, and not just seduce
us into braying like chimps.
Yes, Andrew, we are amateurs on the Web, although there's plenty of room for professionals as well. But
we are not replicating the mainstream media. We're building something new. We're doing it together. Its
fundamental elements are not bricks of content but the mortar of links, and links are connections of
meaning and involvement. We're creating an infrastructure of meaning, miscellaneous but dripping with
potential for finding and understanding what matters to us. We're building this for one another. We're
doing it by and large for free, for the love of it, and for the joy of creating with others. That makes us
amateurs. And that's also what makes the Web our culture's hope.
True Enough
By Steven Johnson
Farhad,
Before I go into debate mode here, I wanted to start by saying how much I've enjoyed reading True
Enough. You have literally dozens of stories in the book that you've told wonderfully, but you've also
managed to connect them to illuminating research in psychology and sociology: the whole history of the
"Swift Boat" campaign, the 2004 Ohio election-fraud meme, 9/11 conspiracy theorists. It's an
entertaining and important mix of media theory, cultural criticism, and science journalism.
Of course, one of the central themes of True Enough is that we live in excessively partisan times, and in
that spirit, I'm now going to shift gears and explain why your argument is hopelessly wrong.
I'm kidding, but I think we do disagree on a couple of key points. I find myself agreeing thoroughly with
your assessment of the forces at work in each of your anecdotes. What I have trouble with is the global
conclusions you draw. You describe your thesis near the beginning: "The limitless choice we now enjoy
over the information we get about our world has loosened our grip on what is -- and isn't -- true." At the
end you phrase it this way: "The particular way in which information now moves through society -- on
currents of loosely linked online groups and niche media outfits, pushed along by experts and journalists
of dubious character, and bolstered by documents that are no longer considered proof of reality --
amplifies deception."
Now, it seems to me that there are two ways to set about determining whether this interpretation is, in
fact, true. The first is the media-theory approach, which is to analyze the "particular way that
information now moves," thanks to the Web and other modern media forms, and to try to gauge
whether there is indeed something structural to these new forms that amplifies deception. The other
approach is to look at the problem from a sociological point of view: Is there a general increase in
falsehood, or blindly partisan interpretations of the world, that we can see around us, compared with
what we saw 20 or 50 years ago?
Let me try my hand quickly at both, and perhaps we can get into more detail in the next round. In terms
of the flow of information, there is no question that the Internet has made it vastly easier to share
complete fabrications -- delusional theories, libelous accusations, Photoshopped fantasies -- with other
human beings. (Just think about the spam!) That we agree on. But I think it is equally true that the rise
of the Internet has made it vastly easier to share useful, factual information with other human beings.
Because the media landscape is so much more interconnected -- thanks largely to the innovation of
hypertext (and to Google) -- it has also never been easier to fact-check a given piece of information. We
had plenty of urban myths during my childhood in the 1970s and early 1980s, but we didn't have
Snopes.com to debunk them.
Saying that the Web amplifies deception is, to me, a bit like saying that New York is more dangerous
than Baltimore because it has more murders. Yes, in absolute numbers, there are more untruths on the
Web than we had in the heyday of print or mass media, but there are also more truths out there. We've
seen that big, decentralized systems like open-source software and Wikipedia aren't perfect, but over
time they do trend toward more accuracy and stability. I think that will increasingly be the case as more
and more of our news migrates to the Web.
That's why I think it's important to note that many of your key examples are dependent on old-style,
top-down media distribution. You talk about the American public's continuing belief in a connection
between 9/11 and Saddam Hussein; the Swift Boat Veteran ads that distorted the truth of Kerry's record;
Lou Dobbs ranting on CNN. These are all distortions that speak to the power of the old mass-media
model or the even older political model of the executive branch. (I think it's telling that you only spent a
page or two on the successful fact-checking of the forged CBS draft-dodging memos.) As you say in the
book, the Swift Boat meme didn't take off until the group started running television ads. Americans
don't connect Saddam to 9/11 because of distributed online niche groups; they make that connection
because the vice president of the United States repeatedly went on television to keep the connection
alive. That's as old-school as it gets.
This leads to the sociological question. One way to think about it is to look at conspiracy theories, which
play a prominent role in True Enough. If your premise were right, the new media landscape would have
made our culture more amenable to these theories than ever. I don't exactly know how to go about
proving this, but I think there's a very strong argument that the country is significantly less
conspiratorially minded than it was in the late 1960s and 1970s. Think of the litany from that period:
JFK, Castro, faked moon landings, "Paul Is Dead," Roswell. (For what it's worth, the conspiracy page at
Wikipedia is dominated by these outdated theories, but perhaps that itself is a conspiracy.) Yes, we have
the 9/11 "truth movement" theories, but we also have a number of dogs that didn't bark. Think about
the anthrax attacks of 2001 -- a major act of terrorism against prominent people that has not been
solved, and yet there are almost no well-known crank theories about that in circulation. If that had
happened in the 1970s, Oliver Stone would be making a movie about it right about now.
And then there is the premise that we live in increasingly partisan political times, where our worldviews
have diverged so much that we can't agree on basic truths. This is, of course, conventional wisdom --
people make offhand references to our partisan political culture all the time -- but I think it is a bizarre
form of political amnesia. Think back again to the 1950s, '60s, and '70s. Yes, we have Fox News, but we
no longer have lynch mobs. We no longer have people getting fire-hosed by the authorities because
they want to ride in the front of the bus or war protesters killed on their campuses. We no longer have
radical political groups with significant followings arguing for violent revolution; we haven't had a
politically motivated assassination attempt in decades. We have broad public consensus on the role of
women and minorities in government and the workforce. We no longer have major political figures
denouncing the Communists lurking among us. Yes, the right hates the Clintons, and the left hates
Bush, but the left hated Nixon just as much, and some of them hated LBJ for good measure. There is far
more consensus in the country's political values than there was 30 years ago. We agree on much more
than we did back then.
I admit that one thing has changed: Our political culture looks more partisan on television than it did
back then, in the sense that Bill O'Reilly is more partisan in style and substance than, say, Cronkite was.
But as you know better than anyone, Farhad, just because it's on television, it doesn't mean it's true.
Steven
*
Steven,
As a longtime fan of your work, I'm tickled by your kind words, and I'm honored to have the chance to
debate True Enough with you here. That said, let's get ready to rumble.
You do a nice job summarizing my ideas, but I want to point out that True Enough isn't about the
Internet alone. I've got to say this in order to squash the charge -- which you don't make, but which I fear
others might -- that I'm some kind of Luddite. I've spent a career writing for the Web, I get most of my
news online, and I consider Boing Boing a national treasure.
So my beef is not with the Internet, exactly, but with the entire modern infosphere: blogs, cable news,
talk radio, YouTube, podcasts, on-demand book publishing, etc. In the era of mass media -- the 60-year
span, give or take, between the advent of television and the advent of the Web -- we got all our news
from a handful of major sources. Now we get our news from all sides, from amateurs and professionals
who span Chris Anderson's famously long tail of niche outlets.
I think you and I agree that this shift will profoundly alter society, and that some changes will be for the
good and some will be for the bad. Where we disagree is the bottom line: When it comes to that grand,
gauzy thing called Truth, I think niche media will do more harm than good, at least for the foreseeable
future.
You're right, the Internet is a boon for fact-checking. But how useful is fact-checking if the facts and the
lies shuttle about in entirely separate cultural universes?
In the book, I spend much time on Leon Festinger's theory of "selective exposure" -- the idea that in
order to avoid cognitive dissonance, we all seek out information that jibes with our beliefs and avoid
information that conflicts with them. While the theory is controversial, there's ample evidence that
selective exposure plays a role in how people parse the news today. Survey data show that folks on the
right and folks on the left now swim in very different news pools. Right-wing blogs link to righty sites,
while left-wing blogs link to lefty sites. For example, see Lada Adamic and Natalie Glance's study (PDF)
or consider this experiment by Shanto Iyengar and Richard Morin: If you slap the Fox News logo on a
generic news story -- even a travel or sports story, something completely nonpolitical -- Republicans'
interest in it shoots up, while Democrats' interest plummets. People now choose their news -- and thus
their facts -- through a partisan lens.
Yes, the Swift Boat campaign exploded when it hit TV, but I wouldn't say it depended on "old-style, top-down media distribution." The TV we're talking about is cable news, especially Fox: the very definition of
a niche partisan outlet. (Fox's biggest show, The O'Reilly Factor, attracts about 4 million viewers a
night; that's big for cable, but it's not the mainstream.)
The Swift Boaters initially tried to go the old-media way. In May 2004, they held a press conference at
the National Press Club to announce that John Kerry had lied about his time in Vietnam. Reporters from
every old-media shop in town showed up, but most dismissed the group. Bereft, the vets went to the
Web and talk radio, where they found an audience that lapped up their claims. It was only by winning
some fame in these media that the vets garnered a few big donors and, eventually, interest from cable
TV. Broadcast news networks, the Associated Press, and national newspapers came to the story much
later on. And their role was salutaryonline, in print, and on TV, the old-media outlets fact-checked the
Swift Boaters very well, debunking most of their claims. But did the facts hurt the story? Not really.
On 9/11 and Saddam: Would Dick Cheney have been able to convince the nation of that connection
without a partisan press apparatus (Limbaugh, Drudge, O'Reilly, the Freepers) at his back? We can't
know, of course. I think it's telling, though, that a large percentage of Americans continued believing the
lie long after even Cheney and the rest of the administration disavowed it. To me, this suggests that the
story was propelled by forces far stronger than the vice president. It persisted, and persists, thanks to
niche partisan outlets and despite the facts of the matter being available to all online.
Of course, you're right that society has found a consensus on many of the most dogged issues of our
past. But True Enough doesn't argue that we are markedly more partisan today than we once were.
Rather, I'm saying that our partisanship is of a different character. The big historical controversies you
mention involved questions of political values: for example, what should be the proper role of women
and minorities in society? Disagreement over an issue like global warming, though, doesn't concern
values. It's a difference over facts: If you believe the science on global warming, you think we should do
something about it. But if you're among the 20 percent to 40 percent of Americans who subscribe to
different facts on the question, you don't. And on many big issues (the war, terrorism, several areas of
science, even the state of the economy), Americans today not only hold different opinions from one
another; they hold different facts.
As for conspiracy theories, I can assure you that an alleged governmental role in 9/11 isn't the only
thing keeping paranoid Americans up at night. Have you heard Robert F. Kennedy Jr.'s theory on
vaccines and autism (championed now by John McCain)? Or Kennedy's theories on the stolen election of
2004? What about the NAFTA superhighway? Or HIV denialists? Really, I could go on.
Farhad
*
Farhad,
To ensure that you and I don't end up swimming in different pools, let me try to spell out quickly where
we agree. First, the modern infosphere is dramatically more diverse in the number and range of
perspectives now available. (Taking your cue, I'm referring to the whole panoply here: the Web, cable,
talk radio, and so on.) Second, ordinary people have far more control over the perspectives they are
exposed to, thanks both to the diversity of media platforms and to the long tail of viewpoints they
support. Third, that infosphere is now far more densely interconnected: on the Web, of course, but also
on cable. (Bill O'Reilly is just a couple of clicks on the remote away from Keith Olbermann, after all.)
I agree completely that you can use these three developments to build an ideological cocoon for
yourself if you so choose. But you can also use them to expose yourself to an incredible range of ideas
and perspectives: to challenge your assumptions, fact-check arguments, understand where your
opponents are coming from, and stitch together your own informed worldview out of those multiple
realities. I realize that description sounds ridiculously high-minded. (Even the most urbane Web
polymath goes for a little partisan red meat every now and then.) But let's think of it, for our purposes,
as the caricature on the other side of the spectrum, the opposite of the dittohead who doesn't believe
anything unless he hears it straight from Rush's mouth.
What we're trying to figure out is which pole has a stronger magnetic force in this new world: the
dittohead or the polymath. In such a connected environment, truth should be able to spread more
quickly through the system, assuming people have an interest in truth. But if people are more driven by
selective exposure (finding online information that confirms what they already believe), then the
system will let them keep truth at bay, assuming their beliefs are untrue. (By the way, I loved the
sections of your book on the science of selective exposure.)
You invoke the 20 percent to 40 percent of Americans who don't believe the science of global warming
as evidence that the forces of selective exposure are stronger than those of truth-seeking. But the
percentage of Americans who have both heard of and believe in human-caused climate change has
been growing steadily for the last 15 years. Many more Americans now pursue green lifestyles: in their
choice of cars, in the products they buy, and in the food they eat. So there's no question the science is
making progress and winning converts at a steady rate. But just like the political struggles that
dominated the '50s and '60s (which were about facts as much as values, contrary to what you claim),
the conversion process takes time. It's frustrating that the change can't happen overnight, but no more
frustrating than it was listening to bigots invoking the pseudosciences of sexism or racism in the '60s.
I suppose the great, untestable question on global warming is this: If we could rewind the clock and
somehow build an international scientific consensus about global warming in, say, 1950, would the
American public have embraced the reality of the threat and the need for change more quickly? We'll
never know, of course. It would be interesting to compare the spread of information in the post-Silent
Spring era, to see whether the environmental science of that period reached a broad public consensus
faster than global-warming science has in recent years. If you, or any of Slate's readers, know of
studies along those lines, I'd love to hear about them.
But we do have one clear social experiment that we can look at on the dittohead-vs.-polymath question.
If you and I had been having this debate back in 1990, right as the new infosphere was coming into
being (talk radio ascendant, online communities starting to take shape), presumably your prediction
would have been that the forces of selective exposure in this new world would drive people into those
different pools of information, confirming and amplifying their existing beliefs, strengthening their
alliances to their initial tribe, and growing further away from those with different perspectives. My
prediction, on the other hand, would have been that the connective, diversifying properties of this new
world would express themselves in the opposite direction: people breaking free from the party lines and
creating more eclectic political worldviews, stitched together from the diverse experiences that they can
now encounter on the screen.
What actually happened during that period? Through all the swings back and forth between the two
parties, the single most pronounced trend since the early '90s is the steady rise of Americans who
consider themselves independent voters, unaligned with either party. (They have tripled in size during
that period, by some measures.) Yes, the new information paradigm has been a boon to people who
believe only what O'Reilly (or Michael Moore) has to say. But all those independents make me think that
the common ground, the space that connects the pools, has become an even more popular place to
be.
Steven
*
Steven,
For two people in a debate over cultural rifts, you and I sure are agreeing on a great deal. I suppose
that's one positive sign.
That's why I hate to turn this, now, into the most tedious sort of fight: one about interpreting voter
stats. There's voluminous poli-sci research on the recent rise of independent voters, and the picture isn't
as clear-cut as you say. Yes, the share of Americans who identify as independents has grown over the
past couple of decades. At the same time, though, the meaning of independence has shifted: Most
unaligned voters now exhibit strong, pseudo-permanent preferences, in surveys as well as in voting
behavior, for one party or another. The number of what you might call "pure" independents (voters
who pick candidates without regard to party and ideology) has been steadily declining.
And don't overlook all the other signs of growing political polarization. Americans who do identify with
parties are now much less willing than in the past to vote across party lines. In the 1970s, liberals
frequently voted (PDF) for Republicans and conservatives for Democrats. We don't see that sort of
behavior anymore. Congress has also grown steadily more partisan; party-line votes on all but the most
inconsequential of issues are now the norm. Just look at what's happened to John McCain in the last 10
years: he was against Bush's tax cuts before he was for them, which pretty much says it all, no?
Can we blame the new infosphere for this new partisanship? It certainly doesn't deserve all the blame.
Gerrymandering, lobbying, campaign-finance rules, 9/11, and Tom DeLay, among other things, have
also likely contributed to polarization. The rise of voters who call themselves "independent"
notwithstanding, we've seen few signs, since 1990, of people reaching for common ground.
I agree with you on global warming. Though a large number of Americans still dismiss the science, it
does look like facts about climate change are slowly washing over the culture. But let's not forget your
question: Would the public of the 1950s, the mass-media public, have accepted the facts sooner than
the public of the 2000s, the niche-media public? The question, as you say, is untestable.
But on Rachel Carson: Silent Spring was first serialized in The New Yorker in the summer of 1962, and it
came out as a book that September. It was a Book-of-the-Month Club title, and quickly hit the New York
Times best-seller list. In 1963, CBS Reports, a 60 Minutes-type show, broadcast an hourlong report on
Carson's thesis that the pesticide DDT was causing ecological damage. This was back when one-third of
the nation watched CBS; we're talking American Idol-type ratings.
The chemical industry mounted a huge counterattack in the media. Carson was called a "hysterical
woman," assailed as an alarmist, and accused of overlooking all the benefits of DDT. The charges didn't
stick. John F. Kennedy's science advisory panel looked into Silent Spring's thesis and supported its
claims. The industry largely backed down, and within a few years the government began to regulate
DDT. In 1972 (10 years after Silent Spring's publication, under a Republican administration), the
pesticide was banned for use in the United States.
Just 10 years! Can you imagine the fate that would await Silent Spring if it were serialized in The New
Yorker today? You can guess it would get some play: NPR and the big newspapers would go after the
story; sites like TreeHugger and Grist and maybe Slate and Salon would discuss it; perhaps the network
news would interview Carson; and maybe cable news would get to it, too.
But picture the fun Fox News and right-wing blogs would have with it. Carson had researched DDT's
effects on the environment for years, but the science was not airtight; there was, as in any emerging
field of study, legitimate disagreement among experts over the scope of the problem and the remedy.
Today, the right would surely distort that disagreement.
In True Enough, I describe the scourge of dubious "expertise" we now see in the media: people of
questionable credentials (sometimes with undisclosed financial interests) who are called on by TV
producers to discuss matters about which they've got no special knowledge. I'll hazard that such
experts would flood the zone to fight Carson today, just as they do on global warming. The
anti-environmentalists would produce pseudo-scientific research of their own showing how DDT harms only
terrorists, and in fact helps bald eagles live longer, happier lives. This stuff, then, would get passed
around the right, attaining a measure of respect and becoming a kind of parallel truth. How long till
Glenn Beck begins comparing Carson to Hitler?
All speculation, of course. But that seems to me a pretty good template for how objective facts are
churned out through the news these days. If Silent Spring were published today, would it lead to a ban
of DDT? Maybe. But not fast enough, I worry.
I'd been looking forward to this debate, Steven; it's been fun. I probably haven't changed your mind
about the Internet's role in society, and you haven't changed mine, but here's hoping that a civil chat
between rivals serves as a model for others online.
Farhad
http://www.brainyquote.com/quotes/authors/h/howard_rheingold.html
Openness and participation are antidotes to surveillance and control.
There is never going to be a substitute for face-to-face communication, but we have seen since
the alphabet, to the telephone and now the Internet, that whenever people find a new way to
communicate, they will flock to it.
You can't have an industrial revolution, you can't have democracies, you can't have populations
who can govern themselves until you have literacy. The printing press simply unlocked literacy.
Of course, with agriculture came the first big civilizations, the first cities built of mud and brick,
the first empires. And it was the administrators of these empires who began hiring people to keep
track of the wheat and sheep and wine that were owed and the taxes that were owed on them by
making marks; marks on clay in that time.
Democracy is not just voting for your leaders; it's really premised upon ordinary citizens
understanding the issues.
We are moving rapidly into a world in which the spying machinery is built into every object we
encounter.
Howard Rheingold
You can't assume any place you go is private because the means of surveillance are becoming so
affordable and so invisible.
The more material there is, the more need there is for filters. You don't need a printing press
anymore, but you do need people who know how to cultivate sources, double-check information
and put the brand of legitimacy on it.
Mobile communications and pervasive computing technologies, together with social contracts
that were never possible before, are already beginning to change the way people meet, mate,
work, war, buy, sell, govern and create.
Humans are humans because we are able to communicate with each other and to organize to do
things together that we can't do individually.
It's not a global village, but we're in a highly interconnected globe.
You can't pick up the telephone and say, 'Connect me with someone else who has a kid with
leukemia.'
Markets are as old as the crossroads. But capitalism, as we know it, is only a few hundred years
old, enabled by cooperative arrangements and technologies, such as the joint-stock ownership
company, shared liability insurance, double-entry bookkeeping.
As for Twitter, I've found that you have to learn how to make it add value rather than subtract
hours from one's day. Certainly, it affords narcissism and distraction.
I want to be very careful about judging and how much to generalize about the use of media
being pathological. For some people, it's a temptation and a pathology; for others, it's a lifeline.
Mindfulness means being aware of how you're deploying your attention and making decisions
about it, and not letting the tweet or the buzzing of your BlackBerry call your attention.
Although we leave traces of our personal lives with our credit cards and Web browsers today,
tomorrow's mobile devices will broadcast clouds of personal data to invisible monitors all around
us.
Attention is the fundamental instrument we use for learning, thinking, communicating, deciding,
yet neither parents nor schools spend any time helping young people learn how to manage
information streams and control the ways they deploy their attention.
People's social networks do not consist only of people they see face to face. In fact, social
networks have been extending because of artificial media since the printing press and the
telephone.
Personal computers were created by some teenagers in garages because the wisdom of the
computer industry was that people didn't want these little toys on their desk.
Some critics argue that a tsunami of hogwash has already rendered the Web useless. I disagree.
We are indeed inundated by online noise pollution, but the problem is soluble.
The Amish communities of Pennsylvania, despite the retro image of horse-drawn buggies and
straw hats, have long been engaged in a productive debate about the consequences of
technology.
Until fairly recently, Amish teachers would reprimand the student who raised his or her hand as
being too individualistic. Calling attention to oneself, or being 'prideful,' is one of the cardinal
Amish worries. Having your name or photo in the papers, even talking to the press, is almost a
sin.
A forecasting game is a kind of simulation, a kind of scenario, a kind of teleconference, a kind of
artifact from the future - and more - that enlists the participants as 'first-person forecasters.'
The idea that your spouse or your parents don't know where you are at all times may be part of
the past. Is that good or bad? Will that make for better marriages or worse marriages? I don't
know.
Kids automatically teach each other how to use technology, but they're not going to teach each
other about the history of democracy, or the importance of taking their voices into the public
sphere to create social change.
Any disease support community is a place of deep bonds and empathy, and there are thousands
if not tens of thousands of them.
It's kind of astonishing that people trust strangers because of words they write on computer
screens.
Some digital natives are extraordinarily savvy.
What person doesn't search online about their disease after they are diagnosed?
Unlike with the majority of library books, when you enter a term into a search engine there is no
guarantee that what you will find is authoritative, accurate or even vaguely true.
Any virtual community that works, works because people put in some time.
By the time you get a job, you know how to behave in a meeting or how to write a simple memo.
When designers replaced the command line interface with the graphical user interface, billions of
people who are not programmers could make use of computer technology.
A phone tree isn't an ancient form of political organizing, but you have to call every person.
Advertising in the past has been predicated on a mass market and a captive audience.
I've spent my life alone in a room with a typewriter.
Like most modern Americans, I assume individuality is not only a fundamental value, but a goal
in life, an art form.
Soon the digital divide will not be between the haves and the have-nots. It will be between the
know-hows and the non-know-hows.
Technology is my native tongue. I'm online six hours a day.
A lot of people use collaborative technologies badly, then abandon them. They aren't 'plug-and-play.' The invisible part is the social skill necessary to use them.
One thing we didn't know in 1996 is that it's very, very difficult, if not impossible, to sustain a
culture with online advertising.
We like technology because we don't have to talk to anybody.
People look at me, and I dress a little unusually and they think, 'Oh you must be from California.'
Of course, people in California think, 'Oh you must be from Mars,' so, you know, your next-door neighbour is not necessarily the person that you are going to make a connection with.
The Orwellian vision was about state-sponsored surveillance. Now it's not just the state, it's your
nosy neighbor, your ex-spouse and people who want to spam you.
There is an elementary level of trust that is necessary for community. You have to be able to
trust that your neighbors aren't going to look into your mailbox.
Entire books are being written about the distractions of social media. I don't believe media
compel distraction, but I think it's clear that they afford it.
It used to be that if your automobile broke, the teenager down the street with the wrench could
fix it. Now you have to have sophisticated equipment that can deal with microchips. We're
entering a world in which the complexity of the devices and the system of interconnecting
devices is beyond our capability to easily understand.
Open source production has shown us that world-class software, like Linux and Mozilla, can be
created with neither the bureaucratic structure of the firm nor the incentives of the marketplace
as we've known them.
Technologies evolve in the strangest ways. Computers were created to calculate ballistics
equations, and now we use them to create amusing illusions. Creating amusing illusions is a big
business if you play it right.
Technology no longer consists just of hardware or software or even services, but of communities.
Increasingly, community is a part of technology, a driver of technology, and an emergent effect
of technology.
We already know that spam is a huge downside of online life. If we're going to be spammed on
our telephones wherever we go, I think we're going to reject these devices.
We think of them as mobile phones, but the personal computer, mobile phone and the Internet
are merging into some new medium like the personal computer in the 1980s or the Internet in
the 1990s.
The two parts of technology that lower the threshold for activism are the Internet
and the mobile phone. Anyone who has a cause can now mobilize very quickly.
Flash mobbing may be a fad that passes away, or it may be an indicator of things to come.
Humans have lived for much, much longer than the approximately 10,000 years of settled
agricultural civilization.
I think e-mail petitions are an illusion. It gives people the illusion that they're participating in
some meaningful political action.
Inexpensive phones and pay-as-you-go services are already spreading mobile phone technology
to many parts of the world that never had a wired infrastructure.
The Chinese government tried to keep a lid on the SARS crisis, but there were 160 million text
messages in three days sent by Chinese citizens. These are early indications that it's going to be
difficult for people who used to have control over the news to maintain that level of control.
Craigslist is about authenticity. Craig has paid his dues, and people respect him.
I'm somebody who seems to stumble into things 10 or 20 years before the rest of the world
does.
It's more important to me to get an e-mail that says, 'I saw your page and it changed my life,'
than how many hits the page got.
On the Internet, it is assumed people are in business to sell out, not to build something they can
pass along to their grandkids.
People move from place to place and job to job, but they no longer need to lose touch.
The AP has only so many reporters, and CNN only has so many cameras, but we've got a world
full of people with digital cameras and Internet access.
There's a direct relationship between how difficult it is to send a message and how strongly it is
received.
Whenever a technology enables people to organize at a pace that wasn't before possible, new
kinds of politics emerge.
Journalists don't have audiences - they have publics who can respond instantly and globally,
positively or negatively, with a great deal more power than the traditional letters to the editor
could wield.
People's behavior will change with technology. I know very few young people who can't type out
a text message on their phone with one thumb, for instance.
Schoolchildren are not taught how to distinguish accurate information from inaccurate
information online - surely there are ways to design web-browsers to help with this task and
ways to teach young people how to use the powerful online tools available to them.
Communicating online goes back to the Defense Department's Arpanet which started in 1969.
There was something called Usenet that started in 1980, and this gave people an opportunity to
talk about things that people on these more official networks didn't talk about.
I think there are two aspects to smart environments. One is information embedded in places and
things. The other is location awareness, so that devices we carry around know where we are.
When you combine those two, you get a lot of possibilities.
Young voters are crucial. The trend over recent years has been for them to drift away. So
anything that gets young voters interested in the electoral process not only has an immediate
effect, but has an effect for years and years.
ho'oponopono (Hawaiian):
Solving a problem by talking it out. After an invocation of the gods, the aggrieved parties sit
down and discuss the issue until it is set right (pono means righteousness).
ngaobera:
A slight inflammation of the throat produced by screaming too much.
fisselig (German):
Flustered to the point of incompetence. A temporary state of inexactitude and sloppiness that is
elicited by another person's nagging.
Shirky has described the pre-Web era of publishing as working on a "filter, then publish"
paradigm, subjecting text to editors and publishers before making it available; now, Shirky
observes, the paradigm has flipped to "publish, then filter." In that sense, Shirky adds, there
is no such thing as information overload, only filter failure.
One of the most astonishing statistics McGonigal cites in her book is the estimate that gamers
have spent 5.93 million years playing World of Warcraft.
When today's infants grow up, they will be amazed that their parents' generation could ever get
lost, not be in touch with everyone they know at all times, and get answers out of the air for any
question.
If the rule of thumb for attention literacy is to pay attention to your intention, then the heuristic
for crap detection is to make skepticism your default.
Sunday Story #6
Due Sun Oct 25 (before midnight)
Prompt
vimeo-rheingold quotes
Rheingold
books that stand as insightful counterweights to early techno-utopian works like Esther Dyson's
Release 2.0 and Nicholas Negroponte's Being Digital, which took an almost Pollyannaish view
of the Web and its capacity to empower users.
8
THESE NEW BOOKS share a concern with how digital media are reshaping our political and social
landscape, molding art and entertainment, even affecting the methodology of scholarship and
research. They examine the consequences of the fragmentation of data that the Web produces,
as news articles, novels and record albums are broken down into bits and bytes; the growing
emphasis on immediacy and real-time responses; the rising tide of data and information that
permeates our lives; and the emphasis that blogging and partisan political Web sites place on
subjectivity.
9
At the same time it's clear that technology and the mechanisms of the Web have been
accelerating certain trends already percolating through our culture, including the blurring of
news and entertainment, a growing polarization in national politics, a deconstructionist view of
literature (which emphasizes a critic's or reader's interpretation of a text, rather than the text's
actual content), the prominence of postmodernism in the form of mash-ups and bricolage, and a
growing cultural relativism that has been advanced on the left by multiculturalists and radical
feminists, who argue that history is an adjunct of identity politics, and on the right by creationists
and climate-change denialists, who suggest that science is an instrument of leftist ideologues.
10
Even some outspoken cheerleaders of Internet technology have begun to grapple with some of
its more vexing side effects. Steven Johnson, a founder of the online magazine Feed, for
instance, wrote in an article in The Wall Street Journal last year that with the development of
software for Amazon.com's Kindle and other e-book readers that enable users to jump back and
forth from other applications, he fears one of the great joys of book reading (the total
immersion in another world, or in the world of the author's ideas) will be compromised. He
continued, "We all may read books the way we increasingly read magazines and newspapers: a
little bit here, a little bit there."
11
Mr. Johnson added that the book's migration to the digital realm will turn the solitary act of
reading, a direct exchange between author and reader, into something far more social, and
suggested that as online chatter about books grows, the unity of the book will disperse into a
multitude of pages and paragraphs vying for Google's attention.
12
WORRYING ABOUT the public's growing attention deficit disorder and susceptibility to
information overload, of course, is hardly new. It's been 25 years since Neil Postman warned in
Amusing Ourselves to Death that trivia and the entertainment values promoted by television
were creating distractions that threatened to subvert public discourse, and more than a decade
since writers like James Gleick (Faster) and David Shenk (Data Smog) described a culture
addicted to speed, drowning in data and overstimulated to the point where only sensationalism
and willful hyperbole grab people's attention.
13
Now, with the ubiquity of instant messaging and e-mail, the growing popularity of Twitter and
YouTube, and even newer services like Google Wave, velocity and efficiency have become even
more important. Although new media can help build big TV audiences for events like the Super
Bowl, it also tends to make people treat those events as fodder for digital chatter. More people
are impatient to cut to the chase, and they're increasingly willing to take the imperfect but
immediately available product over a more thoughtfully analyzed, carefully created one. Instead
of reading an entire news article, watching an entire television show or listening to an entire
speech, growing numbers of people are happy to jump to the summary, the video clip, the sound
bite: never mind if context and nuance are lost in the process; never mind if it's our emotions,
more than our sense of reason, that are engaged; never mind if statements haven't been
properly vetted and sourced.
14
People tweet and text one another during plays and movies, forming judgments before seeing
the arc of the entire work. Recent books by respected authors like Malcolm Gladwell (Outliers)
and Jane Jacobs (Dark Age Ahead) rely far more heavily on cherry-picked anecdotes, instead
of broader-based evidence and assiduous analysis, than the books that first established their
reputations. And online research enables scholars to power-search for nuggets of information
that might support their theses, saving them the time of wading through stacks of material that
might prove marginal but that might have also prompted them to reconsider or refine their
original thinking.
15
"Reading in the traditional open-ended sense is not what most of us, whatever our age and level
of computer literacy, do on the Internet," the scholar Susan Jacoby writes in The Age of
American Unreason. "What we are engaged in, like birds of prey looking for their next meal,
is a process of swooping around with an eye out for certain kinds of information."
16
TODAY'S TECHNOLOGY has bestowed miracles of access and convenience upon millions of
people, and it's also proven to be a vital new means of communication. Twitter has been used by
Iranian dissidents; text messaging and social networking Web sites have been used to help
coordinate humanitarian aid in Haiti; YouTube has been used by professors to teach math and
chemistry. But technology is also turning us into a global water-cooler culture, with millions of
people sending each other (via e-mail, text messages, tweets, YouTube links) gossip, rumors and
the sort of amusing-entertaining-weird anecdotes and photographs they might once have shared
with pals over a coffee break. And in an effort to collect valuable eyeballs and clicks, media
outlets are increasingly pandering to that impulse, often at the expense of hard news. "I have
the theory that news is now driven not by editors who know anything," the comedian and
commentator Bill Maher recently observed. "I think it's driven by people who are slacking off at
work and surfing the Internet." He added, "It's like a country run by America's Funniest Home
Videos."
17
MSNBC's new program The Dylan Ratigan Show, which usually focuses on business and
politics, has a "While you were working ..." segment in which viewers are asked to send in "some
of the strangest and most outrageous stories you've found on the Internet," and the "most e-mailed"
lists on popular news sites tend to feature articles about pets, food, celebrities and
self-improvement. For instance, at one point on March 11, the top story on The Washington Post's
Web site was "Maintaining a Sex Life," while the top story on Reddit.com, a user-generated news
link site, was "(Funny) Sexy Girl? Do Not Trust Profile Pictures!"
18
Given the constant bombardment of trivia and data that we're subjected to in today's
mediascape, it's little wonder that noisy, Manichean arguments tend to get more attention than
subtle, policy-heavy ones; that funny, snarky or willfully provocative assertions often gain more
traction than earnest, measured ones; and that loud, entertaining or controversial personalities
tend to get the most ink and airtime. This is why Sarah Palin's every move and pronouncement is
followed by television news, talk-show hosts and pundits of every political persuasion. This is
why Glenn Beck and Rush Limbaugh on the right and Michael Moore on the left are repeatedly
quoted by followers and opponents. This is why a gathering of 600 people for last month's
national Tea Party convention in Nashville received a disproportionate amount of coverage from
both the mainstream news media and the blogosphere.
19
Digital insiders like Mr. Lanier and Paulina Borsook, the author of the book Cyberselfish, have
noted the easily distracted, adolescent quality of much of cyberculture. Ms. Borsook describes
tech-heads as having "an angry adolescent view of all authority as the Pig Parent," writing that
even older digerati want to think of themselves as having "an Inner Bike Messenger."
20
For his part Mr. Lanier says that because the Internet is a kind of pseudoworld without the
qualities of a physical world, it encourages the Peter Pan fantasy of being an entitled child
forever, without the responsibilities of adulthood. While this has the virtues of playfulness and
optimism, he argues, it can also devolve into a Lord of the Flies-like nastiness, with lots of
bullying, voracious irritability and selfishness, qualities enhanced, he says, by the anonymity,
peer pressure and mob rule that thrive online.
21
Digital culture, he writes in You Are Not a Gadget, "is comprised of wave after wave of
juvenilia," with rooms of M.I.T. Ph.D. engineers seeking not cancer cures or sources of safe
drinking water for the underdeveloped world but "schemes to send little digital pictures of teddy
bears and dragons between adult members of social networks."
22
AT THE SAME time the Internet's nurturing of niche cultures is contributing to what Cass Sunstein
calls "cyberbalkanization." Individuals can design feeds and alerts from their favorite Web sites
so that they get only the news they want, and with more and more opinion sites and specialized
sites, it becomes easier and easier, as Mr. Sunstein observes in his 2009 book Going to
Extremes, for people to avoid general-interest newspapers and magazines and to make
choices that reflect their own predispositions.
23
Serendipitous encounters with persons and ideas different from one's own, he writes, tend to
grow less frequent, while views that would ordinarily dissolve, simply because of an absence of
social support, can be found in large numbers on the Internet, even if they are understood to be
exotic, indefensible or bizarre in most communities. He adds that studies of group polarization
show that when like-minded people deliberate, they tend to reinforce one another and become
more extreme in their views.
24
One result of this nicheification of the world is that consensus and common ground grow ever
smaller, civic discourse gets a lot less civil, and pluralism (what Isaiah Berlin called the idea
that there are many different ends that men may seek and still be fully rational, fully men,
capable of understanding each other and sympathizing and deriving light from worlds,
outlooks, very remote from our own) comes to feel increasingly elusive.
25
As Mr. Manjoo observes in True Enough: Learning to Live in a Post-Fact Society (2008), the way
in which information now moves through society, on currents of loosely linked online groups
and niche media outlets, pushed along by experts and journalists of dubious character and
bolstered by documents that are no longer considered proof of reality, has fostered deception
and propaganda and also created what he calls a Rashomon world where "the very idea of
objective reality is under attack." Politicians and voters on the right and left not only hold
different opinions from one another, but often can't even agree over a shared set of facts, as
clashes over climate change, health care and the Iraq war attest.
26
THE WEB'S amplification of subjectivity applies to culture as well as politics, fueling a
phenomenon that has been gaining hold over America for several decades, with pundits
squeezing out reporters on cable news, with authors writing biographies animated by personal
and ideological agendas, with tell-all memoirs, talk-show confessionals, self-dramatizing blogs
and carefully tended Facebook and MySpace pages becoming almost de rigueur.
27
As for the textual analysis known as deconstruction, which became fashionable in American
academia in the 1980s, it enshrined individual readers' subjective responses to a text over the
text itself, thereby suggesting that the very idea of the author (and any sense of original intent)
was dead. In doing so, deconstruction uncannily presaged arguments advanced by digerati like
Kevin Kelly, who in a 2006 article for The New York Times Magazine looked forward to the day
when books would cease to be individual works but would be scanned and digitized into one
great, big continuous text that could be unraveled into single pages or reduced further, into
snippets of a page, which readers, like David Shields presumably, could then appropriate
and remix, like bits of music, into new works of their own.
28
As John Updike pointed out, Mr. Kelly's vision would in effect mean the end of authorship,
hobbling writers' ability to earn a living from their published works, while at the same time
removing a sense of both recognition and accountability from their creations. In a Web world
where copies of books (and articles and music and other content) are cheap or free, Mr. Kelly has
suggested, authors and artists could make money by selling "performances, access to the
creator, personalization, add-on information" and other aspects of their work that cannot be
copied. But while such schemes may work for artists who happen to be entrepreneurial,
self-promoting and charismatic, Mr. Lanier says he fears that for the vast majority of journalists,
musicians, artists and filmmakers it simply means career oblivion.
29
Other challenges to the autonomy of the artist come from new interactive media and from
constant polls on television and the Web, which ask audience members for feedback on
television shows, movies and music; and from fan bulletin boards, which often function like giant
focus groups. Should the writers of television shows listen to fan feedback or a network's
audience testing? Does the desire to get an article on a "most e-mailed" list consciously or
unconsciously influence how reporters and editors go about their assignments and approaches to
stories? Are literary-minded novelists increasingly taking into account what their readers want or
expect?
30
As reading shifts "from the private page to the communal screen," Mr. Carr writes in The
Shallows, authors will increasingly tailor their work to a milieu that the writer Caleb Crain
describes as "groupiness," where people read mainly "for the sake of a feeling of belonging"
rather than for personal enlightenment or amusement. As social concerns override literary ones,
writers seem fated to eschew virtuosity and experimentation in favor of a bland but immediately
accessible style.
31
For that matter, the very value of artistic imagination and originality, along with the primacy of
the individual, is increasingly being questioned in our copy-mad, postmodern digital world. In a
recent Newsweek cover story pegged to the Tiger Woods scandal, Neal Gabler, the author of
Life: The Movie: How Entertainment Conquered Reality, absurdly asserts that celebrity is "the
great new art form of the 21st century."
32
Celebrity, Mr. Gabler argues, competes with and often supersedes more traditional
entertainments like movies, books, plays and TV shows, and it performs, he says, in its own
roundabout way, many of the functions those old media performed in their heyday: among them,
distracting us, sensitizing us to the human condition, and creating a fund of common experience
around which we can form a national community.
33
However impossible it is to think of Jon & Kate Plus Eight or Jersey Shore as art, reality shows
have taken over wide swaths of television, and memoir writing has become a rite of passage for
actors, politicians and celebrities of every ilk. At the same time our cultural landscape is
brimming over with parodies, homages, variations, pastiches, collages and other forms of
appropriation art, much of it facilitated by new technology that makes remixing and
cutting-and-pasting easy enough for a child.
34
It's no longer just hip-hop sampling that rules in youth culture, but also jukebox musicals like
Jersey Boys and Rock of Ages, and works like The League of Extraordinary Gentlemen,
which features characters drawn from a host of classic adventures. Fan fiction and fan edits are
thriving, as are karaoke contests, video games like Guitar Hero, and YouTube mash-ups of music
and movie, television and visual images. These recyclings and post-modern experiments run the
gamut in quality. Some, like Zachary Mason's Lost Books of the Odyssey, are beautifully
rendered works of art in their own right. Some, like J. J. Abrams's 2009 Star Trek film and Amy
Heckerling's 1995 Clueless (based on Jane Austen's Emma), are inspired reinventions of
classics. Some fan-made videos are extremely clever and inventive, and some, like a 3-D video
version of Picasso's Guernica posted on YouTube, are intriguing works that
raise important and unsettling questions about art and appropriation.
All too often, however, the recycling and cut-and-paste esthetic has resulted in tired imitations;
cheap, lazy re-dos; or works of appropriation designed to generate controversy, like Mr.
Shields's Reality Hunger. Lady Gaga is third-generation Madonna; many jukebox or tribute
musicals, like Good Vibrations and The Times They Are A-Changin', do an embarrassing
disservice to the artists who inspired them; and the rote remaking of old television shows into
films (from The Brady Bunch to Charlie's Angels to Get Smart), not to mention the
recycling of video games into movies (like Tomb Raider and Resident Evil), often seems as
pointless as it is now predictable.
35
Writing in a 2005 Wired article that new technologies redefine us, William Gibson hailed
audience participation and argued that "an endless, recombinant, and fundamentally social
process generates countless hours of creative product." Indeed, he said, "audience is as antique
a term as record, the one archaically passive, the other archaically physical. The record, not the
remix, is the anomaly today. The remix is the very nature of the digital."
36
To Mr. Lanier, however, the prevalence of mash-ups in today's culture is a sign of nostalgic
malaise. "Online culture," he writes, "is dominated by trivial mash-ups of the culture that
existed before the onset of mash-ups, and by fandom responding to the dwindling outposts of
centralized mass media. It is a culture of reaction without action."
37
He points out that much of the chatter online today is actually driven by fan responses to
expression that was originally created within the sphere of old media, which many digerati
mock as old-fashioned and passé, and which is now being destroyed by the Internet. "Comments
about TV shows, major movies, commercial music releases and video games must be responsible
for almost as much bit traffic as porn," Mr. Lanier writes. "There is certainly nothing wrong with
that, but since the Web is killing the old media, we face a situation in which culture is effectively
eating its own seed stock."
Sunday Story #7
Due Sun Nov 1 (before midnight)
Prompt
cheating
through school and university, passing off the words and work of others as their own in papers,
projects, and theses.
In a June 2005 study by the Center for Academic Integrity (CAI) of 50,000 students, many
respondents didn't think that Internet plagiarism was a serious issue. This disturbing finding
gets at a grave problem in terms of Internet and culture: The digital revolution is creating a
generation of cut-and-paste burglars who view all content on the Internet as common property.
successful society. Without trust in public and business institutions outside the family, an
economy stops developing after a certain point, he says.
What rules should we seek to enforce (and why)?
Lawrence Lessig, Remix: Making Art and Commerce Thrive in the Hybrid Economy (2008)
I stand by my position that piracy is wrong. However, I ask whether we want to make this
mistake again. Should the next ten years be another decade-long war against our kids? Should
we spend more of our resources hiring lawyers and technologists to build better weapons to
wage war against those practicing RW culture? Have we learned nothing from the total failure of
policy that has defined copyright policy over the last decade? I believe this for the same reason
the content industry is so keen to enforce copyright. As the RIAA's Mitch Bainwol and Cary
Sherman explained: "It's not just the loss of current sales that concerns us, but the habits formed
in college that will stay with these students for a lifetime. This is a teachable moment, an
opportunity to educate these particular students about the importance of music in their lives and
the importance of respecting and valuing music as intellectual property." Exactly right. So what
rules should we work so hard to enforce? The argument in favor of reforming our legal attitude
toward remixing is a thousand times stronger than in the context of p2p file sharing: this is a
matter of literacy. We should encourage the spread of literacy here, at least so long as it doesn't
stifle other forms of creativity. There is no plausible argument that allowing kids to remix music is
going to hurt anyone. Until someone can show that it will, the law should simply get out of the
way. We need to decriminalize creativity before we further criminalize a generation of our kids.
"The Remix is the Very Nature of the Digital"
Andrew Keen, The Cult of the Amateur (2008)
Silicon Valley visionary and cyberpunk author William Gibson wrote in the July 2005 issue of
Wired magazine: "Our culture no longer bothers to use words like appropriation or borrowing.
Today's audience isn't listening at all, it's participating. Indeed, audience is as antique a term as
record, the one archaically passive, the other archaically physical. The record, not the remix, is
the anomaly today. The remix is the very nature of the digital."
A dangerous self-negating prophecy is at work here: The more Western policymakers talk up the
threat that bloggers pose to authoritarian regimes, the more likely those regimes are to limit the
maneuver space where those bloggers operate. In some countries, such politicization may be for
the better, as blogging would take on a more explicit political role, with bloggers enjoying the
status of journalists or human rights defenders. But in many other countries such politicization
may only stifle the nascent Internet movement, which could have been far more successful if
its advocacy were limited to pursuing social rather than political ends.
(Authoritarian governments like China have started to) aggressively engage with new media
themselves, paying bloggers to spread propaganda and troll social networking sites looking for
new information on those in the opposition.
Activism or "Slacktivism"?
Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom (2011)
A good way to tell whether a digital campaign is serious or slacktivist is to look at what it
aspires to achieve. Campaigns of the latter kind seem to be premised on the assumption that,
given enough tweets, the world's problems are solvable; in the language of computer geeks,
"given enough eyeballs, all bugs are shallow." This is precisely what propels so many of these
campaigns into gathering signatures, adding new members to their Facebook pages, and asking
everyone involved to link to the campaign on blogs and Twitter. This works for some issues,
especially those that are geography bound (e.g., performing group community service at a local
soup kitchen, campaigning against a resolution passed by a local town council, etc.). But with
global issues, whether it's genocide in Darfur or climate change, there are diminishing returns to
awareness raising. At some point one must convert awareness into action, and this is where tools
like Twitter and Facebook prove much less successful. Not surprisingly, many of these Facebook
groups find themselves in a "waiting for Godot" predicament: Now that the group has been
formed, what comes next? In most cases, what comes next is spam. Most of these campaigns
(remember, many of them, like the anti-FARC campaign in Colombia, pop up spontaneously
without any carefully planned course of action) do not have clear goals beyond awareness
raising. Thus, what they settle on is fund-raising. But it's quite obvious that not every problem
can be solved with an injection of funds. If the plight of sub-Saharan Africa or even Afghanistan is
anything to judge by, money can only breed more trouble unless endemic political and social
problems are sorted out first.
"The Ringleman Effect"
Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom (2011)
In 1882 Ringelmann conducted an experiment in which he asked four individuals to pull on a
rope, first alone and then in groups, and then compared the results. The rope was attached to a
strain gauge, so it was possible to measure the pull force. To Ringelmann's surprise, the total pull
force of the group pull was consistently less than the sum of the individual pull forces, even as
he adjusted the number of individuals participating in the experiment. What has become known
as the Ringelmann Effect is thus the opposite of synergy.
In the century that has passed since Ringelmann's original experiment, plenty of other tests
have proven that we usually put much less effort into a task when other people are also doing it
alongside us. In fact, calling it the Ringelmann Effect is only adding theoretical luster to what we
already knew intuitively. We don't have to make fools of ourselves by singing "Happy Birthday"
at the top of our lungs; others will do the job just fine. Nor do we always clap our hands as loudly
as we could, much to the disappointment of performers. The logic is clear: When everyone in
the group performs the same mundane tasks, it's impossible to evaluate individual contributions,
and people inevitably begin slacking off (it's for this reason that another name for this
phenomenon is "social loafing"). Increasing the number of participants diminishes the relative
social pressure on each and often results in inferior outputs.
Hearing of Ringelmann's experiments today, one can't help noticing the parallels to much of
today's Facebook activism. With the power of Facebook and Twitter at their fingertips, many
activists may choose to tackle a problem collectively when tackling it individually would make
more strategic sense. But just as the madness of crowds gives rise to the wisdom of crowds
only under certain, carefully delineated social conditions, social loafing gives way to synergy only
once certain conditions are met.
China and its "firewall"
Eli Pariser, The Filter Bubble: What the Internet is Hiding From You (2011)
In practice, the firewall is not so hard to circumvent. Corporate virtual private networks (Internet
connections encrypted to prevent espionage) operate with impunity. Proxies and firewall
workarounds like Tor connect in-country Chinese dissidents with even the most hard-core
antigovernment Web sites. But to focus exclusively on the firewall's inability to perfectly block
information is to miss the point. China's objective isn't so much to blot out unsavory information
as to alter the physics around it: to create friction for problematic information and to route
public attention to progovernment forums. While it can't block all of the people from all of the
news all of the time, it doesn't need to.
"What the government cares about," Atlantic journalist James Fallows writes, "is making the
quest for information just enough of a nuisance that people generally won't bother." The
strategy, says Xiao Qiang of the University of California at Berkeley, is about "social control,
human surveillance, peer pressure, and self-censorship." Because there's no official list of
blocked keywords or forbidden topics published by the government, businesses and individuals
censor themselves to avoid a visit from the police. Which sites are available changes daily. And
while some bloggers suggest that the system's unreliability is a result of faulty technology (the
Internet will override attempts to control it!), for the government this is a feature, not a bug.
James Mulvenon, the head of the Center for Intelligence Research and Analysis, puts it this way:
"There's a randomness to their enforcement, and that creates a sense that they're looking at
everything."
Sunday Story #8
Due Sun Nov 8 (before midnight)
Prompt
Google has continually fallen back on its guidelines to remove only content that breaks laws or
its terms of service, at the request of users, governments or courts, which is why blocking the
anti-Islam video was exceptional. Some wonder what precedent this might set, especially for
government authorities keen to stanch expression they think will inflame their populace.
Free Speech in the Age of YouTube
NYT News Analysis (September 22, 2012)
COMPANIES are usually accountable to no one but their shareholders.
Internet companies are a different breed. Because they traffic in speech, rather than, say, corn
syrup or warplanes, they make decisions every day about what kind of expression is allowed
where. And occasionally they come under pressure to explain how they decide, on whose laws
and values they rely, and how they distinguish between toxic speech that must be taken down
and that which can remain.
The storm over an incendiary anti-Islamic video posted on YouTube has stirred fresh debate on
these issues. Google, which owns YouTube, restricted access to the video in Egypt and Libya,
after the killing of a United States ambassador and three other Americans. Then, it pulled the
plug on the video in five other countries, where the content violated local laws.
Some countries blocked YouTube altogether, though that didn't stop the bloodshed: in Pakistan,
where elections are to be scheduled soon, riots on Friday left a death toll of 19.
The company pointed to its internal edicts to explain why it rebuffed calls to take down the video
altogether. It did not meet its definition of hate speech, YouTube said, and so it allowed the video
to stay up on the Web. It didn't say very much more.
That explanation revealed not only the challenges that confront companies like Google but also
how opaque they can be in explaining their verdicts on what can be said on their platforms.
Google, Facebook and Twitter receive hundreds of thousands of complaints about content every
week.
"We are just awakening to the need for some scrutiny or oversight or public attention to the
decisions of the most powerful private speech controllers," said Tim Wu, a Columbia University
law professor who briefly advised the Obama administration on consumer protection regulations
online.
Google was right, Mr. Wu believes, to selectively restrict access to the crude anti-Islam video in
light of the extraordinary violence that broke out. But he said the public deserved to know more
about how private firms made those decisions in the first place, every day, all over the world.
After all, he added, they are setting case law, just as courts do in sovereign countries.
Mr. Wu offered some unsolicited advice: Why not set up an oversight board of regional experts or
serious YouTube users from around the world to make the especially tough decisions? Google has
not responded to his proposal, which he outlined in a blog post for The New Republic.
Certainly, the scale and nature of YouTube make this a daunting task. Any analysis requires
combing through over a billion videos and overlaying that against the laws and mores of
different countries. It's unclear whether expert panels would allow for unpopular minority opinion
anyway. The company said in a statement on Friday that, like newspapers, it, too, made
nuanced judgments about content: "It's why user-generated content sites typically have clear
community guidelines and remove videos or posts that break them."
Behind closed doors, Internet companies routinely make tough decisions on content.
Apple and Google earlier this year yanked a mobile application produced by Hezbollah. In 2010,
YouTube removed links to speeches by an American-born cleric, Anwar al-Awlaki, in which he
advocated terrorist violence; at the time, the company said it proscribed posts that could incite
violent acts.
Susan Benesch, who studies hate speech that incites violence, said it would be wise to have
many more explanations like this, not least to promote debate. "They certainly don't have to,"
said Ms. Benesch, director of the Dangerous Speech Project at the World Policy Institute. "But we
can encourage them to because of the enormous power they have."
The companies point out that they obey the laws of every country in which they do business.
And their employees and algorithms vet content that may violate their user guidelines, which are
public.
YouTube prohibits hate speech, which it defines as that which attacks or demeans a group
based on its race, religion and so on; Facebook's hate speech ban likewise covers content that
attacks people on the basis of identity. Google and Facebook prohibit hate speech; Twitter does
not explicitly ban it. And anyway, legal scholars say, it is exceedingly difficult to devise a
universal definition of hate speech.
Shibley Telhami, a political scientist at the University of Maryland, said he hoped the violence
over the video would encourage a nuanced conversation about how to balance free expression
with other values, like public safety. "It's really about at what point does speech become action;
that's a boundary that becomes difficult to draw, and it's a slippery slope," Mr. Telhami said.
He cautioned that some countries, like Russia, which threatened to block YouTube altogether,
would be thrilled to have any excuse to squelch speech. "Does Russia really care about this
film?" Mr. Telhami asked.
International law does not protect speech that is designed to cause violence. Several people
have been convicted in international courts for incitement to genocide in Rwanda.
One of the challenges of the digital age, as the YouTube case shows, is that speech articulated in
one part of the world can spark mayhem in another. Can the companies that run those speech
platforms predict what words and images might set off carnage elsewhere? Whoever builds that
algorithm may end up saving lives.
weapon. But it has a weakness: It depends on you. You're the detonator. If you don't cooperate,
the bomb doesn't explode.
This isn't just a Muslim problem, though that's been the pattern lately. On YouTube, you can find
videos insulting every religion on the planet: Jews, Christians, Hindus, Catholics, Mormons,
Buddhists, and more. Some clips are ironic. Others are simply disgusting. Many were posted to
bait one group into fighting another. The baiters are indiscriminate.
The promoter of the Mohammed movie founded a group that also protests at Mormon temples.
The hatred and bloodshed will go on until you stop taking the bait. Mockery of your prophet on a
computer with an Internet address somewhere in the world can no longer be your master. Nor
can the puppet clerics who tell you to respond with violence.
Lay down your stones and your anger. Go home and pray. God is too great to be troubled by the
insults of fools. Follow Him.
Links
The Filter Bubble
The Filter Bubble: How the personalized web is changing what we read and how we think (Eli Pariser, 2011)
The trouble with the echo chamber online (NYT, 2011)
Your own facts (Evgeny Morozov review of "The Filter Bubble", NYT, 2011)
Maybe the web is not as polarized as we thought (Slate, 2012)
Five ways out of filter bubbles (Nieman Journalism Lab, 2012)
Facebook study disputes theory of political polarization among users (NYT, 2015)
Facebook published a big new study on the Filter Bubble: Here's what it says (Eli Pariser, Medium, 2015)
Fun facts from the new Facebook Filter Bubble study (Eli Pariser, Medium, 2015)
The fact that weak ties introduce us to novel information wouldn't matter if we only had a few
weak ties on Facebook. But it turns out that most of our relationships on Facebook are pretty
weak, according to Bakshy's study. Even if you consider the most lax definition of a strong tie,
someone from whom you've received a single message or comment, most people still have a lot
more weak ties than strong ones. And this means that, when considered in aggregate, our weak
ties, with their access to novel information, are the most influential people in our networks.
Even though we're more likely to share any one thing posted by a close friend, we have so many
more mere acquaintances posting stuff that our close friends are all but drowned out.
In this way, Bakshy's findings complicate the echo chamber theory. If most of the people we
encounter online are weak ties rather than close friends, and if they're all feeding us links that
we wouldn't have seen elsewhere, this suggests that Facebook (and the Web generally) isn't
simply confirming our view of the world. Social networks, even if they're dominated by
personalization algorithms like EdgeRank, could be breaking you out of your filter bubble rather
than reinforcing it.
Bakshy's work shares some features with previous communications studies on networks, and it
confirms some long-held ideas in sociology. (For instance, the idea that weak ties can be
important was first floated in a seminal 1973 study by Mark Granovetter.) It also confirms a few
other recent studies questioning the echo chamber, including the economists Matthew Gentzkow
and Jesse Shapiro's look at online news segregation.
But there are two reasons why Bakshy's research should be considered a landmark. First, the
study is experimental and not merely observational. Bakshy wasn't just watching how people
react to news shared by their friends on Facebook. Instead, he was able to actively game the
News Feed to create two different worlds in which some people get a certain piece of news and
other, statistically identical, people do not get that news. In this way, his study is like a clinical
trial: There's a treatment group that's subjected to a certain stimulus and a control group that is
not, and Bakshy calculated the differences between the two. This allows him to draw causal
relationships between seeing a link and acting on it: If you see a link and reshare it while some
other user does not see the link and does not share it, this means that the Facebook feed was
responsible for the sharing.
The other crucial thing about this study is that it is almost unthinkably enormous. At the time of
the experiment, there were 500 million active users on Facebook. Bakshy's experiment included
253 million of them and more than 75 million shared URLs, meaning that in total, the study
observed nearly 1.2 billion instances in which someone was or was not presented with a certain
link. This scale is unheard of in academic sociological studies, which usually involve hundreds or,
at most, thousands of people communicating in ways that are far less trackable.
At the same time, there's an obvious problem with Bakshy's study: It could only occur with the
express consent of Facebook, and in the end it produced a result that is clearly very positive for
the social network. The fact that Facebook's P.R. team contacted me about the study and allowed
me to interview Bakshy suggests the company is very pleased with the result. If Bakshy's
experiment had come to the opposite conclusion, that, say, the News Feed does seem to echo
our own ideas, I suspect they wouldn't be publicizing it at all. (Bakshy told me that he has a
good amount of freedom at the company to research whatever he wants to look into about the
social network, and that no one tells him what to investigate and what to leave alone. The study
is being submitted to peer-reviewed academic journals.)
Also, so as not to completely tank the ongoing sales of my brilliant book, I'd argue that Bakshy's
study doesn't indemnify the modern media against other charges that it's distorting our politics.
For one thing, while it shows that our weak ties give us access to stories that we wouldn't
otherwise have seen, it doesn't address whether those stories differ ideologically from our own
general worldview. If you're a liberal but you don't have time to follow political news very closely,
then your weak ties may just be showing you lefty blog links that you agree with, even though,
under Bakshy's study, those links would have qualified as novel information. (Bakshy's study
covered all links, not just links to news stories; he is currently working on a follow-up that is more
narrowly focused on political content.)
What's more, even if social networks aren't pushing us toward news that confirms our beliefs,
there's still the question of how we interpret that news. Even if we're all being exposed to a
diverse range of stories, we can still decide whose spin we want, and then we go to the Drudge
Report or the Huffington Post to get our own views confirmed.
Still, I have to say I'm gratified by Bakshy's study. The echo chamber is one of many ideas about
the Web that we've come to accept in the absence of any firm evidence. The troves of data that
companies like Facebook are now collecting will help add some empirical backing to our
understanding of how we behave online. If some long-held beliefs get overturned in the process,
then all the better.
Enclave extremism
Cass Sunstein, Republic.com 2.0 (2008)
Let us explore an experiment conducted in Colorado in 2005, designed to cast light on the
consequences of self-sorting. About 60 Americans were brought together and assembled into a
number of groups, each consisting of five or six people. Members of each group were asked to
deliberate on three of the most controversial issues of the day: Should states allow same-sex
couples to enter into civil unions? Should employers engage in affirmative action by giving a
preference to members of traditionally disadvantaged groups? Should the United States sign an
international treaty to combat global warming?
As the experiment was designed, the groups consisted of "liberal" and "conservative" enclaves:
the former from Boulder, the latter from Colorado Springs. It is widely known that Boulder
tends to be liberal, and Colorado Springs tends to be conservative. Participants were screened to
ensure that they generally conformed to those stereotypes. People were asked to state their
opinions anonymously both before and after 15 minutes of group discussion. What was the effect
of that discussion?
In almost every case, people held more-extreme positions after they spoke with like-minded
others. Discussion made civil unions more popular among liberals and less popular among
conservatives. Liberals favored an international treaty to control global warming before
discussion; they favored it far more strongly after discussion. Conservatives were neutral on that
treaty before discussion, but they strongly opposed it after discussion. Liberals, mildly favorable
toward affirmative action before discussion, became strongly favorable toward affirmative action
after discussion. Firmly negative about affirmative action before discussion, conservatives
became fiercely negative about affirmative action after discussion.
The creation of enclaves of like-minded people had a second effect: It made both liberal groups
and conservative groups significantly more homogeneous and thus squelched diversity. Before
people started to talk, many groups displayed a fair amount of internal disagreement on the
three issues. The disagreements were greatly reduced as a result of a mere 15-minute
discussion. In their anonymous statements, group members showed far more consensus after
discussion than before. The discussion greatly widened the rift between liberals and
conservatives on all three issues. The Internet makes it exceedingly easy for people to replicate
the Colorado experiment online, whether or not that is what they are trying to do. Those who
think that affirmative action is a good idea can, and often do, read reams of material that
support their view; they can, and often do, exclude any and all material that argues the other
way. Those who dislike carbon taxes can find arguments to that effect. Many liberals jump from
one liberal blog to another, and many conservatives restrict their reading to points of view that
they find congenial. In short, those who want to find support for what they already think, and to
insulate themselves from disturbing topics and contrary points of view, can do that far more
easily than they can if they skim through a decent newspaper.
A key consequence of this kind of self-sorting is what we might call enclave extremism. When
people end up in enclaves of like-minded people, they usually move toward a more extreme
point in the direction to which the group's members were originally inclined. Enclave extremism
is a special case of the broader phenomenon of group polarization, which extends well beyond
politics and occurs as groups adopt a more extreme version of whatever view is antecedently
favored by their members.
Why do enclaves produce polarization?
Why do enclaves, on the Internet and elsewhere, produce political polarization? The first
explanation emphasizes the role of information. Suppose that people who tend to oppose nuclear
power are exposed to the views of those who agree with them. It stands to reason that such
people will find a disproportionately large number of arguments against nuclear power and a
disproportionately small number of arguments in favor of nuclear power. If people are paying
attention to one another, the exchange of information should move people further in opposition
to nuclear power. This very process was specifically observed in the Colorado experiment, and in
our increasingly enclaved world, it is happening every minute of every day.
The second explanation, involving social comparison, begins with the reasonable
suggestion that people want to be perceived favorably by other group members. Once they hear
what others believe, they often adjust their positions in the direction of the dominant position.
Suppose, for example, that people in an Internet discussion group tend to be sharply opposed to
the idea of civil unions for same-sex couples, and that they also want to seem to be sharply
opposed to such unions. If they are speaking with people who are also sharply opposed to these
things, they are likely to shift in the direction of even sharper opposition as a result of learning
what others think.
The final explanation is the most subtle, and probably the most important. The starting point
here is that on many issues, most of us are really not sure what we think. Our lack of certainty
inclines us toward the middle. Outside of enclaves, moderation is the usual path. Now imagine
that people find themselves in enclaves in which they exclusively hear from others who think as
they do. As a result, their confidence typically grows, and they become more extreme in their
beliefs. Corroboration, in short, reduces tentativeness, and an increase in confidence produces
extremism. Enclave extremism is particularly likely to occur on the Internet because people can
so easily find niches of like-minded types and discover that their own tentative view is shared
by others.
Cass Sunstein, Chronicle of Higher Education (2007)
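The mechanisms Sunstein describes (drift toward the dominant position, plus growing confidence from corroboration) can be caricatured in a few lines of simulation. This is an editorial sketch to make the dynamic concrete, not a model from the text; the opinion values and update weights are invented.

```python
# Toy simulation of group polarization in a like-minded enclave.
# Opinions run from -1 (strongly opposed) to +1 (strongly in favor).
# Each round, members (a) drift toward the group mean (social
# comparison) and (b) gain confidence, shifting further in the
# direction the group already leans (corroboration).

def deliberate(opinions, rounds=15, confidence_gain=0.05):
    ops = list(opinions)
    for _ in range(rounds):
        mean = sum(ops) / len(ops)
        lean = 1.0 if mean > 0 else -1.0
        # Blend own view with the group mean, then push toward the
        # group's antecedent direction; clamp to the [-1, 1] scale.
        ops = [min(1.0, max(-1.0, 0.7 * o + 0.3 * mean + confidence_gain * lean))
               for o in ops]
    return ops

before = [0.2, 0.4, 0.5, 0.3, 0.6]   # a mildly favorable enclave
after = deliberate(before)
print(f"mean before: {sum(before)/len(before):.2f}, "
      f"mean after: {sum(after)/len(after):.2f}")
```

Run on a mildly favorable group, the simulation reproduces both Colorado findings: the mean shifts toward the extreme, and the spread of opinion within the group collapses.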
Questions:
What is Amazon Web Services and why is this business important to Amazon?
Explain: Each of the Fab Four (Google/Apple/Amazon/Facebook) believes that it can somehow
define the future of television. The honey pot? Not only that $70 billion in domestic ad revenue but also
$74 billion in cable-subscriber fees.
Which of the Fab Four is best positioned to dominate the distribution of television and movies?
Why?
One industry stands directly between the Fab Four (Google/Apple/Amazon/Facebook) and global
domination. It's an industry that frustrates you every day, one that consistently ranks at the bottom of
consumer satisfaction surveys, that poster child for stifling innovation and creativity: your phone carrier.
And your cable or DSL firm. For Amazon, Apple, Facebook, and Google, the world's wireless and
broadband companies are a blessing and a curse. By investing in the infrastructure that powers the
Internet, they've made the four firms' services possible. But the telcos and cable companies are also
gatekeepers to customers, and Amazon, Apple, Google, and Facebook would love to cut them out of the
equation. In the long run, they actually stand a shot at doing so. Research and then explain the
significance of this passage.
One of the technologies that Google released in the past few years anonymously tracks where its
Android phones go. So if you have a phone that is powered by Google's operating system, when you're
driving down the road, the phone might send back data to Google about how fast you're going, where
you are and various other statistics about your drive. And then Google can collect all of that information
from all the Android phones, and it can create a very accurate representation of traffic patterns in a city.
Research and then explain the significance of this passage.
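The aggregation step the passage describes, turning many anonymous speed reports into a picture of traffic, amounts to grouping reports by road and averaging. A minimal sketch (road names and speeds are invented for illustration):

```python
# Toy aggregation of anonymous phone speed reports into per-road
# average speeds, the kind of computation the passage attributes to
# Google's traffic maps. All data here is invented.
from collections import defaultdict

reports = [
    ("I-95 N", 12), ("I-95 N", 18), ("I-95 N", 15),   # congested
    ("Route 9", 55), ("Route 9", 61),                 # free-flowing
]

def average_speeds(reports):
    totals = defaultdict(lambda: [0, 0])  # road -> [sum_mph, count]
    for road, mph in reports:
        totals[road][0] += mph
        totals[road][1] += 1
    return {road: s / n for road, (s, n) in totals.items()}

print(average_speeds(reports))
```

No single report identifies a driver, yet the aggregate is a useful traffic map, which is exactly the trade the passage asks you to weigh.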
Data powers new inventions: Google's voice-recognition system, its traffic maps, and its spell-checker
are all based on large-scale, anonymous customer tracking. Research and then explain the significance
of this passage.
Facebook is not a bystander in the competition to create the best possible digital shopping experience
for consumers, another battle for which those platforms are being built. To win this one means taking
turf from Amazon. Facebook Gifts is a new service in America which mines what the company knows
about its users, their tastes and their friendships to encourage them to buy and send each other gifts at
appropriate times, such as birthdays. Will Facebook Gifts ever become a meaningfully sized business?
Read this story and explain its significance:
http://qz.com/83243/amazon-apple-google-facebook-are-all-trying-to-turn-into-the-same-ubercompany/
The other outfit standing between you and the Fab Four (Google/Apple/Amazon/Facebook) is one
that barely registers: your credit-card company. When you buy something through iTunes, the Android
Market, Amazon, or Facebook, the credit-card company gets a small cut of your payment. To these
giants, the cut represents a terrible inefficiency--why surrender all that cash to an interloper? And not
just any interloper, but an inefficient, unfriendly one that rarely innovates for its consumers. These
credit-card giants seem ripe for the picking. Here's how that scenario would play out. The first step is
getting consumers used to the idea of paying by phone. The second step is to encourage consumers to
link their bank accounts directly to their devices, thus eliminating the credit-card middleman. Research
and then explain the significance of this passage.
Review the organizational charts graphic in this story and tell us what you think it means.
http://www.ritholtz.com/blog/2013/07/organizational-charts-of-amazon-apple-facebook-microsoft/
Platforms are the weapons with which the warring factions seek to rule their own lands and conquer
new ones. Patents are the weapons with which they try straightforwardly to hurt their rivals. Although
some lawsuits have been launched by trolls who accumulate patents without actually making stuff, a
number have been launched by one giant, or a company acting as its cat's-paw, against one of the
others. Apple has been lobbing lawsuits around in the smartphone arena as if armed with a trebuchet.
Google snapped up Motorola Mobility in large part to get its hands on the firm's thousands of patents
issued and pending, thus bulking up its own defences and accumulating ammunition to fling at the
fortresses of the competition. Research and then explain the significance of this passage (clue: the point
here is how patents create power for these companies vis-à-vis each other).
Google is experimenting with a service that would let folk find goods online, order them and have
them delivered within a day for a modest fee. This seems similar to Amazon's hugely successful Prime
service, which costs $79 a year to join in America. Rather than try to replicate the e-commerce giant's
extensive network of warehouses, Google is looking for partnerships with shipping companies and
retailers instead. But if it is serious about taking on Amazon, it may ultimately have to buy a logistics
firm. At $69 billion, UPS has a market value less than a third of Google's; it is valued at less than twice
the search giant's cash pile. Research and then explain the significance of this passage.
Review the infographic in this story and share what you find compelling in it.
http://venturebeat.com/2013/06/25/every-day-tracking/
Write and respond to a lesson-relevant question of your choosing.
The tech boom of the 1990s was thought to spell the death of plenty of brick-and-mortar
companies, but they coexisted with their e-rivals for years. It looks like those days are now
coming to an end.
We all knew that the 1990s tech boom would change the world. But then a funny thing
happened: For years brick-and-mortar companies happily coexisted with their e-rivals. Borders,
for instance, actually increased sales from 2000 to 2005 as it dueled Amazon (AMZN). Now those
days seem to be ending. Digital companies are so big, and growing so fast, that they're
obliterating old businesses. Consider these four examples: The U.S. Postal Service says it will be
insolvent by the end of 2011 without a bailout. Blockbuster and Borders have filed for
bankruptcy. And music stores keep closing.
Texts vs. Mail. The U.S. Postal Service is on track to lose $6 billion this year, as e-mails and
texting reduce mail volumes faster than postage fees can rise.
Netflix vs. Blockbuster. Blockbuster hit 4,000 stores in two decades. Then, in 1997, Reed
Hastings got charged a $40 late fee on Apollo 13 and founded Netflix (NFLX). The rest is history.
Amazon vs. Borders. Amazon almost single-handedly bankrupted the No. 2 bookseller in a
decade. Barnes & Noble (BKS) is fighting back with its Nook.
iTunes vs. CDs. iTunes made its debut in 2003, with devastating effects on music retailers.
Tower Records went bust in 2004. Musicland folded in 2006. FYE has shriveled.
Coming Soon: The End of Movie Theatres?
Andrew Keen, The Cult of the Amateur (2008)
The Internet is beginning to undermine the viability of the movie theater. ClickStar, an Intel-funded start-up founded by actor Morgan Freeman and launched in December 2006, is debuting
some independent films on the Internet the same day they are released in the theaters. Such
practices, which go against long-held Hollywood strategy, will compound the crisis facing movie
theaters. When a movie is available on the Internet as soon as it has been released, why go to
the extra inconvenience and cost of seeing it in a local theater? For many technophiles
accustomed to watching all media on their computers already, the big screen viewing experience
of the multiplex will hardly be missed.
Walled gardens
Tim Berners-Lee, Scientific American (2010)
In contrast, not using open standards creates closed worlds. Apple's iTunes system, for example,
identifies songs and videos using URIs that are open. But instead of http: the addresses begin
with itunes:, which is proprietary. You can access an itunes: link only using Apple's
proprietary iTunes program. You can't make a link to any information in the iTunes world, a song
or information about a band. You can't send that link to someone else to see. You are no longer
on the Web. The iTunes world is centralized and walled off. You are trapped in a single store,
rather than being on the open marketplace. For all the store's wonderful features, its evolution is
limited to what one company thinks up. Other companies are also creating closed worlds. The
tendency for magazines, for example, to produce smartphone apps rather than Web apps is
disturbing, because that material is off the Web. You can't bookmark it or e-mail a link to a page
within it. You can't tweet it. It is better to build a Web app that will also run on smartphone
browsers, and the techniques for doing so are getting better all the time.
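The distinction Berners-Lee draws is visible in the URI scheme itself: an http(s) link is resolvable by any Web client, while a proprietary scheme only works inside one vendor's program. A small sketch using Python's standard URI parser (the example URIs are invented for illustration):

```python
# Check whether a link lives "on the Web" in Berners-Lee's sense:
# an open http(s) URI any client can follow, versus a proprietary
# scheme like itunes: that only one vendor's app understands.
from urllib.parse import urlparse

OPEN_SCHEMES = {"http", "https"}

def is_on_the_web(uri):
    """True if the link uses an open Web scheme any client can follow."""
    return urlparse(uri).scheme in OPEN_SCHEMES

print(is_on_the_web("https://example.org/band/some-song"))   # True
print(is_on_the_web("itunes://itunes.apple.com/album/123"))  # False
```

The second link can't be bookmarked, e-mailed, or tweeted in any meaningful way, which is exactly the "walled garden" complaint above.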
The web as we know it is being threatened
Tim Berners-Lee, Scientific American (2010)
The Web as we know it, however, is being threatened in different ways. Some of its most
successful inhabitants have begun to chip away at its principles. Large social-networking sites
are walling off information posted by their users from the rest of the Web. Wireless Internet
providers are being tempted to slow traffic to sites with which they have not made deals.
Governments, totalitarian and democratic alike, are monitoring people's online habits,
endangering important human rights. Several threats to the Web's universality have arisen
recently. Cable television companies that sell Internet connectivity are considering whether to
limit their Internet users to downloading only the company's mix of entertainment.
Social-networking sites present a different kind of problem. Facebook, LinkedIn, Friendster and others
typically provide value by capturing information as you enter it: your birthday, your e-mail
address, your likes, and links indicating who is friends with whom and who is in which
photograph. The sites assemble these bits of data into brilliant databases and reuse the
information to provide value-added service, but only within their sites. Once you enter your data
into one of these services, you cannot easily use them on another site. Each site is a silo, walled
off from the others. Yes, your site's pages are on the Web, but your data are not.
Without Their Permission: How the 21st Century Will Be Made, Not Managed
Lock-in
Eli Pariser, The Filter Bubble: What the Internet is Hiding From You (2011)
Which brings us to lock-in. Lock-in is the point at which users are so invested in their technology
that even if competitors might offer better services, it's not worth making the switch. If you're a
Facebook member, think about what it'd take to get you to switch to another social networking
site, even if the site had vastly greater features. It'd probably take a lot: re-creating your whole
profile, uploading all of those pictures, and laboriously entering your friends' names would be
extremely tedious. You're pretty locked in. Likewise, Gmail, Google Play, Google Drive, and a host
of other products are part of an orchestrated campaign for Google lock-in. The fight between
Google and Facebook hinges on which can achieve lock-in for the most users.
Google is big
Scott Cleland, Search & Destroy: Why You Cant Trust Google, Inc. (2011)
Google has more than one billion users. Google dominates both search and search advertising.
Google handles more than two billion Internet searches per day. Google's tentacles extend to
every major type of content, every major hardware platform, every nook and cranny of the Web,
and every corner of the globe. Google has indexed over one trillion Web pages. If you spent one
minute scanning each page indexed by Google, then you would need more than 38,000 years to
scan them all. Gmail's data repository is equivalent to about 1.74 billion music CDs. If you
printed the information Google processes each day, then you would need to cut down 1.2 million
trees.
A list of information that Google collects
Scott Cleland, Search & Destroy: Why You Cant Trust Google, Inc. (2011)
Your interests, desires, and needs (e.g., Search)
Your search history (e.g., Web History)
The websites you visit (e.g., Chrome)
The videos that you watch (e.g., YouTube)
The news, commentary, and books that you read (e.g., Google Books)
The topics that you discuss (e.g., Google Groups)
The content that you produce (e.g., Gmail)
Your, your family's, and your friends' faces (e.g., Picasa)
The sound of your voice and the people you call (e.g., Google Talk)
Your medical history and prescriptions (e.g., Google Health)
Your purchases (e.g., Google Maps)
Your locations of interest (e.g., Google Street View)
Your personal information (e.g., Checkout)
Your home, workplace, and hangouts (e.g., Google Latitude)
Your activity plans (e.g., Google Calendar)
The data stored on your computer (e.g., Google Desktop with Search across computers enabled)
The TV programs you watch (e.g., Google TV)
This is by no means a complete list; there are hundreds of Google products. Google also offers
tools that enable application developers to gather information and invests in companies
developing new sources of information.
For example, Google has invested in 23andMe, a company that is helping individuals
understand their own genetic information using recent advances in DNA analysis technologies
and web-based interactive tools. Personal genomes could be the ultimate tool for targeted
advertising, personalization, and even medical fortune telling.
YouTube, the tracking tool
Scott Cleland, Search & Destroy: Why You Cant Trust Google, Inc. (2011)
YouTube is one of Google's most powerful tracking tools. Acquired by Google in 2006 for $1.65
billion (an astounding figure given that YouTube did not have a viable business model at the
time), YouTube is the leading video-sharing website. YouTube makes it easy for blogs and other
websites to embed YouTube videos in their Web pages. As we've seen, Google is informed every
time your browser loads a Web page with embedded YouTube videos. Google not only keeps a
log of the YouTube videos you watch, it associates the log with your true identity.
Hmm...
Scott Cleland, Search & Destroy: Why You Cant Trust Google, Inc. (2011)
Google has patented a system for monitoring the way you move the on-screen pointer with your
PC's mouse. It's also been reported that Google is developing a method for listening to
background sounds picked up by your PC's microphone.
Gmail Privacy Concerns
Scott Cleland, Search & Destroy: Why You Cant Trust Google, Inc. (2011)
Gmail scans all of your email: both the email you compose and send to others and the email
that others compose and send to you. By subscribing to Gmail, you are giving Google permission
to scan email you receive from people who are not Gmail subscribers, who have not given
Google permission to scan and store their email, and who might be horrified if they knew that
their messages were being scanned and permanently stored by a third party. Less well known is
the risk posed by a Gmail feature called auto-save. Many desktop programs, including email
programs, automatically save draft documents as you compose them. This comes in handy if the
program crashes or your PC loses power before you save your most recent work. While desktop
applications typically save drafts to your PC, auto-save sends draft messages over the Internet
and saves them on Google's servers. If you compose a message out of anger, and later decide to
replace it with a calmer message, Google may retain a copy of the embarrassing draft.
Google and the NSA
Scott Cleland, Search & Destroy: Why You Cant Trust Google, Inc. (2011)
More recently, Google began collaborating with the National Security Agency (NSA) for the
ostensible purpose of thwarting cyberattacks. The NSA is chartered to gather intelligence from
foreign communications. However, this often involves monitoring communications between
people in foreign countries and people in the U.S. While cyberattacks on the U.S. information
infrastructure are a legitimate national security concern, collaboration between the world's
biggest commercial data mining operation and the NSA presents myriad opportunities for abuse.
What happens when Google changes its mind?
Siva Vaidhyanathan, The Googlization of Everything (2011)
The main risk of the privatization of book content is simple: libraries and universities last, but
companies wither and fail. Should we entrust our heritage and collective knowledge to a
company that has been around for less than fifteen years? What will happen if stockholders
decide that Google Books is a money loser or too much of a liability? What if they decide that the
infrastructure costs of keeping all those files on all those servers are not justifiable?
Is Google too big to fail?
Stephen Gandel, TIME magazine (2011)
Another question: Is Google too big to fail? The government would probably weigh the impact on
Web business and the economy in general if Google were to be broken up.
Google's Monopoly
Stephen Gandel, TIME magazine (2011)
It is clear that Google has a tight hold on the Internet. And that has been clearer than ever in the
past year. The New York Times has run a number of stories showing how companies have been
able to boost their sales by gaming Google's search algorithms. When Google changed its
algorithm, those sites fell off the search page, basically hiding them from the world. The stories
also showed what lengths companies will go to not to cross Google and what they will do to try
to make up. If that's not a sign of monopoly, I don't know what is. And that's just the search side
of the business. Google's power on the advertising side is even more troublesome. If you are a
small company and Google won't take your advertisement, you basically can't market your
wares on the Web, at least not to any sizable audience.
Nothing is free in this world
Siva Vaidhyanathan, The Googlization of Everything (and why we should worry) (2011)
One of the great attractions of Google is that it appears to offer so many powerful services for
free, that is, for no remuneration. But there is an implicit nonmonetary transaction between
Google and its users. Google gives us Web search, e-mail, Blogger platforms, and YouTube
videos. In return, Google gets information about our habits and predilections so that it can more
efficiently target advertisements at us. Google's core business is consumer profiling. It generates
dossiers on many of us. It stores cookies in our Web browsers to track our clicks and
curiosities. Yet we have no idea how substantial or accurate these digital portraits are. This book
generates a fuller picture of what is at stake in this apparently costless transaction and a new
account of surveillance that goes beyond the now-trite Panopticon model.
Thoughts on asymmetrical power
Eli Pariser, The Filter Bubble: What the Internet is Hiding From You (2011)
One of the defining traits of the new personal information environment is that it's asymmetrical.
As Jonathan Zittrain argues in The Future of the Internet - And How to Stop It, nowadays, an
individual must increasingly give information about himself to large and relatively faceless
institutions, for handling and use by strangers: unknown, unseen, and all too frequently,
unresponsive.
In a small town or an apartment building with paper-thin walls, what I know about you is roughly
the same as what you know about me. That's a basis for a social contract, in which we'll
deliberately ignore some of what we know. The new privacyless world does away with that
contract. I can know a lot about you without your knowing I know. "There's an implicit bargain in
our behavior," search expert John Battelle told me, "that we haven't done the math on."
If Sir Francis Bacon is right that knowledge is power, privacy proponent Viktor Mayer-Schonberger
writes that what we're witnessing now is nothing less than "a redistribution of
information power from the powerless to the powerful." It'd be one thing if we all knew
everything about each other. It's another when centralized entities know a lot more about us
than we know about each other, and sometimes, more than we know about ourselves. If
knowledge is power, then asymmetries in knowledge are asymmetries in power.
Google's famous "Don't be evil" motto is presumably intended to allay some of these concerns. I
once explained to a Google search engineer that while I didn't think the company was currently
evil, it seemed to have at its fingertips everything it needed to do evil if it wished. He smiled
broadly. "Right," he said. "We're not evil. We try really hard not to be evil. But if we wanted to,
man, could we ever!"
The Google Chronicles: 7 Facts on Founders Larry Page & Sergey Brin
We are in the early days of online harassment being taken as a serious problem, and not simply
a quirk of online life. The likely solution will be a combination of things. The expansion of laws
like the one currently on the books in California, which expands what constitutes online
harassment, could help put the pressure on harassers. The upcoming Supreme Court case, Elonis
v. United States, looks to test the limits of free speech versus threatening comments on
Facebook. But there are limits to legal action. "Law can only do so much," says Citron. "It's a
blunt instrument." Which leaves societal pressure. Sexual harassment in the workplace has been
greatly reduced not just because employers are suddenly liable; there's also a huge social
stigma against those who sexually harass their co-workers. It's difficult to see, in 2014, how
exactly to stop an anonymous person sitting behind a keyboard from making the lives of others
miserable if they so choose. Can a combination of legal action, market pressure, and societal
taboo work together to curb harassment? Too many people do too much online for things to stay
the way they are.
around, activates the camera, and proceeds to take pictures of herself and her friends, instantly
uploading them to her Facebook page for the world to see. She does this for about an hour, until
a message comes through one of her networks and she's off to the next location for the cycle to
begin all over again.
Gina is the girl who is everywhere at once, yet ultimately nowhere at all. She is already
violating the first command by maintaining an "always on" relationship to her devices and
networks. This has in turn fostered her manic, compulsive need to keep tabs on everything
everyone else is doing at all times. It has not only removed her from linear time, however, but
also from physical place. She relates to her friends through the network, while practically
ignoring whomever she is with at the moment. She relates to the places and people she is
actually with only insofar as they are suitable for transmission to others in remote locations. The
most social girl in her class doesn't really socialize in the real world at all.
No such thing as full attention
Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other
(2011)
Teenagers know that when they communicate by instant message, they compete with many other windows on a computer screen. They know how little attention they are getting because they know how little they give to the instant messages they receive. One sophomore girl at Branscomb High School compares instant messaging to being on "cruise control" or "automatic pilot": "Your attention is elsewhere." A Branscomb senior says, "Even if I give my full attention to the person I am IMing . . . they are not giving full attention to me." The first thing he does when he makes a call is to gauge whether the person on the other end is there "just for me." This is one advantage of a call. When you text or instant-message, you have no way to tell how much else is going on for the person writing you. He or she could also be on the phone, doing homework, watching TV, or in the midst of other online conversations.
On stalking and showers
Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other
(2011)
So, stalking is a transgression that does not transgress. A seventeen-year-old junior at the Fillmore School describes it as "the worst. Normal, but still creepy." Normal because it's not against the rules to look at people's wall-to-wall conversations [on Facebook]. Creepy because "it's like listening to a conversation that you are not in, and after stalking I feel like I need to take a shower." Just starting college, Dawn, eighteen, says she is obsessed with the interesting people who are her new classmates: "I spend all night reading people's walls. I track their parties. I check out their girlfriends." She, too, says, "My time on Facebook makes me feel dirty." So stalking may not be breaking any rules, but it has given young people a way to invade each other's privacy that can make them feel like spies and pornographers.
Goodbye empathy
Brian Chen, Always On (2011)
In another recent study, based on surveys measuring empathy among almost fourteen thousand college students over the last thirty years, University of Michigan researchers found that today's college students are significantly less empathetic than college students of the 1980s and 1990s. The researchers suggested that perhaps connecting with friends online makes shutting out real-world issues easier. "The ease of having friends online might make people more likely to just tune out when they don't feel like responding to others' problems, a behavior that could carry over offline," said Edward O'Brien, a University of Michigan graduate student who helped with the study.
have the time to just go on and on. I like texting, Twitter, looking at someone's Facebook wall. I learn what I need to know.
Technologies live in complex ecologies. The meaning of any one depends on what others are available. The telephone was once a way to touch base or ask a simple question. But once you have access to e-mail, instant messaging, and texting, things change. Although we still use the phone to keep up with those closest to us, we use it less outside this circle. Not only do people say that a phone call asks too much, they worry it will be received as demanding too much. Randolph, a forty-six-year-old architect with two jobs, two young children, and a twelve-year-old son from a former marriage, makes both points. He avoids the telephone because he feels "tapped out. . . . It promises more than I'm willing to deliver." If he keeps his communications to text and e-mail, he believes he can keep it together. He explains, "Now that there is e-mail, people expect that a call will be more complicated. Not about facts. A fuller thing. People expect it to take time, or else you wouldn't have called."
Me and my machine
William Powers, Hamlet's BlackBerry: Building a Good Life in the Digital Age (2010)
Educator and writer Lowell Monke shared with his students a troubling study that showed that many young people prefer to interact with machines rather than directly with human beings. The next day, one of the students sent him an e-mail explaining why this might be: "I do feel deeply disturbed when I can run errand after errand, and complete one task after another with the help of bank clerks, cashiers, postal employees, and hairstylists without ANY eye contact at all! After a wicked morning of that, I am ready to conduct all business online." In a society in which adults so commonly treat each other mechanically, Monke writes, perhaps we shouldn't be surprised that our youth are more attracted to machines. We believe in our screens so much, we've placed them at the center of our lives, so why shouldn't they?
Giving up face-to-face
Christine Rosen, The New Atlantis (2007)
We should also take note of the trend toward giving up face-to-face for virtual contact, and, in some cases, a preference for the latter. Today, many of our cultural, social, and political interactions take place through eminently convenient technological surrogates. Why go to the bank if you can use the ATM? Why browse in a bookstore when you can simply peruse the personalized selections Amazon.com has made for you? These virtual networks greatly expand our opportunities to meet others, but they might also result in our valuing less the capacity for genuine connection. As the young woman writing in the Times admitted, "I consistently trade actual human contact for the more reliable high of smiles on MySpace, winks on Match.com, and pokes on Facebook." That she finds these online relationships more reliable is telling: it shows a desire to avoid the vulnerability and uncertainty that true friendship entails. Real intimacy requires risk: the risk of disapproval, of heartache, of being thought a fool. Social networking websites may make relationships more reliable, but whether those relationships can be humanly satisfying remains to be seen.
Peer Absorption
Mark Bauerlein, The Dumbest Generation (2008)
The enhanced connectivity, and the indulgence of teachers and journalists, feed yet another adolescent vice that technophiles never mention: peer absorption. Educators speak about the importance of role models and the career pressures facing kids, but in truth, adolescents care a lot more about what other adolescents think than what their elders think. Their egos are fragile, their beliefs in transition, their values uncertain. They inhabit a rigorous world of consumerism and conformity, of rebellious poses and withering group judgments. Boys struggle to acquire the courage and strength of manhood, girls the poise and strength of womanhood. They tease one another mercilessly, and a rejection can crush them. Life is a pinball game of polarized demands: a part-time job that requires punctuality and diligence, pals who urge them to cut up in class; a midterm forcing them to stay home and study, a friend who wants to catch a horror flick. For many of them, good standing with classmates is the only way to secure a safe identity, and so they spend hours on the channels of adolescent fare searching out the latest in clothes, slang, music, sports, celebrities, school gossip, and one another. Technology has made it fabulously easier.
Groupthink and the peer-to-group phenomenon
Tom Zeller, The New York Times (2010)
"What's hard to measure is the impact of groupthink, of group mentality, and the tendency of what we might call the democratization of social interaction, and how that changes this generation's relationship with almost everything they come in contact with. With the technology, the Internet in terms of being able to facilitate the social networking, there's just so much ability to quickly transfer information." He calls it the peer-to-group phenomenon, a digital-age manifestation of the grapevine. "When someone wants to share it, forward it, record it, take a picture of it, whatever the case may be, that puts it into a form of currency." "You've got a group of kids who are unbelievably, incredibly loyal to each other," Dr. Levine said. "They are very bound to ethics and values. But in a funny sort of way, it prevents some of them from developing as individuals." Along with finding technological dexterity in this group, and a highly developed ability to work in team settings, Dr. Levine said he had encountered concerns that some young people lacked the ability to think and plan for the long term, that they withered without immediate feedback, and that the machinery of groupthink had bred a generation flush with loyal comrades but potentially weak on leaders.
The culture war: How new media keeps corrupting our children
Tom Standage, WIRED (2006)
US senator Charles Schumer says some videogames aimed at kids "desensitize them to death
and destruction." But dire pronouncements about new forms of entertainment are old hat. It
goes like this: Young people embrace an activity. Adults condemn it. The kids grow up, no better
or worse than their elders, and the moral panic subsides. Then the whole cycle starts over.
Here's how the establishment has greeted past scourges.
Videogames: "The disturbing material in Grand Theft Auto and other games like it is stealing the innocence of our children and it's making the difficult job of being a parent even harder ... I believe that the ability of our children to access pornographic and outrageously violent material on video games rated for adults is spiraling out of control." - US senator Hillary Rodham Clinton, 2005
Rock and Roll: "The effect of rock and roll on young people is to turn them into devil worshippers; to stimulate self-expression through sex; to provoke lawlessness; impair nervous stability and destroy the sanctity of marriage. It is an evil influence on the youth of our country." - Minister Albert Carter, 1956
Novels: "The free access which many young people have to romances, novels, and plays has poisoned the mind and corrupted the morals of many a promising youth; and prevented others from improving their minds in useful knowledge. Parents take care to feed their children with wholesome diet; and yet how unconcerned about the provision for the mind, whether they are furnished with salutary food, or with trash, chaff, or poison?" - Reverend Enos Hitchcock, Memoirs of the Bloomsgrove Family, 1790
Movies: "This new form of entertainment has gone far to blast maidenhood ... Depraved adults with candies and pennies beguile children with the inevitable result. The Society has prosecuted many for leading girls astray through these picture shows, but GOD alone knows how many are leading dissolute lives begun at the 'moving pictures.'" - The Annual Report of the New York Society for the Prevention of Cruelty to Children, 1909
The Telephone: "Does the telephone make men more active or more lazy? Does [it] break up home life and the old practice of visiting friends?" - Survey conducted by the Knights of Columbus Adult Education Committee, San Francisco Bay Area, 1926
Comic Books: "Many adults think that the crimes described in comic books are so far removed from the child's life that for children they are merely something imaginative or fantastic. But we have found this to be a great error. Comic books and life are connected. A bank robbery is easily translated into the rifling of a candy store. Delinquencies formerly restricted to adults are increasingly committed by young people and children ... All child drug addicts, and all children drawn into the narcotics traffic as messengers, with whom we have had contact, were inveterate comic-book readers ... This kind of thing is not good mental nourishment for children!" - Fredric Wertham, Seduction of the Innocent, 1954
1
With respect to identity formation: Apps can short-circuit identity formation, pushing you into being someone else's avatar (that of your parents, your friends, or one formulated by some app producer), or, by foregrounding various options, they can allow you to approach identity formation more deliberately, holistically, thoughtfully. You may end up with a stronger and more powerful identity, or you may succumb to a prepackaged identity or to endless role diffusion.
With respect to intimacy: Apps can facilitate superficial ties, discourage face-to-face confrontations and interactions, suggest that all human relations can be classified if not predetermined in advance, or they can expose you to a much wider world, provide novel ways of relating to people, while not preventing you from shutting off the devices as warranted, and that puts you in charge of the apps rather than vice versa. You may end up with deeper and longer-lasting relations to others, or with a superficial stance better described as cool, isolated, or transactional.
With respect to imagination: Apps can make you lazy, discourage the development of new skills, limit you to mimicry or tiny trivial tweaks or memes, or they can open up whole new worlds for imagining, creating, producing, remixing, even forging new identities and enabling rich forms of intimacy.
2
The rival brand of psychology, which came into prominence during Howard's own professional lifetime, is called cognitivism or constructivism. On this view, skills and knowledge are constructed on the basis of the individual's own active explorations of the environment. Rewards supplied by others are fine, but the most important activities are ones that are intrinsically rewarding, based on one's own discovered pleasures as one explores the world. Imitation and modeling are out; so are tests, less kindly called "drill and kill." In sharp contrast, constructivists call for rich and inviting problems and puzzles, which will engage curiosity and catalyze extensive exploration, with, at most, the "guide on the side," rather than the "sage on the stage." On the constructivist view, the best way to educate is to provide inviting materials and get out of the way.
As for the probability of these various alternatives, heated debate already exists in the writings of the digerati. On the one side we find unabashed enthusiasts of the digital world. In the view of experts like danah boyd, Cathy Davidson, Henry Jenkins, Clay Shirky, and David Weinberger, the digital media hold the promise of ushering in an age of unparalleled democratic participation, mastery of diverse skills and areas of knowledge, and creative expression in various media, singularly or orchestrally. As they see it, for perhaps the first time in human history, it is possible for each of us to have access to the full range of information and opinions, to inform ourselves, to make judicious decisions about our own lives, to form links with others who want to achieve similar goals, be they political, economic, or cultural, and to benefit from the enhanced intelligence and wisdom enabled by a vast multi-networked system. On this perspective, a world replete with apps is a world in which endless options arise, with at least the majority tilted in positive, world-building, personally fulfilling directions. It's a constructivist's dream.
3
Others are less sanguine. Nicholas Carr claims that, with their speed and brevity, the digital media encourage superficial thinking, thereby thwarting the sustained reading and reflection enabled broadly by the Gutenberg era. Raising the stakes, Mark Bauerlein invokes the inflammatory epithet "the dumbest generation." Cass Sunstein fears that the digital media encourage us to consort with like-minded persons; far from exposing us to a range of opinions and broadening our horizons, the media enable, or, more perniciously, dictate, the creation of intellectual and artistic silos or echo chambers. Sherry Turkle worries about an increasing sense of isolation and the demise of open, exploratory conversations, while Jaron Lanier laments threats to our poetic, musical, and artistic souls. On this perspective, an app-filled world brings about dependence on the particulars of each currently popular app, and a general expectation that one's future, indeed, the future itself, will be dictated by the technological options of the time. It's a constructivist's nightmare.
4
The situation could not be more different from that which obtains today. Howard has taught students intermittently in the 1960s and 1970s and regularly ever since. With every passing decade, it appears to Howard that students look increasingly to their teachers, and more broadly to their supervisors and their mentors, for the correct way, for what is wanted, for the route to an "A," to approval, to a positive letter of recommendation, smoothing the way to the next step on the ladder of success. There's more. Many students convey the impression that the authority figures know just what they want from their charges; that they could be straightforward and say what is wanted; and that they are being irresponsible, delinquent, unfair, and even unethical in withholding the recipe, the road map. The light-hearted version of this attitude is the all-too-familiar question, "Will this be on the exam?" The nuts-and-bolts version is, "Just tell us what you want and we will give it to you."
5
Back to our story about the generations, but with an unexpected twist. In mid-twentieth-century America, generations were routinely spoken of in terms of their defining political experiences or powerful cultural forces. Only in recent memory has the characterization of a generation taken on a distinctly technological flavor. In his studies of successive waves of college students, Arthur Levine (with colleagues) has discerned a revealing trend. Students in the latter decades of the twentieth century characterized themselves in terms of their common experiences vis-à-vis the Kennedy assassination, the Vietnam War, the Watergate burglary and investigation, the shuttle disaster, the attack on the Twin Towers in September 2001. But once the opening years of the twenty-first century had passed, political events increasingly took a back seat. Instead, young people spoke about the common experiences of their generation in terms of the Internet, the web, handheld devices, and smartphones, along with the social and cultural connections that they enabled, most prominently, the social networking platform Facebook.
6
THE APPS ARRAYED ON a person's smartphone or tablet represent a fingerprint of sorts, only instead of a unique pattern of ridges, it's the combination of interests, habits, and social connections that identify that person. A news app might be sandwiched between a fantasy sports app and a piano keyboard app, revealing multiple facets of one's identity. Because many of these apps provide access to various online communities, each facet allows the owner to find ready communion with similarly oriented people. Though the range of self-expression is great online, it's not unrestricted. For instance, expressions are limited to 140 characters on Twitter, whereas digitally manipulated photos are the coin of the realm on Instagram. The app identity, then, is multifaceted, highly personalized, outward-facing, and constrained by the programming decisions of the app designer. Just how are youth's identities shaped and expressed in the age of the app? Are they truly different or just superficially so?
7
Our focus group participants believe that the identities of today's App Generation are more externally oriented than the identities of predigital youth. For the affluent youth, their focus largely rests on presenting a polished, packaged self that will meet the approval of college admissions officers and prospective employers. They appear to regard themselves increasingly as objects that have quantifiable value to others: an SAT score, a GPA, a collection of varsity letters, trophies, community service certifications, or other awards. One religious leader echoed the sentiments of the other participants in his focus group when he said that, for many young people, "Who am I?" means "What am I going to produce?"
Accompanying this sensibility is a calculated effort to maximize one's value in order to achieve
academic and professional success. One participant in a focus group said that when youth are
asked what their hopes are, they give "pragmatic, achievable answers" situated in the present or
near future such as "a good job" or "a good relationship" more often than was the case with
youth from earlier generations. During our conversation with the therapists, a participant
declared that many of today's young people suffer from a "planning delusion": a (mistaken) faith that if they make careful, practical plans, they will face no future challenges or obstacles to success.
8
The pragmatic, careerist focus of today's college students occurs within the context of a broader
societal trend toward individualism and away from a more community-minded, institutional
orientation. In his landmark book Bowling Alone, political scientist Robert Putnam shows that
Americans' participation in various civic institutions, such as bowling leagues, labor unions, and
church organizations, has declined steadily across cohorts born after World War II. As these
community ties loosen, they're replaced by a "moral freedom" that allows individuals to define
for themselves the meaning of a virtuous life and doesn't require them to sacrifice their personal
needs and desires in the process.
9
One psychologist expressed concern about young persons' constant self-projection and self-tracking online, which she says leaves them with little time for private contemplation or identity construction. She worries that, as a result, the prominence of their internal sense of self (in Riesman's term, "inner-directedness") is dwindling, perhaps to the point of nonexistence.
This lament about the lack of time for quiet reflection has become a common theme among academics and the popular press. Researchers have identified a number of benefits that accrue when a brain is at rest (relatively speaking) and focused inward. The downtime appears to play a restorative role, promoting feelings of well-being and, ultimately, helping individuals to focus their attention more effectively when it's needed. Daydreaming, wandering, and wondering have positive facets. Introspection may be particularly important for young people who are actively figuring out who and what they want to be. Without time and space to ponder alternative ways of being in the world, without breaking away from an app-determined life path, young persons risk prematurely foreclosing their identities, making it less likely that they will achieve a fully realized and personally fulfilling sense of self.
10
The question of the Internet's impact on self-focus has also become a popular focus among social scientists, who've generally observed a positive connection between narcissism and online behavior. For instance, one study found that people with high narcissism scores were more likely to post self-promoting content and engage in high levels of social activity on Facebook. Another found that college students with high narcissism scores were more likely to tweet about themselves. The authors of this study caution that while youth's online behavior may appear narcissistic to an outsider's eye, it's important to keep in mind that their primary motivation for going online may well be not to promote themselves but rather to maintain and nurture their social ties. (We'll examine the social dimension of youth's online lives in the next chapter.) Still, it's worth noting that about 30 to 40 percent of ordinary conversation consists of people talking about themselves, whereas around 80 percent of social media updates are self-focused. Also important is the fact that we can't determine in which direction the arrow of causality points. Does Internet use cause narcissism, or do narcissistic people use the Internet in distinctive ways?
11
Given the self-focus of narcissists, one might assume that they're self-assured and unaffected by
the goings-on of others. This turns out not to be the case. As Sherry Turkle explains in her book
Alone Together, "In the psychoanalytic tradition, one speaks about narcissism not to indicate
people who love themselves, but a personality so fragile that it needs constant support." Instead
of self-assuredness, then, narcissists tend more toward a fragile self that needs propping up by
external reassurances. Jean Twenge's research bears this out. Along with rising levels of
narcissism among youth, she finds increasing moodiness, restlessness, worry, sadness, and
feelings of isolation. In sharp contrast to Riesman's inner-directed persons, today's young people
are also more likely to feel that their lives are controlled by external social forces rather than
growing out of an internal locus of control. Consistent with Twenge's findings, researchers at the
University of California at Los Angeles found that the percentage of first-year college students
who said that they frequently felt "overwhelmed by all I had to do" during their senior year of
high school increased from 8 percent in 1985 to 30 percent in 2010.
12
Several of our participants identified a similar incongruity between youth's external polish and
their internal insecurities. The camp directors we interviewed told us that campers today
demonstrate more self-confidence in what they say they can do but are less willing to test their
abilities through action. They attributed this shift to youth's growing distaste for taking any
tangible risk that could end in failure, failure that once might have been witnessed by a few peers and then forgotten but today might become part of one's permanent digital footprint.
The themes of growing anxiety and aversion to risk surfaced in other focus groups. One therapist
reflected that youth seem reluctant to engage in certain endeavors for fear of feeling anxious or
depressed if they don't go as planned. Indeed, many participants agreed that young people's
identities are defined by insecurity and disequilibrium. The religious leaders remarked that youth
today are generally more fearful about their future. "Even the most confident Harvard grad," shared one participant, "is ... scared to death." The therapists observed that, to cope with this fear, many young persons display a notable lack of affect and an apparent goal to "feel nothing." Citing a word all too familiar to parents of today's teenagers, one participant called today's youth the "whatever" generation.
13
Turkle uses the metaphor of a tether to suggest that youth's constant connections to their digital devices and the people accessible through them weaken their ability to develop an autonomous sense of self. These technologies encourage youth to look outside themselves for reassurance, in matters both mundane and existential. Indeed, their thoughts and feelings don't seem real until confirmed by others. This argument is supported by empirical evidence showing that college students who use their digital devices to maintain frequent contact with their parents tend to be less autonomous. In the spirit of Turkle, some scholars have invoked the concept of psychasthenia in an effort to explain how people's online presence can weaken their sense of self to the point of full renunciation.
14
Still, many people, including youth, are optimistic about the Internet's power to expand our horizons and enrich our lives. In his book Here Comes Everybody, Clay Shirky suggests that the bowling leagues, lodges, and rotary clubs of the fifties and sixties have not simply vanished; rather, they have been replaced by a far greater number of online communities representing a wider range of interests. No matter how obscure one's interest, it can find expression and validation online, whether it be down the street or halfway around the globe. For young people, this access to "digital alter-egos" means that their identities as fangirls, gamers, chess players, or knitters don't have to be set aside to fit into a narrow peer culture.
15
What are teens saying through their apps, and to whom? As it turns out, a considerable portion of teens' computer-mediated communication is dedicated to making (and sometimes breaking) on-the-fly arrangements to meet up with their friends in person. In one of our studies, we asked teens what they would miss most about not having a cell phone. Sixteen-year-old Justin answered, "Just being able to make plans on the go, and stuff, because me and my friends, we don't really plan things. We just go out." The app mentality supports the belief that just as information, goods, and services are always and immediately accessible, so too are people. Scholars in the mobile communication field have dubbed such in-the-moment planning "microcoordination" and observe that it can slide into "hypercoordination" when teens start to feel left out of their social circles if separated from their mobile devices for any period of time.
16
In our focus groups, we learned that many of today's youth consider it less intrusive to send a text rather than call someone, and it's not uncommon for them to end relationships through text message or Facebook rather than in person. Similar to this is the phenomenon of text cancellations that many of us apparently now rely on to break plans with others at the last minute. Turkle contends that this sort of arm's-length way of conducting relationships ultimately empties them of true intimacy. She warns: "There is the risk that we come to see others as objects to be accessed, and only for the parts we find useful, comforting, or amusing." This emptying out of intimacy is likely what one focus group participant had in mind when she observed tellingly: "Kids are more and more connected, but less and less really connected."
17
There may be another way in which new media technologies remove the vulnerability from our interpersonal relationships and distance us from each other. In a provocative and much debated op-ed, "How to Live without Irony," scholar Christy Wampole observes a strong ironic sensibility among today's generation of youth. In her rendition, young people wear Justin Bieber T-shirts ironically, watch Glee ironically, and give each other birthday gifts ironically. By bathing their actions and interactions in a wash of sarcasm, young people distance themselves both from their actions and from other people. According to Wampole, the Internet supports, indeed encourages, this ironic turn. Online, the actions of public figures are transformed instantly into derisive memes and circulated widely. The addition of a witty hashtag at the end of a tweet empties it instantly of any seriousness. This sensibility is reinforced nightly on TV, and subsequently posted, shared, and tweeted about online, by Jon Stewart and Stephen Colbert, who wryly ridicule newscasters, politicians, and other well-known personalities. By turning everything into a joke, youth risk nothing because they make nothing of themselves vulnerable. Yet vulnerability is precisely what's needed to connect with other people in an honest and meaningful way.
18
Another app, FaceTime (Apple's answer to Skype), can also be used to illustrate the ease of falling
into a transactional rather than a transformational interpersonal exchange online. When Katie
and Molly first talked remotely using FaceTime, the first thing Katie noticed was that genuine eye
contact is impossible. If you want the other person to feel like you're looking them in the eyes,
then you have to look into the camera, not their eyes. In other words, to create the illusion of
eye contact one must actively avoid it. Something else that Katie noticed instantly was her own
image in the corner of the screen. She found it hard not to glance over at it periodically, which
turned her attention away from Molly and onto herself. Apparently, Molly was equally, if not more
so, enticed by the "Narcissus trap." In fact, at one point in their conversation, Katie was confused
when she made a funny face but Molly didn't react in the slightest. When Katie called her on it,
Molly admitted somewhat sheepishly that she'd been focusing on her own image and facial
expressions instead of her sister's. Overall, Katie's experiences with FaceTime, Skype, and
Google Hangouts have led her to conclude that, while it's great to be able to connect with others
across distances, it's difficult, if not impossible, to achieve the level of deep, warm connection
that face-to-face contact provides.
19
A similar dynamic may be playing out on the Internet. In The Filter Bubble, Eli Pariser explains
how search engines and social network sites show us only what we want to see (or what they
think we want to see). He uses Facebook's EdgeRank as one example of how this works.
EdgeRank uses an algorithm to rank each user's friend list according to how much interaction the
user has with each person on the list. EdgeRank then uses that ranking to structure users'
newsfeeds so they see more from the friends at the top of the list. Google's search algorithm
works in a similar way, such that two people conducting an identical Google search (whether it's
"performing arts in Atlanta" or "2012 presidential election") will be shown a different set of
results based on what Google knows about them (and drawing on previous search history, Gmail
contacts and exchanges, and YouTube posts and viewing habits, Google knows a lot!). Pariser
argues that such algorithms have a siloing effect, causing us to encounter only like-minded
people and ideas online. It's difficult to empathize with perspectives that we never see.
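EdgeRank's exact formula is proprietary, but public descriptions characterize it as scoring each story by roughly affinity x content weight x time decay, then sorting the feed by score. A minimal sketch of that idea in Python follows; all names, affinity values, and weights here are hypothetical illustrations, not Facebook's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class Story:
    author: str       # which friend posted it
    kind: str         # content type, e.g. "photo" or "status"
    age_hours: float  # how long ago it was posted

# Hypothetical per-friend affinity: how often this user interacts with each friend.
AFFINITY = {"alice": 0.9, "bob": 0.2}
# Hypothetical per-content-type weights.
WEIGHT = {"photo": 1.5, "status": 1.0}

def edge_score(story, affinity, weight, decay=0.9):
    """Score = affinity * content weight * time decay (the publicly described shape)."""
    return (affinity.get(story.author, 0.1)
            * weight.get(story.kind, 1.0)
            * decay ** story.age_hours)

def rank_feed(stories):
    """Order the newsfeed so high-affinity, fresh, heavily weighted stories come first."""
    return sorted(stories, key=lambda s: edge_score(s, AFFINITY, WEIGHT), reverse=True)

feed = rank_feed([Story("bob", "photo", 1.0), Story("alice", "status", 2.0)])
```

Even in this toy version, the siloing effect Pariser describes is visible: because "alice" already gets most of the user's interaction, her older, lower-weight status update still outranks "bob's" fresh photo, so the feed keeps showing more of the people the user already engages with.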
20
We find that digital media open up new avenues for youth to express themselves creatively.
Remix, collage, video production, and music composition (to name just a few popular artistic
genres of the day) are easier and cheaper for today's youth to pursue than were their predigital
counterparts. It's also easier to find an audience for one's creative productions. The app
metaphor serves us well here, since apps are easy to use, support diverse artistic genres, and
encourage sharing among their users.
And yet, reflecting patterns we observed in youth's expressions of personal identity and
experiences of intimacy, an app mentality can lead to an unwillingness to stretch beyond the
functionality of the software and the packaged sources of inspiration that come with a Google
search. We ask: Under what circumstances do apps enable imaginative expression? Under what
circumstances do they foster a dependent or narrow-minded approach to creation?
21
To complicate matters further, we also spoke with art teachers (visual art, music, and performing
arts) who'd been teaching for at least twenty years and therefore could reflect on changes
they've observed in students' imaginative processes over time. Though these teachers
celebrated the broad range of creative opportunities now open to today's youth (which we
discuss in greater detail below), several arts educators observed that today's students have
more difficulty in coming up with their own ideas; they're far more comfortable engaging with
existing ones. One participant reflected: "Some of the most artistically skilled kids cannot come
up with an idea. They've got full scholarships to Mass Art [Massachusetts College of Art and
Design] and they can't come up with an idea. ... They go to their laptop first. ... I find that I'm
constantly shoulder to shoulder asking what do you see? What does it mean? ... They're thinking
too much or saying they have nothing." Moreover, when they do come up with their own ideas,
they often have difficulty executing them, particularly in the absence of clear "executive
assistants." Said another participant, "Before, they used to jump in and see where the materials
would take them; now they ask what to do."
23
In his book You Are Not a Gadget, computer scientist and cultural critic Jaron Lanier bemoans the
effects of remix on individual creativity: "Pop culture has entered into a nostalgic malaise. Online
culture is dominated by trivial mash ups of the culture that existed before the onset of mashups,
and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of
reaction without action."
24
In his book Cognitive Surplus, Clay Shirky celebrates digital media's ability to connect people
easily, quickly, and cheaply. Drawing on examples like the Impressionist painters, who lived and
worked together in southern France, Shirky argues that collaboration is a central component of
creativity. Where collaboration is supported and encouraged, as it surely is online, creativity will
thrive.
25
As we've discussed in this chapter, however, the act of creation is circumscribed by the app's
underlying code and the developer who wrote it; to paraphrase Lawrence Lessig, the code
determines the creation. A specific hue of green may not be included in one's painting app; the
piccolo might be missing from the music app. Users have little choice but to work within these
limitations. The avenues to artistic expression may be many in the app era, but they're often
tightly bounded.
26
"Civilization advances by extending the
number of important operations which we can
perform without thinking about them."
Alfred North Whitehead
At the head of the chapter, we've affixed a statement by the philosopher Alfred North
Whitehead. Though it may be well known among the digerati (Howard first heard it quoted by a
leading technologist), we had not encountered it until we were putting the finishing touches on
this book. At first blush, the statement sounds just right. One finds oneself nodding in agreement:
yes, we value those inventions that allow us to make habitual those thoughts and actions that
could consume much time and effort. And indeed, we can think of many manmade devices
(ranging from the creation of script to the invention of the credit card) that have allowed us to
simplify formerly complex operations and to move on to other things. Is civilization even
imaginable without a multitude of labor-saving devices that free our hands and minds? Thank
goodness for the "flywheel of civilization"!
Yet on reflection, Whitehead's statement seems increasingly dual-edged to us. For sure, most of
us would like to automatize as much as possible; even our psychological antagonists, the
behaviorists and the constructivists, would agree. But do we want to automatize everything? And
who decides what is important? And where do we draw the line between an operation and the
content on which the operation is carried out?
27
Of course, other potent factors are at work. In this book, we have not spoken much about the
ambition and reach of vast multinational corporations or of totalitarian states. For every major
medium of communication that began as the product of human imagination, one can tell a story
of how megacorporations eventually came to dominate the media and to determine how human
beings interacted with them. Google, Apple, Amazon, and their less prominent peers have
tremendous power and access to data of a size and scale that not even the most imaginative
science fiction writers (H. G. Wells, Jules Verne) could have anticipated a century ago. It would
be a brave person who would predict that the fate of corporation-devised and -sold apps would
be different; and it would be a naive person who would simply assume that such power will
inevitably be dedicated to benign uses.
28
We must also acknowledge the possibility of powers even greater than those associated with
megacorporations and powerful political entities. As we come to understand better our genetic
and neurological nature, there will be attempts to reconfigure our species, more or less
aggressively, and to usher in a so-called singularity, in which the lines between computer and
brain, machine and human, mortality and immortality become blurred or blended or disappear
altogether. As more than one wag has put it, "The question is no longer, 'Are computers like us?'
but rather, 'Are we like computers?'" To the extent that these impulses are realized, human
tendencies to resist or transcend apps will evaporate. Just as surely as the reach of Big Brother in
1984 or the programming of Alex's brain in A Clockwork Orange, apps will come to control our
lives.
29
With essayist Christine Rosen, we worry about the "ultimate efficiency, having one's needs and
desires foreseen and the vicissitudes of future possible experiences controlled." With poet Allen
Tate, we spurn a world in which "we no longer ask 'is it right,' we ask 'does it work?'"
As authors, we get the privilege of last words. For ourselves, and for those who come after us as
well, we desire a world where all human beings have a chance to create their own answers,
indeed, to raise their own questions, and to approach them in ways that are their own.
Use as prompt:
"Always on" is exhausting
Douglas Rushkoff, Program or Be Programmed: Ten Commands for the Digital Age (2010)
As Internet connections grow faster, fatter, and freer, however, we are more likely to
adopt an "always on" approach to media. Our broadband connections, whether in our
homes or in our phones, keep our applications on, updating, and ready at every moment.
Anytime anyone or anything wants to message, email, tweet, update, notify, or alert us,
something dings on our desktop or vibrates in our pocket. Our devices and, by extension, our
nervous systems are now attached to the entire online universe, all the time. Is that my phone
vibrating? We scramble to keep up with the never-ending inflow of demands and commands,
under the false premise that moving faster will allow us to get out from under the endless stream
of pings for our attention. For answering email and responding to texts or tweets only
exacerbates the problem by leading to more responses to our responses, and so on. We strive to
multitask, attempting to give partial attention to more than one thing at a time, when all we
really do is move as quickly as possible from one task to another. No matter how proficient we
think we are at multitasking, studies show our ability to accomplish tasks accurately and
completely only diminishes the more we try to do at the same time. This is not the fault of digital
technology, but the way we use it. The results aren't pretty. Instead of becoming empowered and
aware, we become frazzled and exhausted. We have no time to make considered responses,
feeling instead obligated to reply to every incoming message on impulse. We reduce the length
and complexity of our responses from paragraphs to sentences to txts, making almost
everything we transmit sound like orders barked over a walkie-talkie in a war zone. Everything
must happen right away or, better, now. There is no later.
Technology and the overworked American
David Shenk, Data Smog: Surviving the Information Glut (1997)
Harvard economist Juliet Shor reports that technology was predicted to save us from excessively
long hours. But, as Shor points out in her book, The Overworked American, we are working 164
hours more per year than we did 20 years ago. "Technology," reports Shor, "reduces the amount
of time it takes to do any one task but also leads to the expansion of tasks that people are
expected to do. It's what happens to people when they get computers and faxes and cellular
telephones and all of the new technologies that are coming out today."
6
This contemporary mania with our own self-expression is what two leading American
psychologists, Dr. Jean Twenge and Dr. Keith Campbell, have described as the narcissism
epidemic, a self-promotional madness driven, these two psychologists say, by our need to
continually manufacture our own fame to the world. The Silicon Valley-based psychiatrist, Dr.
Elias Aboujaoude, whose 2011 book, Virtually You, charts the rise of what he calls the
self-absorbed online Narcissus, shares Twenge and Campbell's pessimism. The Internet, Dr.
Aboujaoude notes, gives narcissists the opportunity to fall in love with themselves all over again,
thereby creating an online world of infinite self-promotion and shallow web relationships. Many
other writers share Aboujaoude's concerns. The cultural historian Neal Gabler says that we have
all become information narcissists utterly disinterested in anything outside ourselves. Social
network culture medicates our need for self-esteem, adds best-selling author Neil Strauss, by
pandering to win followers.
7
Twenge, Campbell, Aboujaoude, Strauss and Franzen are all correct about this endless loop of
great exhibitionism, an attention economy that, not uncoincidentally, combines a libertarian
insistence on unrestrained individual freedom with the cult of the social. It's a public exhibition of
self-love displayed in an online looking glass that New Atlantis senior editor Christine Rosen
identifies as the new narcissism and New York Times columnist Ross Douthat calls a desperate
adolescent narcissism. Everything, from communications, commerce and culture to gaming,
government and gambling, is going social. As David Brooks, Douthat's colleague at The Times,
adds, achievement is redefined as the ability to attract attention. All we, as individuals, want to
do on the network, it seems, is share our reputations, our travel itineraries, our war plans, our
professional credentials, our illnesses, our confessions, photographs of our latest meal, our
sexual habits, of course, even our exact whereabouts, with our thousands of online friends.
8
Zuckerberg's five-year plan is to eliminate loneliness. He wants to create a world in which we will
never have to be alone again because we will always be connected to our online friends in
everything we do, spewing huge amounts of our own personal data as we do it. "Facebook wants
to populate the wilderness, tame the howling mob and turn the lonely, antisocial world of
random chance into a friendly world, a serendipitous world," wrote Time's Lev Grossman.
9
Facebook, with its members investing over 700 billion minutes of their time per month on the
network, was the world's most visited Web site in 2010, making up 9 percent of all online traffic.
By early 2011, 57 percent of all online Americans were logging onto Facebook at least once a
day, with 51 percent of all Americans over twelve years old having an account on the social
network and 38 percent of all the Internet's sharing referral traffic emanating from Zuckerberg's
creation. By September 2011, more than 500 million people were logging onto Facebook each
day, with its then almost 800 million active users being larger than the entire Internet was in
2004. Facebook is becoming mankind's own image.
10
Whether we like it or not, twenty-first-century life is increasingly being lived in public. Four out of
five college admissions offices, for example, are looking up applicants' Facebook profiles before
making a decision on whether to accept them. A February 2011 human resources survey
suggested that almost half of HR managers believed it was likely that our social network profiles
are replacing our resumes as the core way for potential employers to evaluate us. The New York
Times reports that some firms have even begun using surveillance services like Social
Intelligence, which can legally store data for up to seven years, to collect social media
information about prospective employees before giving them jobs. "In today's executive search
market, if you're not on LinkedIn, you don't exist," one job search expert told The Wall Street
Journal in June 2011. LinkedIn now even enables its users to submit their profiles as resumes,
thus inspiring one personal branding guru to announce that the 100-million-member
professional network is about to put Job Boards (and Resumes) out of business.
11
Writing in 1948, Orwell imagined this. "In principle a Party member had no spare time, and was
never alone except in bed," Orwell wrote in Nineteen Eighty-Four. "It was assumed that when he
was not working, eating, or sleeping he would be taking part in some kind of communal
recreation: to do anything that suggested a taste for solitude, even to go for a walk by yourself,
was always slightly dangerous. There was a neologism for it in Newspeak: Ownlife, it was called,
meaning individualism and eccentricity." And there was another neologism in Newspeak,
facecrime, that Orwell coined. "It was terribly dangerous to let your thoughts wander when you
were in any public place or within range of a telescreen," he wrote. "The smallest thing could
give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself,
anything that carried with it the suggestion of abnormality, of having something to hide. In any
case, to wear an improper expression on your face (to look incredulous when a victory was
announced, for example) was itself a punishable offence. There was even a word for it in
Newspeak: facecrime, it was called." Newspeak's facecrime has been turned on its head in
our world of endless tweets, check-ins and status updates. In Nineteen Eighty-Four, it was a
crime to express yourself; today, it is becoming unfashionable, perhaps even socially
unacceptable, not to express oneself.
12
As best-selling digital evangelists Don Tapscott and Anthony D. Williams argue in their 2010 book
MacroWikinomics, today's Internet represents a turning point in history. We are entering what
they call the age of networked intelligence, a titanic historic shift, they pronounce,
equivalent to the birth of the modern nation-state or the Renaissance. Mark Pincus's always-on
social dial tone, Tapscott and Williams argue, represents a platform for the networking of human
minds that will enable us to collaborate and to learn collectively. Echoing Mark Zuckerberg's
five-year vision of social media's revolutionary impact on the broader economy, they predict that
politics, education, energy, banking, healthcare and corporate life will all be transformed by what
social utopians embrace as the openness and sharing of the networked intelligence age.
13
Botsman, Rogers, Tapscott, Williams and the rest of the social media quixotics are wrong that the
Internet is resulting in a new age of networked intelligence. In fact, the reverse may well be
true. From Zuckerberg's Facebook, Hoffman's LinkedIn and Stone's Twitter to SocialEyes,
SocialCam, foursquare, ImageSocial, Instagram, LivingSocial and the myriad of other digital
drivers of John Doerr's third great wave, the network is creating more social conformity and herd
behavior. "Men aren't sheep," argued John Stuart Mill, the nineteenth century's greatest critic of
Benthamite utilitarianism, in his 1859 defense of individual freedom, On Liberty. Yet on the social
network, we seem to be thinking and behaving more and more like sheep, making what cultural
critic Neil Strauss describes as the need to belong, rather than genuine nonconformity, the
rule. While the Web has enabled new forms of collective action, it has also enabled new kinds of
collective stupidity, argues Jonah Lehrer, a contributing editor to Wired magazine and a
bestselling writer on both neuroscience and psychology. Groupthink is now more widespread,
as we cope with the excess of available information by outsourcing our beliefs to celebrities,
pundits and Facebook friends.
14
The more friends you have on Twitter or Facebook, therefore, the more potentially valuable you
become in terms of getting your friends to buy or do things. We "manage" our friends in the
social networking world in the same way as we manage our assets in the financial
marketplace. There is something Orwellian about the management-speak on social networking
sites, notes the ever-perceptive Christine Rosen, who adds that such terminology encourages
the "bureaucratization of friendship."
15
In our digital age, we are, ironically, becoming more divided than united, more unequal than
equal, more anxious than happy, lonelier rather than more socially connected. A November 2009
Pew Research report about "Social Isolation and New Technology," for example, found that
members of networks like Facebook, Twitter, MySpace and LinkedIn are 26 percent less likely to
spend time with their neighbors (thus, ironically, creating the need for social networks like
Nextdoor.com and Yatown that connect local communities). A 2007 Brigham Young University
research study, which analyzed 184 social media users, concluded that the heaviest networkers
feel less socially involved with the community around them. And a meta-analysis of seventy-two
separate studies conducted between 1979 and 2009 by the University of Michigan's Institute
for Social Research showed that contemporary American college students are 40 percent less
empathetic than their counterparts in the 1980s and 1990s. Even our tweets are becoming
sadder, with a study made by scientists from the University of Vermont of 63 million Twitter users
between 2009 and 2011 showing that happiness is going downhill. Most troubling of all, a
fifteen-year study of 300 social media subjects by Professor Sherry Turkle, the director of MIT's
Initiative on Technology and the Self, showed that perpetual networking activity is actually
undermining many parents' relationships with their children. "Technology proposes itself as the
architect of our intimacies," Turkle says about the digital architecture in which we are now all
living. But the truth, her decade and a half of research reveals, is quite the reverse. Technology,
she finds, has become our phantom limb, particularly for young people who, Turkle finds, are
sending up to 6,000 social media announcements a day and who have never either written or
received a handwritten letter. No wonder, then, that teens have not only stopped using email,
but also no longer use the telephone; both are too intimate, too private for a digital generation
that uses texting as a protection for their feelings.
16
In describing what she calls the practice of the "protean self," MIT's Turkle argues that we have
moved from multitasking to "multi-lifing." But while we are forever cultivating our collaborative
self, she argues, what is being lost is our experience of being alone and privately reflecting on
our emotions. The end result, Turkle explains, is a perpetual juvenile, somebody she calls a
"tethered child," the type of person who, like one of Turkle's subjects in her study, believes that
"if Facebook were deleted, I'd be deleted too."
17
So what is the real value of social media in repressive regimes? "Twitter is a wonderful tool for
secret policemen to find revolutionaries," Friedman told me. His analysis reflects the so-called
Morozov Principle of Stanford University scholar Evgeny Morozov, whose 2010 book, The Net
Delusion: The Dark Side of Internet Freedom, argues that social media tools are being used by
secret policemen in undemocratic states like Iran, Syria, and China to spy on dissidents. As
Morozov told me when he appeared on my TechCrunchTV show in January 2011, these
authoritarian governments are using the Internet in classic Benthamite fashion, relying on social
networks to monitor the behavior, activities and thoughts of their own citizens. In China,
Thailand, and Iran, therefore, the use of Facebook can literally be a facecrime, and the Internet's
architecture has become a vast Inspection-House, a wonderful tool for secret policemen who no
longer even need to leave their desks to persecute their own people.
18
Not only is social media being used by repressive regimes or organizations to strengthen their
hold on power, but it is also compounding the ever-widening inequalities between the influencers
and the new digital masses. If identity is the new currency and reputation the new wealth of the
social media age, then today's hypervisible digital elite is becoming a tinier and tinier proportion
of the population. On Twitter, for example, only 0.05 percent of people have more than 10,000
followers, with 22.5 percent of users accounting for 90 percent of activity, thus reflecting the
increasingly unequal power structure of an attention economy in which the most valuable
currency is being heard above the noise. Monopolies are actually even more likely in highly
networked markets like the online world, wrote Wired editor-in-chief Chris Anderson. The
inequalities between rich and poor nodes are even more exaggerated in the wake of 2009's Great
Recession. "The people who use these [social media] tools are the ones with higher education,
not the tens of millions whose position in today's world has eroded so sharply," notes Time
magazine business columnist Zachary Karabell. "Social media contribute to economic bifurcation.
The irony is that social media widen the social divide, making it even harder for the have-nots
to navigate. They allow those with jobs to do them more effectively and companies that are
profiting to profit more. But so far, they have done little to aid those who are being left behind.
They are, in short, business as usual."
19
The problem is that our ubiquitous online culture of "free" means that every social media
company, from Facebook to Twitter to geolocation services like foursquare, Hitlist, and Plancast,
relies exclusively on advertising for its revenue. And it's information about us, James Gleick's
"vital principle," that is driving this advertising economy. As MoveOn.org president Eli Pariser,
another sceptic concerned about the real cost of all these "free" services, argues in his 2011
book The Filter Bubble, the race to know as much as possible about you has become the central
battle of the era for Internet giants like Google, Facebook, Apple and Microsoft.
20
"It is fundamentally impossible for a digital advertising business to care deeply about privacy,
because the user is the only asset it has to sell. Even if the founders and executives want to care
about privacy, at the end of the day, they can't: the economic incentives going the other
direction are just too powerful," Michael Fertik, the Silicon Valley-based CEO of Reputation.com,
a company dedicated to protecting our online privacy, told me. Fertik's argument is reiterated by
the media theorist and CNN columnist Douglas Rushkoff, who explains that rather than being
Facebook's customers, we are the product.
21
Bowling Alone syndrome, a reference to the communitarian theories of Harvard University
sociologist Robert Putnam, whose highly influential and best-selling Bowling Alone regards the
digital network as the solution to what he considers the crisis of local community. Writing in
2000, only a couple of years after @quixotic created the first social media business, Putnam
sees electronic media as the twenty-first-century means of reinventing community engagement.
"Let us find ways to ensure that by 2010 Americans will spend less leisure time sitting passively
alone in front of glowing screens and more time in active connection with our fellow citizens," he
argued with communitarian fervor. "Let us foster new forms of electronic entertainment and
communication that reinforce community engagement rather than forestalling it."
22
This intellectual obsession with the social, an obsession with sharing (what today, as the arc of
information flow bends toward ever greater connectivity, is fashionably called a "meme" but is,
in many ways, a virus), can be seen across many different academic disciplines. The concepts of
togetherness and sharing have acquired such religious significance that, in stark contrast with
the research of Oxford University's Baroness Susan Greenfield, some scientists are now
discovering its centrality in the genetic make-up of the human condition. One
"neuroeconomist," a certain Dr. Paul Zak from the California Institute of Technology, has
supposedly found that social networking activates the release of a generosity-trust chemical in
our brains. Larry Swanson and Richard Thompson from the University of Southern California are
even discovering that the brain resembles an interconnected community, thereby triggering
the ridiculous headline: "Brain works more like internet than top down company."
23
"The future is already here," William Gibson observed in 1993. "It's just unevenly distributed."
One version of the future, at least our social future, may have arrived, a handful of years after
Gibson first made this prescient remark, at the very end of the twentieth century.
24
This future is called a Super Sad True Love Story. It is imagined by satirist Gary Shteyngart, the
author of a creepy 2010 novel about a dystopian future in which we all own a chic little device
called an Apparat that quantifies and ranks the massive amounts of personal data being
generated by our real identities. Shteyngart explains his data dystopia in which we all live in
public: "Everyone has this device called the Apparat, which they wear either tucked into their
pocket or usually as a pendant. The moment you enter a room everyone judges you. So it has
what's called Rate Me Plus technology. So you're rated immediately. Everyone can chip in and
rate everyone else, and everyone does." When he appeared on my TechCrunchTV show in July
2011, Shteyngart described this world as "William Gibson land." It's a place where our
personalities are quantified in universally accessible, real-time lists akin to Internet reputation
networks like Hashable or Kred. Mystery, privacy and secrecy will have all been eliminated in this
transparent marketplace. Today's reputation stock market Empire Avenue will have replaced Wall
Street as the key exchange of value. It will be a pure reputation economy, a marketplace of
mirrors, a perfect data market in how others see us.
25
As John Stuart Mill argues in On Liberty, government exists to protect us from others rather than
from ourselves, and the reality, for better or worse, is that once a photo, an update or a tweet is
publicly published on the network, it becomes de facto public property. So, without wishing to
sound too much like the uber-glib Eric Schmidt, the only way to really protect one's own privacy
is by not publishing anything in the first place.
26
The European Union has been much more aggressive than the United States government in
pushing for privacy rights over social networks. On the all-important issue of online tracking by
social media companies, for example, European privacy regulators have been pushing to
establish an arrangement in which consumers could only be tracked if they actively opt in and
permit marketers to collect their personal data. Europeans have also been more aggressive in
pushing back against the leading Web 3.0 companies. In April 2011, for example, the Dutch
government threatened Google with fines of up to $1.4 million if it continued to ignore
data-protection demands associated with its Street View technology. Apple and Google face much
tighter regulation in Europe, with the EU classifying the location information that they have been
collecting from their smartphones as personal data. European Union data protection regulators
have aggressively scrutinized Facebook's May 2011 rollout of its facial recognition software that
reveals people's identities without their permission. EU justice commissioner Viviane Reding is
even pushing social networks to establish a "right to be forgotten" option that would allow
users to destroy data already published on the network. "I want to explicitly clarify that people
shall have the right, and not only the possibility, to withdraw their consent to data processing,"
Reding told the EU parliament in March 2011.
According to the executive editor of The New York Times, friendship has become a kind of drug on the Internet, the crack cocaine of our digital age. "Last week, my wife and I told our 13-year-old daughter she could join Facebook," confessed The New York Times' Bill Keller in May 2011. "Within a few hours she had accumulated 171 friends, and I felt a little as if I had passed my child a pipe of crystal meth." A June 2011 Pew Research Center study of over two thousand Americans reported that electronically networked people like Keller's daughter saw themselves as having more close friends than those of us (those "weirdo outcasts," according to one particularly vapid social media commentator) who aren't on Facebook or Twitter. The Pew report found that the typical Facebook user has 229 friends (including an average of 7 percent that they hadn't actually met) on Mark Zuckerberg's network and has more close relationships than the average American. But this June 2011 Pew study made no attempt to define or calibrate the idea of friendship, treating each one quantitatively, like a notch on a bedpost, and presenting Facebook and Twitter as, quite literally, the architects of our intimacies. What this survey failed to acknowledge is that human beings aren't simply computers, silicon-powered devices with infinitely expandable hard drives and memories, who can make more friends as a result of becoming more and more networked. So how many real friends should we have? And is there a ceiling to the number of friendships that we actually can have?
A couple of miles north of the Oxford Mal hotel sits the gray-bricked home of Oxford University's Institute of Cognitive and Evolutionary Anthropology. It is here, in the nondescript academic setting of a north Oxford suburb, that we find a man who has determined how many friends we really need. Professor Robin Dunbar, the director of this institute, is an anthropologist, evolutionary psychologist and authority on the behavior of primates, the biological order that includes monkeys, apes and humans. And he has become a social media theorist too, best known for formulating a theory of friendship dubbed "Dunbar's Number." "The big social revolution in the last few years has not been some great political event, but the way our social world has been redefined by social networking sites like Facebook, MySpace and Bebo," Dunbar says of his eponymous number. This social revolution, he says, attempts to break through the constraints of time and geography to enable uber-connected primates like @scobleizer to establish online friendships with tens of thousands of other wired primates. So why do primates have such big brains? Dunbar asks, rhetorically. Their large brains, he says, borrowing from a theory known as the Machiavellian intelligence hypothesis, are the result of the complex social world in which primates live. It's the complexity of their social relations, defined by their tangled and interdependent personal intimacies, Dunbar argues, that distinguishes primates from every other animal. And as the most successful and widely distributed member of the primate order, he goes on, humans' brains have evolved most fully of all because of the intricate complexity of our intense social bonds. Memory and forgetting are the keys to Dunbar's theory about human sociability. You'll remember that The New York Times' Paul Sullivan suggested that the Internet is like an elephant because it never forgets. But what really distinguishes primates from animals like elephants, Robin Dunbar explains, is that primates use their knowledge about the social world in which they live to form more complex alliances with each other than other animals do. Thus primates have a lot more to remember about their social intimacies than elephants, which may be one reason why humans forget things and elephants supposedly don't. For better or worse, nature hasn't come up with a version of Moore's Law that could double the size and memory capacity of our brains every two years. Thus, while our big brains are the result of our complex social relationships, they are still confined by their limited memories. And it's our biological inability to remember the intricate social details of large communities, Robin Dunbar explains, that limits our ability to make intimate friendships. We can only remember 150 individuals, Dunbar says, or only keep track of all the relationships involved in a community of 150. That is Dunbar's Number: our optimal social circle, for which we, as a species, are wired.
As I sat upstairs in The Jeremy Bentham nursing my beer and thinking about John Stuart Mill, what struck me is how acutely relevant On Liberty is today, in an age also being revolutionized by a pervasive connective technology. This is a world, according to Mark Zuckerberg, in which education, commerce, health and finance are all becoming social. It's a connected world defined by billions of smart devices, by real-time lynch mobs, by tens of thousands of people broadcasting details of a stranger's sex life, by the bureaucratization of friendship, by the group-think of small brothers, by the elimination of loneliness, and by the transformation of life itself into a voluntary Truman Show. Most of all, it's a world in which many of us have forgotten what it means to be human. "But here I fear I am becoming nostalgic," writes the novelist Zadie Smith, who along with Jonathan Franzen and Gary Shteyngart is amongst the most articulate contemporary critics of social media. "I am dreaming of a Web that caters to a person who no longer exists. A private person, a person who is a mystery, to the world and, which is more important, to herself. Person as mystery: This idea of personhood is certainly changing, perhaps has already changed."
Reading
We will read and discuss this in class.
It's Complicated: The Social Lives of Networked Teens
(danah boyd, 2014)
Homework video and reading
(discussion leader: y)
Current Events / iPad activity
(discussion leader: z)
Links:
(on Canvas lesson page)
What is social media's effect on parenting?
danah boyd
Parasocial Relationships
Clive Thompson, The New York Times (2008)
It is also possible, though, that this profusion of weak ties can become a problem. If you're reading daily updates from hundreds of people about whom they're dating and whether they're happy, it might, some critics worry, spread your emotional energy too thin, leaving less for true intimate relationships. Psychologists have long known that people can engage in parasocial relationships with fictional characters, like those on TV shows or in books, or with remote celebrities we read about in magazines. Parasocial relationships can use up some of the emotional space in our Dunbar number, crowding out real-life people. Danah Boyd, a fellow at Harvard's Berkman Center for Internet and Society who has studied social media for 10 years, published a paper this spring arguing that awareness tools like News Feed might be creating a whole new class of relationships that are nearly parasocial: peripheral people in our network whose intimate details we follow closely online, even while they, like Angelina Jolie, are basically unaware we exist.
The filter bubble diet
Eli Pariser, The Filter Bubble: What the Internet is Hiding From You (2011)
One of the best ways to understand how filters shape our individual experience is to think in terms of our information diet. As sociologist danah boyd said in a speech at the 2009 Web 2.0 Expo: "Our bodies are programmed to consume fat and sugars because they're rare in nature.... In the same way, we're biologically programmed to be attentive to things that stimulate: content that is gross, violent, or sexual and that gossip which is humiliating, embarrassing, or offensive. If we're not careful, we're going to develop the psychological equivalent of obesity. We'll find ourselves consuming content that is least beneficial for ourselves or society as a whole." Just as the factory farming system that produces and delivers our food shapes what we eat, the dynamics of our media shape what information we consume. Now we're quickly shifting toward a regimen chock-full of personally relevant information. And while that can be helpful, too much of a good thing can also cause real problems. Left to their own devices, personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.
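The "autopropaganda" loop Pariser describes can be made concrete with a toy ranking function (all data and names below are invented for illustration, not any real feed algorithm): stories are scored by how much their topics overlap with what the user has already clicked, so the familiar crowds out everything else.

```python
from collections import Counter

# Toy personalization filter: rank stories by overlap with topics the user
# has already clicked. A crude stand-in for the feedback loop in the text.

def rank(stories, click_history):
    seen = Counter(topic for s in click_history for topic in s["topics"])
    # Score = how familiar a story's topics already are to this user.
    return sorted(stories, key=lambda s: -sum(seen[t] for t in s["topics"]))

history = [{"topics": ["celebrity", "gossip"]},
           {"topics": ["gossip", "crime"]}]
stories = [{"title": "Budget hearing", "topics": ["politics"]},
           {"title": "Starlet scandal", "topics": ["celebrity", "gossip"]},
           {"title": "Heist trial", "topics": ["crime"]}]

for s in rank(stories, history):
    print(s["title"])
# The unfamiliar "Budget hearing" sinks to the bottom of the feed.
```

Run repeatedly, with each click feeding back into the history, a filter like this only ever narrows: the stimulating and familiar rises, the unfamiliar disappears.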
A Radio Frequency Identification tag (RFID, for short) can be read from a distance of a few feet. Because RFID data is transferred by radio waves rather than visible light, the tags need not be visible to be read, and the sensor need not be visible to do the reading. RFID tags are simple devices. They store a few dozen bits of information, usually unique to a particular tag. Most are passive devices, with no batteries, and are quite small. The RFID includes a tiny electronic chip and a small coil, which acts as a two-way antenna. A weak current flows through the coil when the RFID passes through an electromagnetic field, for example, from a scanner in the frame of a store, under the carpet, or in someone's hand. This feeble current is just strong enough to power the chip and induce it to transmit the identifying information. Because RFIDs are tiny and require no connected power source, they are easily hidden. They can be almost undetectable.
The Government and Email Surveillance
John Freeman, The Tyranny of Email (2009)
The scope of the United States National Security Agency's (NSA) listening capacities is truly awesome. As James Bamford describes in A Pretext for War, dozens of listening posts around the world each sweep in as many as two million phone calls, faxes, e-mail messages, and other types of communications per hour. Most alarming for many Americans is the fact that the communications companies are helping them. According to a federal statute called the Communications Assistance for Law Enforcement Act (CALEA), passed in 1994, communications companies must design their facilities so that their networks can be easily monitored. As Bamford explains in The Shadow Factory, it even requires the companies to install the eavesdropping devices themselves if necessary and then never reveal their existence.
about, the computer user never knows that his or her disk is being searched by the worm. With the general search, the police were breaking into a house and rummaging through private stuff. With the worm, it is a bit of computer code that does the breaking, and it can only see one thing. The code can't read private letters; it doesn't break down doors and it doesn't interfere with ordinary life. And the innocent have nothing to fear.
The worm is silent in a way that King George's troops were not. It searches perfectly and invisibly, discovering only the guilty. This difference complicates the constitutional question. The worm's behavior is like a generalized search in that it is a search without suspicion, but it is unlike a generalized search in that it creates no disruption of ordinary life. The framers of the Constitution did not distinguish between these two very different protections. It is we, instead, who must choose.
Can iPhone Data Be Searched?
Brian Chen, Always On (2011)
Imagine that a twenty-two-year-old Oakland man named Phil gets pulled over for running a stop sign. A police officer approaches the passenger window, determines that Phil looks suspicious and decides to arrest him, because running a stop sign is an arrestable offense. While patting down Phil, the officer finds a pack of cigarettes and an iPhone in his pockets. Under the "search-to-arrest" doctrine, the officer is entitled to open the pack of cigarettes and search it without a warrant or even probable cause to believe there is anything illegal inside. Further, under the same doctrine, the officer also has a right to a warrantless search of Phil's iPhone. The iPhone (and any smartphone, for that matter) can be considered the digital equivalent of a "closed container," which police officers can search thoroughly during an arrest, according to Adam Gershowitz, an associate professor at South Texas College. That's because even though society and technology have transformed dramatically over the past few decades, the Fourth Amendment, which guards citizens against unreasonable searches and seizures, has remained static.
This is not a new observation. For years courts have accepted digital information as evidence;
they see no conceptual difference between physical containers and gadgets containing data.
Before the iPhone, police officers used information retrieved from pagers and conventional cell
phones as admissible evidence against criminals. However, with media-rich, all-in-one portables
such as the iPhone, the situation changes tremendously. Simply by searching an iPhone, police
officers can rightfully gain access to a treasure trove of personal information. In addition to text
messages, contacts, and call histories, an iPhone holds far more pictures than could be stored on
a conventional cell phone and displays them in much clearer detail. Furthermore, the data contained inside third-party apps can potentially tell a person's life story.
1984 and Big Brother
James Gleick, What Just Happened: A Chronicle from the Information Frontier (2003)
For much of the 20th century, 1984 was a year that belonged to the future -- a strange, gray
future at that. Then it slid painlessly into the past, like any other year. Big Brother arrived and
settled in, though not at all in the way George Orwell had imagined. Underpinning Orwell's 1948
anti-utopia -- with its corruption of language and history, its never-ending nuclear arms race and
its totalitarianism of torture and brainwashing -- was the utter annihilation of privacy. Its single
technological innovation, in a world of pig iron and pneumatic tubes and broken elevators, was
the telescreen, transmitting the intimate sights and sounds of every home to the Thought Police.
BIG BROTHER IS WATCHING YOU. "You had to live," Orwell wrote, "in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized."
How might technology threaten the presumption that one is innocent until proven guilty?
How might technology compromise our constitutional protections against excessive punishment?
4/20 Day at the University of Colorado
Daniel Solove, The Future of Reputation (2008)
In the ordinary criminal justice process, a person is innocent until proven guilty. The world of
shaming works differently, as people are punished without a hearing. In one incident, the
University of Colorado used a website to post surveillance photos of students and other
individuals it wanted to identify for smoking marijuana on Farrand Field. It was long a tradition at the university for students to smoke pot on Farrand Field each year on April 20, a party called "420 Day." The university wanted to stamp out this tradition, so it created a website on which it posted pictures of 150 students captured in the act of smoking pot. According to the website:
The University is offering a reward for the identification of any of the individuals pictured below.
After reviewing the photos (click on a photo for a larger image), you may claim the reward by
following the directions below:
Contact the UCPD Operations section at (303) 492-8168
Provide the photo number and as much information as you have about the individual.
Provide your name and contact information.
If the identity is verified to be correct, you will be paid a $50 reward for every person
identified.
The reward will be paid to the first caller who identifies a person below; multiple rewards will not be paid for individuals listed below.
The website consisted of a grid of thumbnail photos that people could click on to get larger, high-resolution images. Pictures of students who were identified were stamped with the word IDENTIFIED in large capital letters. The Farrand Field website purported to investigate trespassers on the field. But it really appeared to be an attempt to use shaming to try to snuff out the embers of 420 Day. The Farrand Field website exposed students engaging in a minor infraction to being forever memorialized as drug users, and it did so even before the students were convicted of any wrongdoing. Some of the students might have been smoking cigarettes; some might have just been there with friends. But their inclusion on the website implicated them.
The American Coalition of Life Activists
Daniel Solove, The Future of Reputation (2008)
One of the earliest attempts at Internet vigilantism was the website known as the Nuremberg
Files. Created in 1997 by Neal Horsley, the website listed the names and personal information of
abortion doctors and their families. This was part of a campaign by a group known as the
American Coalition of Life Activists (ACLA) to
terrorize abortion doctors. The website included data on more than two hundred individuals,
including names, addresses, photographs, driver's license numbers, and information about
family members, such as the schools their children attended. The name of the site alluded to the
Nuremberg trials of Nazi officials following World War II. The site listed doctors who had been
wounded by antiabortion activists in grey and those killed with a line through them. Another part
of the website listed the names of clinic owners and workers, and spouses of abortion doctors.
After Horsley created the website in January 1997, two abortion doctors were shot at their homes
that year. In 1998 an abortion clinic in Alabama was bombed and another doctor was killed by
sniper fire at his home in New York. Shortly afterward, a strikethrough was placed through his
name on the Nuremberg Files website.
Planned Parenthood and a group of doctors sued, contending that the website caused them to
live in fear, to require police protection, and to wear bulletproof vests. The case went to trial in
1999. One doctor stated that he switched his driving route to work and rode in a separate car
from the rest of his family. "Every time I get a package, it makes me nervous," a doctor declared. "It's a creepy thing to have to live with, thinking every time, Is this something I ordered or is it a bomb?" One doctor began to wear wigs to conceal herself in public. A jury awarded the doctors
more than one hundred million dollars in damages. The case was appealed, with Horsley and the
ACLA contending that the verdict violated their right to free speech. The court of appeals affirmed, concluding that the website involved threats of violence with the intent to intimidate rather than articulating a position to debate.
Gae-ttong-nyeo
Which of these parties presents the greatest threat to personal privacy: corporations, our friends/family, or accidental data leaks?
Corporations and Email Surveillance
John Freeman, The Tyranny of Email (2009)
Over 35 percent of the workforce has its Internet use or e-mail under constant surveillance. Employers spend hundreds of millions of dollars each year on employee-monitoring software. Thanks to the Sarbanes-Oxley Act of 2002 and other regulations, publicly traded companies are required to archive their e-mail. Europe still has strong privacy protections for its employees, but many U.S. employers in the private sector, as long as they have an established policy and have put it into writing, can keep a close eye on what their employees send and receive, and where they point their browsers. "Some companies say they do it to control the information that employees send through the corporate network," wrote Matt Villano in The New York Times. "Other companies do it to make sure employees stay on task, or as a measure of network security. Other companies monitor e-mail to see how employees are communicating with customers."
Your Partner and Email Surveillance
John Freeman, The Tyranny of Email (2009)
Lovers and spouses do it, too. A survey done in Oxford revealed that one in five people had spied on their partner's e-mails or texts. Cheaters are constantly caught. "Spurned lovers steal each other's smartphones," wrote Brad Stone. "Suspicious spouses hack into each other's e-mail accounts. They load surveillance software onto the family PC, sometimes discovering shocking infidelities." In one case, Stone described a woman who was convinced her husband was straying; he was far too obsessed with his BlackBerry. On his birthday she drew him a bubble bath and rifled through his handheld while he was soaking, discovering that he did have a bit on the side and planned to meet her that night. All this evidence gleaned from glowing devices winds up in divorce proceedings, where the electronic paper trail becomes the knife you stick in your former partner's back. "I do not like to put things on e-mail," said one divorce lawyer. "There's no way it's private. Nothing is fully protected once you hit the send button."
Oops
Andrew Keen, The Cult of the Amateur (2008)
On August 6, 2006, AOL leaked the search data of 658,000 people. Critics immediately dubbed this information leak "Data Valdez," after the 1989 Exxon Valdez oil tanker spill. Twenty-three million of the AOL users' most private thoughts, on everything from abortions and killing one's spouse to bestiality and pedophilia, were spilled on the Internet to the world without their knowledge or permission. It was the equivalent of the Catholic Church mailing out 658,000 confessions to its worldwide parishioners. Or the KGB, the Soviet secret police, throwing open their surveillance files and broadcasting them on national television. The information in these AOL files is a twenty-first-century version of Notes from Underground, replete with information that reveals us at our most vulnerable, our most private, our most shameful, our most human. They include every imaginable query, from "how to kill your wife" and "I want revenge for my wife" to "losing your virginity," "can you still be pregnant even though your period came?" and "can you not get pregnant by having sex without a condom?" "My goodness, it's my whole personal life," a sixty-two-year-old widow from Georgia told the New York Times, horrified, when she learned that her personal life had been splayed across the Internet. "I had no idea somebody was looking over my shoulder."
Self-disclosure => Self Awareness?
Clive Thompson, The New York Times (2008)
It is easy to become unsettled by privacy-eroding aspects of awareness tools. But there is another quite different result of all this incessant updating: a culture of people who know much more about themselves. Many of the avid Twitterers, Flickrers and Facebook users I interviewed described an unexpected side effect of constant self-disclosure. The act of stopping several times a day to observe what you're feeling or thinking can become, after weeks and weeks, a sort of philosophical act. It's like the Greek dictum to "know thyself," or the therapeutic concept of mindfulness. (Indeed, the question that floats eternally at the top of Twitter's Web site, "What are you doing?", can come to seem existentially freighted. What are you doing?) Having an audience can make the self-reflection even more acute, since, as my interviewees noted, they're trying to describe their activities in a way that is not only accurate but also interesting to others: the status update as a literary form. Laura Fitton, the social-media consultant, argues that her constant status updating has made her a happier person, a calmer person, because the process of, say, describing a horrid morning at work forces her to look at it objectively. "It drags you out of your own head," she added. In an age of awareness, perhaps the person you see most clearly is yourself.
New Risks
Evgeny Morozov, bostonreview.net (2009)
From a national security perspective, cyber-attacks matter in two ways. First, because the back-end infrastructure underlying our economy (national and global) is now digitized, it is subject to new risks. Fifty years ago it would have been hard, perhaps impossible short of nuclear attack, to destroy a significant chunk of the U.S. economy in a matter of seconds; today all it takes is figuring out a way to briefly disable the computer systems that run Visa, MasterCard, and American Express. Fortunately, such massive disruption is unlikely to happen anytime soon. Of course there is already plenty of petty cyber-crime, some of it involving stolen credit card numbers. Much of it, however, is due to low cyber-security awareness by end users (you and me), rather than banks or credit card companies.
Second, a great deal of internal government communication flows across computer networks, and hostile and not-so-hostile parties are understandably interested in what is being said. Moreover, data that are just sitting on one's computer are fair game, too, as long as the computer has a network connection or a USB port. Despite the "cyber" prefix, however, the basic risks are strikingly similar to those of the analog age. Espionage has been around for centuries, and there is very little we can do to protect ourselves beyond using stronger encryption techniques and exercising more caution in our choices of passwords and Wi-Fi connections.
Passive Privacy vs. Aggressive Privacy
James Gleick, What Just Happened: A Chronicle from the Information Frontier (2003)
Passive privacy is the kind elegantly described by the Fourth Amendment: "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures." We do have a lot of papers and effects these days.
Aggressive privacy implies much more. Telephone regulatory commissions have listened to arguments that people have a right to remain anonymous, hiding their own numbers when placing telephone calls. On the Internet, surprising numbers of users insist on a right to hide behind false names while engaging in verbal harassment or slander.
Traceable Anonymity
Daniel Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security (2011)
One way to strike a balance is to enforce traceable anonymity. In other words, we preserve the right for people to speak anonymously, but in the event that one causes harm to another, we've preserved a way to trace who the culprit is. A harmed individual can get a court order to obtain the identity of an anonymous speaker only after demonstrating genuine harm and the need to know who caused that harm.
Traceable anonymity is for the most part what currently exists on the Internet. Many people use the term "anonymity" rather imprecisely, to refer to both anonymous speech (no name or identifier attached) and pseudonymous speech (using a pen name).
Suppose you write an anonymous comment on my blog saying something bad about me. At a minimum, I will know the IP address of the computer you posted from. I might even have information about the organization that assigned you your IP address. Thus I will know your ISP or the company you work for and the city you were in when you posted. This is how Brandt traced the Seigenthaler defamer. If you post from work, your employer has information about which specific computer your post came from, and the comment may be traced back to your office computer. If you post from home, your ISP can connect your IP address to your account information. Thus even when you're anonymous, you can be tracked down.
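The tracing step Solove describes can be sketched with Python's standard `ipaddress` module. The allocation table below is invented example data (using documentation-reserved address blocks), not real ISP assignments: a blog log records only an IP, but matching it against who was assigned that block narrows the "anonymous" poster to an organization and a city.

```python
import ipaddress

ALLOCATIONS = {               # invented example data, not real assignments
    "198.51.100.0/24": ("Example Cable ISP", "Nashville, TN"),
    "203.0.113.0/24":  ("Example University", "Boston, MA"),
}

def trace(ip: str):
    """Map a logged IP address to the organization assigned its block."""
    addr = ipaddress.ip_address(ip)
    for block, owner in ALLOCATIONS.items():
        if addr in ipaddress.ip_network(block):
            return owner
    return None

comment_log = {"author": "anonymous", "ip": "203.0.113.77"}
print(trace(comment_log["ip"]))   # ('Example University', 'Boston, MA')
```

In practice this lookup is what WHOIS databases provide; the final step, tying the address to a named account holder, is the one that requires the ISP's cooperation or a court order.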
No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State
Give us 14 images of someone and we'll identify that person with 95% accuracy
Eli Pariser, The Filter Bubble: What the Internet is Hiding from You (2011)
"Give us 14 images of you," Google's CEO told a crowd of technologists at the Techonomy Conference in 2010, "and we can find other images of you with ninety-five percent accuracy." Sooner or later, mass face recognition, perhaps even in real time, which would allow for recognition on security and video feeds, will roll out. Facial recognition is especially significant because it'll create a kind of privacy discontinuity. We're used to a public semi-anonymity: while we know we may be spotted in a club or on the street, it's unlikely that we will be. But as security-camera and camera-phone pictures become searchable by face, that expectation will slip away.
Police departments around the nation soon to have facial recognition systems
Farhad Manjoo, slate.com (2011)
According to the Wall Street Journal, police departments across the nation will soon adopt handheld facial-recognition systems that will let them identify people with a snapshot. These new capabilities are made possible by BI2 Technologies, a Massachusetts company that has developed a small device that attaches to officers' iPhones. The police departments who spoke to the Journal said they plan to use the device only when officers suspect criminal activity and have no other way to identify a person; for instance, when they stop a driver who isn't carrying her license. Law enforcement officials also seemed wary about civil liberties concerns. Is snapping someone's photo from five feet away considered a search? Courts haven't decided the issue, but sheriffs who spoke to the paper say they plan to exercise caution. Face-scanning has an obvious advantage over fingerprints: It works from far away. Bunch of guys loitering on the corner? Scantily clad woman hanging around that run-down motel? Two dudes who look like they're smoking a funny-looking cigarette? Why not snap them all just to make sure they're on the up-and-up?
Facial recognition for everyone
Farhad Manjoo, slate.com (2011)
In the coming years, if not months, we'll see a slew of apps that allow your friends and neighbors to snap your face and get your name and other information you've put online. This isn't a theoretical worry; the technology exists, now, to do this sort of thing crudely, and the only thing stopping companies from deploying it widely is a fear of public outcry. That fear won't last long. Face recognition for everyone is coming. Get used to it. What's changed in the last decade? Three things. First, computers have gotten better at recognizing faces. The technology works by analyzing dozens of different features, such as the distance between your eyes and the width of your nose, that remain the same across photographs. As computers have gotten faster and digital photography has gotten better, face recognition has filtered down to consumer photo software. Another major factor that augurs the face-recognition era is that we've become accustomed to ubiquitous photography. Now that we all carry cameras everywhere, it no longer seems odd when someone points a lens in your direction; you probably don't even notice it.
Finally, there's Facebook. Ten years ago we were worried about authorities building a worldwide database of our faces. Now we're all posting pictures, and tagging names to pictures, at a furious rate; according to Facebook, people add 100 million names to faces on Facebook every day. The face-recognition tools available to law enforcement agencies will match you against government databases (the DMV or passport database, or the FBI's most-wanted list), but the technology available to consumers will be able to do just as well by matching your face to online snapshots. The government couldn't have built a better facial database if it tried.
Digital Natives) experience greater difficulty curbing their impulses online than they do in real-space social situations. Part of the issue is that there is a time delay between sending an e-mail
and getting one back. The absence of an authority figure in an unmediated space empowers
people to act on impulse.
Disinhibition, continued
John Freeman, The Tyranny of Email (2009)
Flaming can be induced in some people with alarming ease. Consider an experiment, reported in
2002 in The Journal of Language and Social Psychology, in which pairs of college students
strangerswere put in separate booths to get to know each other better by exchanging
messages in a simulated online chat room. While coming and going into the lab, the students
were well behaved. But the experimenter was stunned to see the messages many of the
students sent. About 20 percent of the e-mail conversations immediately became outrageously
lewd or simply rude. Psychologists call this behavior disinhibition: a filter drops, and we write
things we probably wouldn't say to another in person, at least not after a brief acquaintance. No
environment induces it quite as easily as computer-mediated communication. Indeed, the PC
may have extended the human mind, but it's missing a few key human circuits that modulate
social interaction. Neurologists now know that many of the key mechanisms of communication
reside in the prefrontal cortex of the human brain. "These circuits instantaneously monitor
ourselves and the other person during a live interaction," wrote Daniel Goleman on
www.edge.org, "and automatically guide our responses so they are appropriate and smooth."
One of the key tasks of these circuits is inhibiting impulses for actions that would be rude or
simply inappropriate, or outright dangerous.
Anonymity makes lying easier
Daniel Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security (2011)
When we talk about others, we affect not only their reputation but ours as well. If a person
gossips about inappropriate things, betrays confidences, spreads false rumors and lies, then her
own reputation is likely to suffer. People will view the person as untrustworthy and malicious.
They might no longer share secrets with the person. They might stop believing what the person
says. As U.S. Supreme Court Justice Antonin Scalia observed, anonymity can make lying easier;
and the identification of speakers can help significantly in deterring them from spreading false
rumors and can allow us to locate and punish the source of such rumors.
Confronting Your Inner Troll
Jaron Lanier, You Are Not a Gadget: A Manifesto (2010)
Troll is a term for an anonymous person who is abusive in an online environment. It would be
nice to believe that there is only a minute troll population living among us. But in fact, a great
many people have experienced being drawn into nasty exchanges online. I have tried to learn to
be aware of the troll within myself. I notice that I can suddenly become relieved when someone
else in an online exchange is getting pounded or humiliated, because that means I'm safe for the
moment. If someone else's video is being ridiculed on YouTube, then mine is temporarily
protected. But that also means I'm complicit in a mob dynamic. Have I ever planted a seed of
mob-beckoning ridicule in order to guide the mob to a target other than myself? Yes, I have,
though I shouldn't have. I observe others doing that very thing routinely in anonymous online
meeting places. I've also found that I can be drawn into ridiculous pissing matches online in
ways that just wouldn't happen otherwise, and I've never noticed any benefit. There is never a
lesson learned, or a catharsis of victory or defeat. If you win anonymously, no one knows, and if
you lose, you just change your pseudonym and start over, without having modified your point of
view one bit.
Online Anonymity and Photos
The new possibilities of photographing people in both public and more intimate situations,
coupled with more or less immediately posting such photographs and/or videos to a forum such
as a social networking site or more public webpage, means that people are now more vulnerable
to violations of privacy. If privacy can be defined as the capacity to control information about
oneself, the new ability of others to record and quickly distribute potentially embarrassing
information about oneself thereby decreases one's control over such information (e.g. in the
form of permission to take a photograph, much less permission to distribute the photograph in a
semi-public or public form).
Charles Ess, Digital Media Ethics (2010)
Eric Holder: The Justice Department could strike deal with Edward Snowden
Michael Isikoff (Yahoo! News, 2015)
https://www.yahoo.com/politics/eric-holder-the-justice-department-could-strike123393663066.html
Former Attorney General Eric Holder said today that a possibility exists for the Justice
Department to cut a deal with former NSA contractor Edward Snowden that would allow him to
return to the United States from Moscow.
In an interview with Yahoo News, Holder said "we are in a different place as a result of the
Snowden disclosures" and that his actions "spurred a necessary debate" that prompted
President Obama and Congress to change policies on the bulk collection of phone records of
American citizens.
Asked if that meant the Justice Department might now be open to a plea bargain that allows
Snowden to return from his self-imposed exile in Moscow, Holder replied: "I certainly think there
could be a basis for a resolution that everybody could ultimately be satisfied with. I think the
possibility exists."
Holder's comments came as he began a new job as a private lawyer at Covington & Burling, the
elite Washington law firm where he worked before serving as the nation's top law enforcement
officer from February 2009 until last April.
In that capacity, Holder presided over an unprecedented crackdown on government leakers,
including the filing of a June 2013 criminal complaint against Snowden, charging him with three
felony violations of the Espionage Act for turning over tens of thousands of government
documents to journalists.
Holder had previously said in a January 2014 interview with MSNBC that the U.S. would be
willing to engage in conversation with Snowden and his lawyers were he willing to return to the
United States to face the charges, but ruled out any granting of clemency.
But his remarks to Yahoo News go further than any current or former Obama administration
official in suggesting that Snowden's disclosures had a positive impact and that the
administration might be open to a negotiated plea that the self-described whistleblower could
accept, according to his lawyer Ben Wizner.
"The former attorney general's recognition that Snowden's actions led to meaningful changes is
welcome," said Wizner. "This is significant... I don't think we've seen this kind of respect from
anybody at a Cabinet level before."
Holder declined to discuss what the outlines of a possible deal might consist of, saying that as
the former attorney general, it would not be appropriate for him to discuss it.
It's also not clear whether Holder's comments signal a shift in Obama administration attitudes
that could result in a resolution of the charges against Snowden. Melanie Newman, chief
spokeswoman for Attorney General Loretta Lynch, Holder's successor, immediately shot down
the idea that the Justice Department was softening its stance on Snowden.
"This is an ongoing case, so I am not going to get into specific details, but I can say our position
regarding bringing Edward Snowden back to the United States to face charges has not changed,"
she said in an email.
Three sources familiar with informal discussions of Snowden's case told Yahoo News that one top
U.S. intelligence official, Robert Litt, the chief counsel to Director of National Intelligence James
Clapper, recently privately floated the idea that the government might be open to a plea bargain
in which Snowden returns to the United States, pleads guilty to one felony count and receives a
prison sentence of three to five years in exchange for full cooperation with the government.
Litt declined to comment. A source close to Litt said any comments he made were personal and
did not represent the position of the U.S. government. The source also said Litt has made clear to
Snowden's representatives that nothing is going to happen unless he comes in and moves off
this idea: "I'm entitled to a medal."
But Wizner, Snowden's lawyer, said any felony plea by Snowden that results in prison time would
be unacceptable to his client. "Our position is he should not be reporting to prison as a felon and
losing his civil rights as a result of his act of conscience," he said.
Moreover, any suggestion of leniency toward Snowden would likely run into strong political
opposition in Congress as well as fierce resistance from hard-liners in the intelligence community
who remain outraged over his wholesale disclosure of highly classified government documents.
Those feelings have, in some ways, been exacerbated by Snowden's worldwide celebrity that
recently prompted him to enter into an arrangement with a speakers bureau that has allowed
him to give paid talks to worldwide audiences via Skype from his apartment in Moscow.
"I'm quite stunned that we would be considering any return of Snowden to this country other
than to meet a jury of his peers, period," said Michael Hayden, former director of both the NSA
and CIA under President George W. Bush, when asked about Holder's comments.
"What Snowden did, however, was the greatest hemorrhaging of legitimate American secrets in
the history of the republic, no question about it," Hayden added.
Whatever happens, Snowden's legal fate won't be in Holder's hands. In the interview, he said he
planned to concentrate on giving strategic advice to corporate clients at Covington but no
lobbying while also engaging in significant pro bono work, including starting a foundation to
promote issues such as criminal justice reform.
Holder also said he has already had interactions with Hillary Clinton's presidential campaign
and expects to be helpful, including possibly speaking at campaign events and providing advice.
"That will be up to the campaign," he said. "Whatever the nominee wants."
In-class
To this, the Web realist has a number of responses: Denying that the changes are real, that they are important, or that they
are due to the Web.
When a dystopian points to a bad effect of the Web, the utopian denies the truth of the value claim, its inevitability, or its
importance:
Dystopian: The Web has made pornography available to every schoolchild!
Utopian: It is the responsibility of parents to make sure their kids are using child-safe filters. Besides, viewing
pornography may weaken our unhealthy anti-sexual attitudes. Besides, greater access to porn is just one effect of
the Web; it's brought greater access to literature, art, science...
The realist wants to bring the argument squarely within the realm of facts. Facts can, of course, resolve some disputes. But
facts are unlikely to settle the overall question of the Web's difference because the utopians, dystopians and realists are
probably operating from different views of history, and the framing of history also frames facts.
Many utopians think the Web has uncanny power because they are McLuhanites who think media transform institutions and
even consciousness. The McLuhanites' belief in the shaping power of media leads them to a rhetoric of "not only": Not only
did the printing press enable the spread of literacy, it led to our reliance on experts. The next McLuhanite up says, "Not only
did it lead to experts, it actually changed the shape of knowledge." Web utopians engage in the same rhetorical one-upmanship.
Many Web dystopians share the utopians' disruptive view of the Web, although they are struck more by the facts with
negative values.
Many Web realists think change happens far more incrementally. They feel the inertial weight of existing institutions and
social structures. Nothing as trivial as HTML will change the fact that most of the world is in poverty and that corrupt
corporations are firmly in control.
These positions about how history works cannot be defended by looking at history, for they determine how history is to be
read. For example, did the Howard Dean campaign in 2004 show that the Web is profoundly altering politics, that the Web
has had little effect on politics, or that the Web is further degrading politics? All three positions are defensible because
historical events such as presidential campaigns are carried along social wavefronts of unfathomable complexity. Did Dean
get as far as he did because of the Web or because of the media? Did his campaign fail because the Web created a bubble
of self-involvement, because the Web ultimately did not get people out to vote, or because he was a quirky candidate who,
without the Web, wouldn't have been noticed outside of his home state of Vermont?
To make matters yet more complex, holders of these three positions are not merely uttering descriptive statements.
Frequently, they speak in order to have a political effect:
Utopians want to excite us about the future possibilities because they want policies that will keep the Internet an
open field for bottom-up innovation.
Dystopians want to warn us of the dangers of the Web so we can create policies and practices that will mitigate
those dangers.
Realists want to clear away false promises so we can focus on what really needs to be done. Also, they'd like the
blowhard utopians to just shut up for a while.
Arguments that have different aims and are based on differing views of how history works and of the nature of the
interactions between the material and social realms are not settled by facts. In fact, they're not settled. Ever. Even after the
changes happen, these three temperaments and cognitive sets will debate why the changes happened, how significant they
were, and whether they were good, bad or indifferent.
Time won't tell.
Unfortunately, we can't afford to wait for time not to tell us. "Is the Web different?" is an urgent question. Decisions depend on
our answer.
For example, if the Web utopians are right (if the Web is transformative in an overall positive way), then it's morally
incumbent upon us to provide widespread access to as much of the world as is possible, focusing on the disadvantaged. If
the Web dystopians are right, we need to put in place whatever safeguards we can. If the realists are right, then we ought to
make tactical adjustments but ignore the hyperventilations of the utopians and dystopians.
Then there are the more localized decisions. If the Web is transforming business, for better or for worse, then businesses
need to alter their strategic plans. If the Web is merely one more way information travels, then businesses should be looking
only at tactical responses. Likewise for every other institution that deals with information, including government, media,
science, and education.
So, we need to decide.
But there is no way to decide.
Fortunately, this is not the first time we humans have been in this position. In fact, it is characteristic of politics overall. Who's
right, the liberals, the conservatives, or neither? Because such a question can't be answered to the satisfaction of all the
parties involved, we come up with political means for resolving issues. For politics to work in helping us to decide what to do
about and with the Web, we need all three positions plus the incalculable variants represented.
Together we'll settle the future's hash.
But I don't want to leave it at that happy, liberal conclusion because it is, I believe, incomplete. The fuller statement of the
conclusion should include: It is vital to have realists in the discussion, but they are essentially wrong.
I am using the word "essentially" carefully here. Web realists are often right in their particular arguments, demurrals and
corrections, and the utopians and dystopians are often wrong in their predictions, readings, and even facts. That matters. Yet,
the essence of the utopian and dystopian view is that the Web is truly different. About that they are right.
Why? I am enough of a McLuhanite to believe that media do not simply transmit messages. The means by which we
communicate has a deep, profound and even fundamental effect on how we understand ourselves and how we associate
with one another. Yes, the medium is the message.
If that's the case (and notice I am not giving any further argument for it), then there are good reasons to think that the Web as
a medium is likely to be as disruptive as other media that have had profound effects on culture. Perhaps the best comparison
is to the effect Gutenberg's invention has had on the West. Access to printed books gave many more people access to
knowledge, changed the economics of knowledge, undermined institutions that were premised on knowledge being scarce
and difficult to find, altered the nature and role of expertise, and established the idea that knowledge is capable of being
chunked into stable topics. These in turn affected our ideas about what it means to be a human and to be human together.
But these are exactly the domains within which the Web is bringing change. Indeed, it is altering not just the content of
knowledge but our sense of how ideas go together, for the Web is first and foremost about connections.
Clearly, there is much more to say about this, and much has already been said. But that is the general shape of one Web
utopian argument.
It can, of course, be challenged. It should be challenged, both in its outline and in its particulars. Here Web realists have a
vital role to play. But at the highest level of abstraction, these three positions are not truly arguable. Each is an expression of
an attitude towards the future, and the future is that which does not yet exist. None of these three positions truly knows what
the future holds if only because the prevalence of these positions itself shapes the unknown future.
And that is a reason to join the utopian tribe, or at least to acknowledge the special value it brings to the conversation.
Innovation requires the realism that keeps us from wasting time on the impossible. But some of the most radical innovation
requires ignoring one's deep-bred confidence about what is possible. This is especially true within the social realm where the
limits on new ways to associate are almost always transgressible simply by changing how we think about ourselves. We thus
need utopians to invent the impossible future.
And we need lots and lots of them. There is so much to invent, and the new forms of association that emerge often only
succeed if there are enough people to embrace them.
Web realists perform the vital function of keeping us from running down dead ends longer than we need to, and from getting
into feedback loops that distort the innovation process. For those services, we should thank and encourage the realists. But
we should also recognize that beyond the particulars, they are essentially wrong.
The contention among dystopians, realists and utopians is a struggle among the past, the present and the future. The
present is always right about itself, but in times of disruption, essentially wrong about the future. That's why we need to
flood the field with utopians so we can be right often enough that we build the best future we can.
It is, of course, simply an accident that this defense of Web utopianism comes from someone who is personally a Web
utopian. Absolutely coincidental.
Uh huh.
No Sunday
Happy New Year!
Kaczynski argues that it is impossible to escape the ratcheting clutches of industrial technology
for several reasons. One, because if you use any part of it, the system demands servitude; two,
because technology does not reverse itself, never releasing what is in its hold; and three,
because we don't have a choice of what technology to use in the long run. In his words, from the
Manifesto:
The system HAS TO regulate human behavior closely in order to function. At work, people have
to do what they are told to do, otherwise production would be thrown into chaos. Bureaucracies
HAVE TO be run according to rigid rules. To allow any substantial personal discretion to lower-level bureaucrats would disrupt the system and lead to charges of unfairness due to differences
in the way individual bureaucrats exercised their discretion. It is true that some restrictions on
our freedom could be eliminated, but GENERALLY SPEAKING the regulation of our lives by large
But we are suggesting neither that the human race would voluntarily turn power over to the
machines nor that the machines would willfully seize power. What we do suggest is that the
human race might easily permit itself to drift into a position of such dependence on the
machines that it would have no practical choice but to accept all of the machines' decisions. As
society and the problems that face it become more and more complex and machines become
more and more intelligent, people will let machines make more of their decisions for them,
simply because machine-made decisions will bring better results than man-made ones.
Eventually a stage may be reached at which the decisions necessary to keep the system running
will be so complex that human beings will be incapable of making them intelligently. At that
stage the machines will be in effective control. People won't be able to just turn the machines
off, because they will be so dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may be retained. In that
case the average man may have control over certain private machines of his own, such as his
car or his personal computer, but control over large systems of machines will be in the hands of
a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will
have greater control over the masses; and because human work will no longer be necessary the
masses will be superfluous, a useless burden on the system. If the elite is ruthless they may
simply decide to exterminate the mass of humanity. If they are humane they may use
propaganda or other psychological or biological techniques to reduce the birth rate until the
mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human
race. They will see to it that everyone's physical needs are satisfied, that all children are raised
under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him
busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his
"problem." Of course, life will be so purposeless that people will have to be biologically or
psychologically engineered either to remove their need for the power process or make them
"sublimate" their drive for power into some harmless hobby. These engineered human beings
may be happy in such a society, but they will most certainly not be free. They will have been
reduced to the status of domestic animals.
connecting North and South America rose, it took only a few thousand years for the northern
placental species, with slightly more effective metabolisms and reproductive and nervous
systems, to displace and eliminate almost all the southern marsupials.
In a completely free marketplace, superior robots would surely affect humans as North American
placentals affected South American marsupials (and as humans have affected countless
species). Robotic industries would compete vigorously among themselves for matter, energy,
and space, incidentally driving their price beyond human reach. Unable to afford the necessities
of life, biological humans would be squeezed out of existence.
Jeopardy!
Cyberwar
iWar
Johnny Ryan, A History of the Internet and the Digital Future (2010)
iWar can be waged by nations, corporations, communities, or by any reasonably tech-savvy
individual. This is a fundamental shift in the balance of offensive capability, empowering
individuals with the power to threaten the activities of governments and large corporations.
About Cyber War
Richard Clarke, Cyber War: The Next Threat to National Security (2010)
Cyber war is real. What the United States and other nations are capable of doing in a cyber war
could devastate a modern nation. Cyber war happens at the speed of light. As the photons of the
attack packets stream down fiber-optic cable, the time between the launch of an attack and its
effect is barely measurable, thus creating risks for crisis decision makers. Cyber war is global. In
any conflict, cyber attacks rapidly go global, as covertly acquired or hacked computers and
servers throughout the world are kicked into service. Cyber war skips the battlefield. Systems
that people rely upon, from banks to air defense radars, are accessible from cyberspace and can
be quickly taken over or knocked out without first defeating a country's traditional defenses.
Cyber war has begun. In anticipation of hostilities, nations are already preparing the
battlefield. They are hacking into each other's networks and infrastructures, laying in trapdoors
and logic bombs--now, in peacetime. This ongoing nature of cyber war, the blurring of peace and
war, adds a dangerous new dimension of instability.
iWar is for everyone
Johnny Ryan, A History of the Internet and the Digital Future (2010)
iWar, perhaps for the first time, is liberated from the expense and effort that traditionally inhibits
offensive action against geographically distant targets. Conventional destruction of targets by
kinetic means is enormously expensive and comparatively slow. A single B-2 Spirit stealth
bomber, which costs US $2.1 billion to develop and build, must fly from Whiteman Air Force base
in Missouri in order to drop ordnance on a target in Afghanistan. iWar, though it delivers far less
offensive impact, can inflict damage from any point on the globe at a target anywhere on the
globe at virtually no cost. For the same reason, iWar will proliferate quickly across the globe. It is
not limited by the geographical constraints that impeded the spread of earlier military
innovations. The proliferation of gunpowder in Europe puts this in perspective: appearing in
China in the seventh or eighth century, gunpowder finally made its European debut as late as
1314, first in Flanders, then in England seven years later, and in France five years after that. In
contrast, new tools and know-how necessary to wage iWar proliferate easily across the Internet.
iWar is deniable
Johnny Ryan, A History of the Internet and the Digital Future (2010)
iWar can be waged anonymously and is difficult to punish. iWar is deniable. Even if official
culpability could be proven, it is unclear how one state should respond to an iWar attack by
another. A criminal investigation would be no less problematic. If digital forensic investigation
could trace a malicious botnet to a single computer that controlled a Denial of Service (DoS) attack, it
is unlikely that effective action could be taken to prosecute. The culpable computer might be in
another jurisdiction from which cooperation would not be forthcoming. If cooperation were
forthcoming, the culpable computer might have been operated from an Internet café or at
another anonymous public connectivity site, making it impossible to determine who among the
many transient users was involved in a DoS attack that typically lasts only a short period of
time.
About Logic Bombs
Richard Clarke, Cyber War: The Next Threat to National Security (2010)
The idea of logic bombs is simple. In addition to leaving behind a trapdoor in a network so you
can get back in easily, without setting off alarms and without needing an account, cyber warriors
often leave behind a logic bomb so they don't have to take the time to upload it later on when
they need to use it. A logic bomb in its most basic form is simply an eraser: it erases all the
software on a computer, leaving it a useless hunk of metal. More advanced logic bombs could
first order hardware to do something to damage itself, like ordering an electric grid to produce a
surge that fries circuits in transformers, or causing an aircraft's control surfaces to go into the
dive position. Then it erases everything, including itself.
The CIA has used logic bombs before
Richard Clarke, Cyber War: The Next Threat to National Security (2010)
America's national security agencies are now getting worried about logic bombs, since they
seem to have found them all over our electric grid. There is a certain irony here, in that the U.S.
military invented this form of warfare. One of the first logic bombs, and possibly the first
incidence of cyber war, occurred before there even really was much of an Internet. In the early
1980s, the Soviet leadership gave their intelligence agency, the KGB, a shopping list of Western
technologies they wanted their spies to steal for them. A KGB agent who had access to the list
decided he would rather spend the rest of his days sipping wine in a Paris café than freezing in
Stalingrad, so he turned the list over to the French intelligence service in exchange for a new life
in France. France, which was part of the Western alliance, gave it to the U.S. Unaware that
Western intelligence had the list, the KGB kept working its way down, stealing technologies from
a host of foreign companies. Once the French gave the list to the CIA, President Reagan gave it
the okay to help the Soviets with their technology needs, with a catch. The CIA started a massive
program to ensure that the Soviets were able to steal the technologies they needed, but the CIA
introduced a series of minor errors into the designs for things like stealth fighters and space
weapons.
Another way of thinking about "cyberwar"
Christopher Ford, The New Atlantis (2010)
When Americans speak of cyberwar, we tend to think of lines of malicious code being sent from
one computer to another, generally via the Internet, in order to cause some kind of mischief: say,
taking down a power grid, or crashing the control systems for an air-defense network. But in fact,
the line between computer-on-computer attack and other forms of electronic assault is quite
fuzzy, and future cyber conflicts between sophisticated players may see wildly different means
and ends that we cannot now predict. An alternative way of discussing cyberwar is in terms not
of technology but of influence. In U.S. military doctrine, information warfare or information
operations (IO) are somewhat separate from cyber conflict. Information operations in time of
conflict include psychological operations, such as deception and perception management;
familiar examples from the twentieth century include dropping leaflets from airplanes, running
strategic misdirection operations, and broadcasting propaganda. Recently, this category has
broadened to include even such activities as giving interviews to the press or writing opinion
pieces for newspaper publication, as well as protection and assurance activities directed at
preserving the integrity and availability of one's own information. In the American understanding
of the terms, therefore, not all IO is cyber in nature, but the two can overlap: cyber attacks can
be used as a tool for accomplishing IO goals. For example, a combatant might hack into an
adversary's systems to plant false data or stories intended to sow fear or confusion.
feel-good site Upworthy to the abuse directed at women and minorities who write intelligent
criticism.) And what's negative? Is a manifesto for social change negative because it
criticizes the status quo or positive because it's idealistic?
But knowing about negativity bias has made me more skeptical of high-brow punditry that
defaults to dour views. If caustic wit is what garners a person whooping accolades for their
intelligence, surely public intellectuals adjust their approach accordingly.
Gibson told me that his study hadn't been cited or followed up on much by other
researchers. "Maybe you weren't negative enough?" I asked. He laughed: "I guess so."
The President is in the Beast, his giant armored vehicle that resembles a Cadillac on steroids, on
his way back from the restaurant. The Secret Service pulled him out of the restaurant when the
blackout hit, but they are having a hard time getting through the traffic. Washington's streets are
filled with car wrecks because the signal lights are all out. POTUS wants to know if it's true what
his Secret Service agent told him, that the blackout is covering the entire eastern half of the
country. No, wait, what? Now they're saying that the Vice President's detail says it's out where
he is, too. Isn't he in San Francisco today? What time is it there?
You look at your watch. It's now 8:15 p.m. Within a quarter of an hour, 157 major metropolitan
areas have been thrown into knots by a nationwide power blackout hitting during rush hour.
Poison gas clouds are wafting toward Wilmington and Houston. Refineries are burning up oil
supplies in several cities. Subways have crashed in New York, Oakland, Washington, and Los
Angeles. Freight trains have derailed outside major junctions and marshaling yards on four major
railroads. Aircraft are literally falling out of the sky as a result of midair collisions across the
country. Pipelines carrying natural gas to the Northeast have exploded, leaving millions in the
cold. The financial system has also frozen solid because of terabytes of information at data
centers being wiped out. Weather, navigation, and communications satellites are spinning out of
their orbits into space. And the U.S. military is a series of isolated units, struggling to
communicate with each other.
Several thousand Americans have already died; multiples of that number are injured and trying
to get to hospitals. There is more going on, but the people who should be reporting to you can't
get through. In the days ahead, cities will run out of food because of the train-system failures
and the jumbling of data at trucking and distribution centers. Power will not come back up
because nuclear plants have gone into secure lockdown and many conventional plants have had
their generators permanently damaged. High-tension transmission lines on several key routes
have caught fire and melted. Unable to get cash from ATMs or bank branches, some Americans
will begin to loot stores. Police and emergency services will be overwhelmed.
In all the wars America has fought, no nation has ever done this kind of damage to our cities. A
sophisticated cyber war attack by one of several nation-states could do that today, in fifteen
minutes, without a single terrorist or soldier ever appearing in this country.
Shaping the Internet Age
Bill Gates, Chairman and Chief Software Architect, Microsoft Corp. (2000)
Opportunities and Challenges. Whenever a new technology emerges with the potential to
change the way people live and work, it sparks lively debate about its impact on our world and
concern over how widely it should be adopted. Some people will view the technology with
tremendous optimism, while others will view it as threatening and disruptive. When the
telephone was first introduced, many critics thought it would disrupt society, dissolve
communities, erode privacy, and encourage selfish, destructive behavior. Others thought the
telephone was a liberating and democratizing force that would create new business
opportunities and bring society closer together.
The Internet brings many of these arguments back to life. Some optimists view the Internet as
humanity's greatest invention--an invention on the scale of the printing press. They believe the
Internet will bring about unprecedented economic and political empowerment, richer
communication between people, a cultural renaissance, and a new era of economic prosperity
and world peace. At the other extreme, pessimists think the Internet will result in economic and
cultural exploitation, the death of privacy, and a decline in values and social standards.
If history is any guide, neither side of these arguments will be proved right. Just as the
telephone, electricity, the automobile, and the airplane shaped our world in the 20th century, the
Internet will shape the early years of the 21st, and it will have a profound--and overwhelmingly
positive--impact on the way we work and live. But it will not change the fundamental aspects of
business and society--companies will still need to make a profit, people will still need their social
framework, education will still require great teachers.
However, the current debate over how widely we should adopt this technology does raise some
serious issues that must be addressed to make the most of the Internet's vast potential.
Protecting intellectual property. The Internet makes it possible to distribute any kind of
digital information, from software to books, music, and video, instantly and at virtually no cost.
The software industry has struggled with piracy since the advent of the personal computer, but
as recent controversy over file-sharing systems such as Napster and Gnutella demonstrates,
piracy is now a serious issue for any individual or business that wants to be compensated for the
works they create. And since the Internet knows no borders, piracy is now a serious global
problem. Strong legislation such as the Digital Millennium Copyright Act (DMCA), cooperation
between nations to ensure strong enforcement of international copyright law, and innovative
collaboration between content producers and the technology industry have already helped
address this problem. As more and more digital media becomes easy to distribute
over the Internet, the government and private sector must work together to find appropriate
ways to protect the rights of information consumers and producers around the world.
Regulating global commerce. How can we regulate Internet commerce--or should we do it at
all? Because the Internet offers people an easy way to purchase goods and services across state
and national borders--generating tremendous economic growth in the process--it makes global
commerce even more challenging to tax or regulate effectively. But since the Internet's economic
effects result largely from the "friction-free" commerce it enables, any regulation that gets in the
way comes at a price: lost economic growth. As more and more business transactions take place
on the Internet, governments and businesses must cooperate to find innovative ways to regulate
and derive tax revenue from Internet commerce without interfering with the economic benefits it
can provide.
Protecting individual privacy. In the coming years, people will increasingly rely on the
Internet to share sensitive information with trusted parties about their finances, medical history,
personal habits, and buying preferences. At the same time, many will wish to safeguard this
information, and use the Internet anonymously. Although technology has placed individual
privacy at risk for decades--most consumers regularly use credit cards and exchange sensitive
information with merchants over the telephone--privacy will become a far more pressing issue as
the Internet becomes the primary way for people to manage their finances or keep in touch with
their physician. The use of personal information by retailers wishing to provide personalized
service and advertisers that want to target very specific audiences--some of whom have resorted
to gathering information from consumers without notifying them--has greatly increased public
concern over the safety of personal information. It has also left many people reluctant to trust
the Internet with their data.
Keeping the Internet secure. Security has always been a major issue for businesses and
governments that rely on information technology, and it always will be. Much the same is true for
individual security--long before the Internet, people were happily handing their credit cards to
restaurant waiters they had never met before, and that too is unlikely to change. But as our
economy increasingly depends on the Internet, security is of even greater concern. Widely
publicized incidents of Web site hacking, credit card fraud and identity theft have given the
Internet a largely unjustified "Wild West" reputation. In order to keep the Internet a safe place to
do business, software companies have a responsibility to work together to ensure that their
products always offer the highest levels of security. And the judicial system and the law
enforcement community must keep pace with technological advancements and enforce criminal
laws effectively and thoroughly.
Protecting our children. The Internet can revolutionize education, giving children the
opportunity to indulge their intellectual curiosity and explore their world. But while it helps them
to learn about dinosaurs or world history, it can also expose them to obscene, violent or
inappropriate content. And since the Internet is an unregulated global medium, it is hard to
"censor" in any traditional way. The private sector has already made great strides in giving
parents and teachers more control over what children can see and do on the Internet, through
filtering software that blocks access to objectionable Web sites; industry standards such as the
still-evolving Platform for Internet Content Selection (PICS) that enable helpful rating systems;
and Internet Service Providers (ISPs) that voluntarily regulate the activities of their customers.
Government has also played a part, encouraging the growth of the market for child-safety tools,
and increasing law enforcement's role in policing and prosecuting online predators. So far, the
issue of protecting children on the Internet has served as an excellent example of how
governments and the private sector can work together to tackle problems on the Internet.
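The ratings-based filtering Gates describes can be made concrete with a small sketch: a PICS-style label assigns numeric ratings to a page, and the filter admits a page only if every rated category falls within the thresholds a parent has configured. The category names, scales, and thresholds below are hypothetical illustrations, not the actual PICS vocabulary.

```python
# Hypothetical parent-configured maximums per rating category.
LIMITS = {"violence": 1, "language": 2}

def allowed(page_labels, limits=LIMITS):
    """Permit a page only if every category the filter tracks
    is rated at or below its configured maximum.
    Unrated categories default to 0 (no objectionable content claimed)."""
    return all(page_labels.get(category, 0) <= maximum
               for category, maximum in limits.items())

print(allowed({"violence": 0, "language": 1}))  # True: within both limits
print(allowed({"violence": 3}))                 # False: violence exceeds its limit
```

Note the design choice: an unlabeled category is treated as 0 here, which trusts the label; a stricter filter might instead block pages that carry no label at all.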
Bridging the "digital divide." The Internet can empower and enrich the lives of
disadvantaged people around the world--but only if they have access to it. In the 1930s, the
United States government helped bridge the "electrical divide" by forming the Rural
Electrification Administration, which brought power to rural areas that could benefit most from
electrification. Similarly, "universal service" programs have helped ensure that some remote
areas and disadvantaged communities have access to inexpensive telephone service. The benefits of
widespread access to the Internet and communications technology are clear enough that
governments now need to decide whether a similar principle should be applied to ensure that
nobody is left behind in the Internet Age.
What is government's role? The Internet is a constantly changing global network that knows no
borders, presenting a unique problem for governments that need to address the many
challenges it presents. In the coming years, governments will have the opportunity to develop
thoughtful and innovative approaches to policies that protect their citizens while nurturing the
openness, flexibility, and economic opportunities that make the Internet such a compelling
technology.
The light hand of government regulation has created an environment that has encouraged the
Internet to flourish, and enabled companies to bring their innovations to consumers at
breathtaking speed. Over the next few years, governments worldwide will find it rewarding to
pursue policies that speed the building of the infrastructure that will make it possible to bring the
benefits of the Internet to more people. This includes finding ways to speed the implementation
of broadband technologies, deregulate where necessary to stimulate competition, resist the
temptation to enact new regulations, and redouble our efforts to protect content on the Internet
by strengthening and enforcing intellectual-property rights.
A world where the market for news, entertainment and information has been
perfected
Cass Sunstein, Republic.com 2.0 (2008)
It is some time in the future. Technology has greatly increased people's ability to filter what
they want to read, see, and hear. With the aid of the Internet, you are able to design your own
newspapers and magazines. You can choose your own programming, with movies, game shows,
sports, shopping, and news of your choice. You mix and match.
You need not come across topics and views that you have not sought out. Without any difficulty,
you are able to see exactly what you want to see, no more and no less. You can easily find out
what people like you tend to like and dislike. You avoid what they dislike. You take a close look
at what they like.
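The "people like you" mechanism Sunstein imagines is, in essence, collaborative filtering: find the users whose past likes and dislikes most resemble yours, then surface what they liked. A minimal sketch, with entirely hypothetical users and items:

```python
# Minimal collaborative-filtering sketch: ratings are +1 (like) or -1 (dislike).

def similarity(a, b):
    """Count the items two users rated the same way."""
    shared = set(a) & set(b)
    return sum(1 for item in shared if a[item] == b[item])

def recommend(you, others):
    """Suggest items liked by the most similar other user
    that you have not yet rated yourself."""
    best = max(others, key=lambda user: similarity(you, user))
    return sorted(item for item, score in best.items()
                  if score == 1 and item not in you)

you = {"baseball": 1, "opera": -1, "tennis": 1}
others = [
    {"baseball": 1, "tennis": 1, "football": 1, "opera": -1},  # agrees with you
    {"opera": 1, "modern dance": 1, "baseball": -1},           # disagrees
]
print(recommend(you, others))  # ['football'] -- the like-minded user's pick
```

The sketch also shows why such filters narrow exposure: nothing here ever surfaces an item from the dissimilar user's profile.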
Maybe you want to focus on sports all the time, and to avoid anything dealing with business or
government. It is easy to do exactly that. Maybe you choose replays of your favorite tennis
matches in the early evening, live baseball from New York at night, and professional football on
the weekends. If you hate sports and want to learn about the Middle East in the evening from the
perspective you find most congenial, you can do that too. If you care only about the United
States and want to avoid international issues entirely, you can restrict yourself to material
involving the United States. So too if you care only about Paris, or London, or Chicago, or Berlin,
or Cape Town, or Beijing, or your hometown.
Perhaps you have no interest at all in news. Maybe you find news impossibly boring. If so,
you need not see it at all. Maybe you select programs and stories involving only music and
weather. Or perhaps your interests are more specialized still, concentrating on opera, or
Beethoven, or Bob Dylan, or modern dance, or some subset of one or more of the above. (Maybe
you like early Dylan and hate late Dylan.)
If you are interested in politics, you may want to restrict yourself to certain points of view by
hearing only from people with whom you agree. In designing your preferred newspaper, you
choose among conservatives, moderates, liberals, vegetarians, the religious right, and socialists.
You have your favorite columnists and bloggers; perhaps you want to hear from them and from
no one else. Maybe you know that you have a bias, or at least a distinctive set of tastes, and you
want to hear from people with that bias or that taste. If so, that is entirely feasible. Or perhaps
you are interested in only a few topics. If you believe that the most serious problem is gun
control, or climate change, or terrorism, or ethnic and religious tension, or the latest war, you
might spend most of your time reading about that problem--if you wish, from the point of view
that you like best.
Of course everyone else has the same freedom that you do. Many people choose to avoid news
altogether. Many people restrict themselves to their own preferred points of view--liberals
watching and reading mostly or only liberals; moderates, moderates; conservatives,
conservatives; neo-Nazis or terrorist sympathizers, neo-Nazis or terrorist sympathizers. People in
different states and in different countries make predictably different choices. The citizens of Utah
see and hear different topics, and different ideas, from the citizens of Massachusetts. The
citizens of France see and hear entirely different perspectives from the citizens of China and the
United States. And because it is so easy to learn about the choices of people like you,
countless people make the same choices that are made by others like them.
The resulting divisions run along many lines--of religion, ethnicity, nationality, wealth, age,
political conviction, and more. People who consider themselves left-of-center make very different
selections from those made by people who consider themselves right-of-center. Most whites
avoid news and entertainment options designed for African Americans. Many African Americans
focus largely on options specifically designed for them. So too with Hispanics. With the reduced
importance of the general-interest magazine and newspaper and the flowering of individual
programming design, different groups make fundamentally different choices.
The market for news, entertainment, and information has finally been perfected. Consumers are
able to see exactly what they want. When the power to filter is unlimited, people can decide, in
advance and with perfect accuracy, what they will and will not encounter. They can design
something very much like a communications universe of their own choosing. And if they have
trouble designing it, it can be designed for them, again with perfect accuracy.