Science Technology and Society: Midterm
Name of Student________________________________________________________________________
Program/Course________________________________________________________________________
Address________________________________________________________________________________
Cellphone Number______________________________________________________________________
Facebook Account_______________________________________________________________________
Email Address __________________________________________________________________________
Marianne G. Arias
FB Account: Marianne Giron Arias
Gmail Account: mariannegironarias@gmail.com
Cellphone Number: 09952567566
Module 2: Science, Technology and Society and the Human Condition
This module introduces students to a number of relevant and timely philosophical foundations that will aid in
examining the functions, roles, and impacts of science and technology on society. The module is divided into
five sections. These sections aim to provide students with cogent and comprehensive knowledge on the
concept of human flourishing in the face of rapid scientific progress and technological development.
Diagnostics
Instructions: Rate the extent of your agreement with the following statements using the Osgood scale. You are
also given space to write any comments to further clarify your responses.
" The essence of technology is by no means anything technological." Martin Heidegger (1977)
Martin Heidegger (1889-1976) is widely acknowledged as one of the most important
philosophers of the 20th century. He was a German philosopher who belonged to the
Continental tradition of philosophy.
His stern opposition to positivism and technological world domination received
unequivocal support from leading postmodernists and post-structuralists of the time,
including Jacques Derrida, Michel Foucault, and Jean-Francois Lyotard.
In 1933, he joined the Nazi Party (NSDAP) and remained a member until it
was dismantled toward the end of World War II. This resulted in his dismissal from
the University of Freiburg in 1949. He was only able to resume teaching in 1951.
Heidegger's membership in the Nazi Party made him controversial: his philosophical
work was often eclipsed by his political affiliation, with critics saying that his philosophy would always be
rooted in his political consciousness.
Heidegger's work on philosophy focused on ontology, or the study of 'being' (Dasein in German). His
philosophical works are often described as complicated, partly due to his use of complex compound German
words, such as Seinsvergessenheit (Forgetfulness of Being), Bodenständigkeit (Rootedness in Soil), and
Wesensverfassung (Essential Constitution).
To know more about the life and philosophy of Heidegger, watch the five-minute YouTube video entitled
The Philosophy of Martin Heidegger, which can be accessed at this link:
https://www.youtube.com/watch?v=Br1sGtA7XTU. Remember, it is important to understand basic concepts
related to Heidegger's philosophy to better make sense of his work.
Thus, for Heidegger, technology is a form of poiesis, a way of revealing that unconceals aletheia, or the truth.
This is seen in the way the term techne, the Greek root word of technology, is understood in different
contexts. In philosophy, techne resembles the term episteme, which refers to the human ability to make and
perform. Techne also encompasses knowledge and understanding. In art, it refers to tangible and intangible
aspects of life. The Greeks understood techne in a way that encompasses not only craft, but other acts of
the mind, and poetry.
Enframing, according to Heidegger, is akin to two ways of looking at the world: calculative thinking and
meditative thinking. In calculative thinking, humans desire to put an order to nature to better understand and
control it. In meditative thinking, humans allow nature to reveal itself to them without the use of force or
violence. One way of thinking is not necessarily better than the other. In fact, humans are capable of using
both and will benefit from being able to harmonize these ways of looking at the world. Yet, calculative
thinking tends to be more commonly utilized, primarily because of humans' desire to control nature out of
their fear of irregularity.
Enframing, then, is a way of ordering (or framing) nature to better manipulate it. Enframing happens because
of humans' desire for security, even if it reduces all of nature to a standing reserve ready for exploitation.
Modern technology challenges humans to enframe nature. Thus, humans become part of the standing reserve
and an instrument of technology, to be exploited in the ordering of nature. The role humans take as
instruments of technology through enframing is called destining. In destining, humans are challenged forth
by enframing to reveal what is real. However, this destining of humans to reveal nature carries with it the
danger of misconstruction or misinterpretation.
Recognizing the dangers of technology requires critical and reflective thinking about its use. For example,
social media has indeed connected people in the most efficient and convenient way imaginable, but it has
also inadvertently given rise to issues such as invasion of privacy, online disinhibition, and the proliferation
of fake news. A line has to be drawn between what constitutes a beneficial use of social media and a
dangerous one. As exemplified, social media comes with both benefits and drawbacks.
Because the essence of technology is nothing technological, essential reflection upon technology and
decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of
technology and, on the other, fundamentally different from it. Such a realm is art. But certainly only if
reflection on art, for its part, does not shut its eyes to the constellation of truth after which we are
questioning (1977, p. 19).
Heidegger underscored the importance of questioning in the midst of technology. For him, there is
unparalleled wisdom gained only when humans are able to pause, think, and question what is around them.
Humans are consumed by technology when they are caught up in enframing and fail to pay attention to the
intricacies of technology, the brilliance of the purpose of humankind, and the genius of humans to bring forth
the truth.
Questioning is the piety of thought. It is only through questioning that humans are able to reassess their
position not only in the midst of technology around them, but also, and most importantly, in the grand
scheme of things. Heidegger posited that it is through questioning that humans bear witness to the crises that
a complete preoccupation with technology brings, preventing them from experiencing the essence of
technology. Thus, humans need to take a step back and reassess who they were, who they are, and who they
are becoming in the midst of technology in this day and age.
Exercise 1
a. bring forth                                b. challenge forth
_______________________________________ _____________________________________________
_______________________________________ _____________________________________________
_______________________________________ __________________________________________
_______________________________________ ____________________________________________
Exercise 2. Reflection
Instructions: After studying the full text of Martin Heidegger's The Question Concerning Technology,
available at https://www2.hawaii.edu/~freeman/courses/phil394/The%20Question%20Concerning%20Technology.pdf,
answer the following:
1. What three concepts remain unclear or difficult for you to understand?
a.______________________________________________________________________________________
_______________________________________________________________________________________
b. _____________________________________________________________________________________
_______________________________________________________________________________________
c.______________________________________________________________________________________
_______________________________________________________________________________________
2. What three significant insights did you gain in studying this text?
a.______________________________________________________________________________________
_______________________________________________________________________________________
b.______________________________________________________________________________________
_______________________________________________________________________________________
c.______________________________________________________________________________________
_______________________________________________________________________________________
3. What three questions do you want to ask about the text?
a.______________________________________________________________________________________
_______________________________________________________________________________________
b.______________________________________________________________________________________
_______________________________________________________________________________________
c.______________________________________________________________________________________
_______________________________________________________________________________________
Facebook has been scrambling for weeks in the face of the disclosures on hijacking of private data by the
consulting group working for Donald Trump's 2016 campaign.
The British firm responded to the Facebook announcement by repeating its claim that it did not use data
from the social network in the 2016 election.
"Cambridge Analytica did not use GSR (Global Science Research) Facebook data or any derivatives of this
data in the US presidential election," the company said in a tweet. "Cambridge Analytica licensed data from
GSR for 30 million individuals, not 87 million."
"It's not enough just to give people a voice," he said. "We have to make sure people don't use that voice to
hurt people or spread disinformation."
Late Tuesday, April 3, Facebook said it deleted dozens of accounts linked to a Russian-sponsored internet
unit which has been accused of spreading propaganda and other divisive content in the United States and
elsewhere.
The social networking giant said it revoked 70 Facebook accounts and 65 Instagram accounts, and
removed 138 Facebook pages controlled by the Russia-based Internet Research Agency (IRA). The agency
has been called a "troll farm" due to its deceptive posts aimed at sowing discord and propagating
misinformation.
The unit "has repeatedly used complex networks of inauthentic accounts to deceive and manipulate
people who use Facebook, including before, during and after the 2016 US presidential elections,"
Facebook chief security officer Alex Stamos said in a statement. Rappler.com
Source: Agence France-Presse. (2018, April 5). Facebook says 87 million may be affected by data privacy
scandal. Rappler. Retrieved on April 24, 2018 from
https://www.rappler.com/technology/news/199588-facebook-data-affected-cambridge-analytica-scandal
Questions:
1. What is this data privacy scandal all about?
____________________________________________________________________________________
____________________________________________________________________________________
_________________________________________________________________________________
2. How does this Facebook privacy scandal relate to Heidegger's notion of revealing of modern
technology as challenging forth?
____________________________________________________________________________________
____________________________________________________________________________________
_________________________________________________________________________________
3. How are Facebook users 'enframed' in this particular data privacy scandal?
____________________________________________________________________________________
____________________________________________________________________________________
_________________________________________________________________________________
4. How do you think Facebook can be used in a way that is more consistent with Heidegger's idea of
poiesis or a bringing forth of technology?
____________________________________________________________________________________
____________________________________________________________________________________
_________________________________________________________________________________
5. How can the Heideggerian notion of 'questioning' guide Facebook users toward a beneficial use of
social media?
____________________________________________________________________________________
____________________________________________________________________________________
__________________________________________________________________________________
Lesson 2. Human Flourishing in Progress and De-development
Diagnostics
Instructions: Examine the picture and follow the prompt that follows.
Recent research found that 70% of people in middle- and high-income countries believe that
overconsumption is putting the planet and society at risk. Discuss your thoughts on the following:
1. How do you think overconsumption puts the planet and society at risk?
____________________________________________________________________________________
____________________________________________________________________________________
_________________________________________________________________________________
2. What are the manifestations of society's tendency to overproduce and overconsume?
____________________________________________________________________________________
____________________________________________________________________________________
_________________________________________________________________________________
3. Should middle- and high-income countries regulate their growth and consumption? Why or why not?
____________________________________________________________________________________
____________________________________________________________________________________
__________________________________________________________________________________
Thoughts to Ponder
Despite efforts to close the gap between rich and poor countries, a BBC report in 2015 stated that the gap in
growth and development just keeps on widening. Although there is no standard measure of inequality, the
report claimed that most indicators suggest that the widening of the growth gap slowed during the financial
crisis of 2007 but is now growing again. The increasing inequality appears paradoxical considering the
efforts that have been poured into development programs designed to assist poor countries to rise from
absent to slow progress.
With this backdrop, and in the context of unprecedented scientific and technological advancement and
economic development, humans must ask themselves whether they are flourishing individually and
collectively. If development efforts to close the gap between rich and poor countries have failed, is it
possible to confront the challenges of development through a nonconformist framework?
In the succeeding article, Jason Hickel, an anthropologist at the London School of Economics, criticizes the
failure of growth and development efforts to eradicate poverty over the past seven decades. More
importantly, he offers a nonconformist perspective on growth and development.
Forget 'developing' poor countries, it's time to 'de-develop' rich countries by Jason Hickel
This week, heads of state are gathering in New York to sign the UN's new sustainable development goals
(SDGs). The main objective is to eradicate poverty by 2030.
Beyoncé, One Direction and Malala are on board. It's set to be a monumental international celebration.
Given all the fanfare, one might think the SDGs are about to offer a fresh plan for how to save the world,
but beneath all the hype, it's business as usual. The main strategy for eradicating poverty is the same:
growth.
Growth has been the main object of development for the past 70 years, despite the fact that it's not working.
Since 1980, the global economy has grown by 380%, but the number of people living in poverty on less than
$5 (£3.20) a day has increased by more than 1.1 billion. That's 17 times the population of Britain. So much
for the trickle-down effect.
Orthodox economists insist that all we need is yet more growth. More progressive types tell us that we need
to shift some of the yields of growth from the richer segments of the population to the poorer ones, evening
things out a bit. Neither approach is adequate. Why? Because even at current levels of average global
consumption, we're overshooting our planet's biocapacity by more than 50% each year.
Others might think that wealth is a potential candidate for the ultimate
good, but a critique of wealth would prove otherwise. Indeed, many, if not
most, aim to be financially stable, to be rich, or to be able to afford a
luxurious life. However, it is very common to hear people say that they aim
to be wealthy insofar as it would help them achieve some other goals.
Elsewhere, it is also common to hear stories about people who have become
very wealthy but remain, by and large, unhappy with the lives they lead. In
this sense, wealth is just an intermediate good, that is, only instrumental. It is
not the ultimate good because it is not self-sufficient and does not stop one
from aiming for some other 'greater' good.
Intellectual virtue or virtue of thought is achieved through education, time, and experience. Key intellectual
virtues are wisdom, which guides ethical behavior, and understanding, which is gained from scientific
endeavors and contemplation. Wisdom and understanding are achieved through formal and nonformal means.
Intellectual virtues are acquired through self-taught knowledge and skills as much as those knowledge and
skills taught and learned in formal institutions.
Moral virtue or virtue of character is achieved through habitual practice. Some key moral virtues are
generosity, temperance, and courage. Aristotle explained that although the capacity for virtue is
innate, it is brought into completion only by practice. It is by repeatedly being unselfish that one develops the
virtue of generosity. It is by repeatedly resisting and foregoing every inviting opportunity to overindulge that
one develops the virtue of temperance. It is by repeatedly exhibiting the proper action and emotional
response in the face of danger that one develops the virtue of courage. By and large, moral virtue is like a
skill. A skill is acquired only through repeated practice. Everyone is capable of learning how to play the
guitar because everyone has an innate capacity for it, but not everyone acquires the skill because only those
who devote time and practice develop the skill of playing the instrument.
If one learns that eating too much fatty food is bad for the health, he or she has to make it a habit to stay
away from this type of food because health contributes to living well and doing well. If one believes that too
much use of social media is detrimental to human relationships and productivity, he or she must regulate his
or her use of social media and deliberately spend more time with friends, family, and work than on virtual
platforms. If one understands the enormous damage to the environment that plastic materials bring, he or she
must repeatedly forego the next plastic item he or she could do away with. Good relationship dynamics and a
healthy environment contribute to one's wellness, in how he or she lives and what he or she does.
Both intellectual virtue and moral virtue should be in accordance with reason to achieve eudaimonia.
Indifference toward these virtues, for reasons of mere convenience, pleasure, or satisfaction, leads humans
away from eudaimonia.
A virtue is ruined by any excess or deficiency in how one lives and acts. A balance between the two extremes
is a requisite of virtue. This balance is a mean between excess and deficiency, not in the sense of a geometric
or arithmetic average. Instead, it is a mean relative to the person, the circumstances, and the right emotional
response in every experience (NE 2:2; 2:6).
Consider the virtue of courage. Courage was earlier defined as displaying the right action and emotional
response in the face of danger. The virtue of courage is ruined by an excess of the needed emotion and
proper action to address a particular situation. A person who does not properly assess the danger and is
totally without fear may develop the vice of foolhardiness or rashness. Courage is also ruined by a
deficiency of the needed emotion and proper action. When one overthinks a looming danger to the point that
he or she becomes too fearful and incapable of acting on the problem, he or she develops the vice of cowardice.
___________________________________________________________________
Hidden Sugar Found on the Label
Description:
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
Diagnostics
Instructions: Rate the extent of your agreement with each statement by marking the box that corresponds to
your response in each row.
5 - Extremely Agree  4 - Somewhat Agree  3 - To a Limited Extent  2 - Somewhat Disagree  1 - Extremely Disagree
Statements 5 4 3 2 1
1. Human rights are fundamental rights.
2. Responding to urgent global challenges allows setting aside some human
rights.
3. It is not the duty of scientists and innovators to protect the well-being and
dignity of humans
4. Human rights should be at the core of any scientific and technological
endeavor.
5. A good life is a life where human rights are upheld
6. Human rights should be integrated in the journey toward the ultimate good
7. It is not the primary function of science and technology to protect the weak,
poor, and vulnerable
8. There is no way for science and technology to fully function as a safeguard
of human rights
9. A human rights-based approach to science and technology development is
imperative
10. The protection of human rights and continued science technology and
advancement can work hand-in-hand
Mukherjee (2012) furthered that this approach identifies science as "a socially organized human activity
which is value-laden and shaped by organizational structures and procedures." Moreover, it requires an
answer to whether governments and other stakeholders can craft and implement science and technology
policies that "ensure safety, health and livelihoods; include people's needs and priorities in development and
environmental strategies; and ensure they participate in decision-making that affects their lives and
resources."
Multiple international statutes, declarations, and decrees have been produced to ensure well-being and
human dignity. Mukherjee listed some of the most important documents that center on a human rights-based
approach to science, development, and technology, and their key principles:
Table 2. Useful documents for a human rights-based approach to science, technology, and development

Document: Universal Declaration of Human Rights (Article 27)
Key Principles: This document affirms everyone's right to participate in and benefit from scientific advances,
and to be protected from scientific misuses. The right to the benefits of science comes under the domain of
'culture,' so it is usually examined from a cultural rights perspective.

Document: UNESCO Recommendation on the Status of Scientific Researchers, 1974 (Article 4)
Key Principles: This document affirms that all advances in scientific and technological knowledge should
solely be geared toward the welfare of global citizens, and calls upon member states to develop the necessary
protocols and policies to monitor and secure this objective. Countries are asked to show that science and
technology are integrated into policies that aim to ensure a more humane and just society.

Document: UNESCO Declaration on the Use of Scientific Knowledge, 1999 (Article 33)
Key Principles: This document states, "Today, more than ever, science and its applications are indispensable
for development. All levels of government and private sector should provide enhanced support for building
up an adequate and evenly distributed scientific and technological capacity through appropriate education
and research programmes as an indispensable foundation for economic, social, cultural and environmentally
sound development. This is particularly urgent for developing countries." This Declaration encompasses
issues such as pollution-free production, efficient resource use, biodiversity protection, and brain drain.
A human rights-based approach to science, technology, and development sets the parameters for the
appraisal of how science, technology, and development promote human well-being. Thus, the discussion of
human rights in the face of changing scientific and technological contexts must not serve as a merely
decorative moral dimension of scientific and technological policies. As Mukherjee (2012) posited, this
approach "can form the very heart of sustainable futures."
Human rights should be integral to the journey toward the ultimate good. They should guide humans not
only to flourish as individual members of society, but also to assist each other in flourishing collectively as a
society. Human rights are rights to sustainability, as Mukherjee put it. They may function as the 'golden
mean,' particularly by protecting the weak, poor, and vulnerable from the deficiencies and excesses of science
and technology. By imposing upon science and technology the moral and ethical duty to protect and uphold
human rights, there can be a more effective and sustainable approach to bridging the gap between poor and
rich countries in both tangible (e.g., services and natural resources) and intangible (e.g., well-being and
human dignity) aspects. Ultimately, all these will lead humans to flourish together through science and
technology.
4. What is the danger of using human rights as a merely decorative moral dimension of scientific and
technological policies?
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
______________________________________________________________________________________
5. Do you agree with Mukherjee's assertion that a human rights-based approach to science, technology,
and development can form the very heart of sustainable futures? Explain.
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
______________________________________________________________________________________
Choose one among the six approaches and make an analysis. Be guided by the following questions. (Use
letter size bond paper).
1. What is the instrument all about?
2. Who are the parties/signatories to the instrument?
3. What article/s or section/s of the instrument articulate the centrality of human rights vis-à-vis science,
technology, and development?
4. How does the instrument safeguard human rights in the face of science and technology?
5. What challenges stand in the way of the instrument and its key principles in safeguarding human
rights amidst the changing scientific and technological contexts?
Assignment 4. Reading Enrichment Task
Instructions: Choose and read one of the two reading materials and answer the enrichment questions that
follow:
1. Evans, D. (2007, March 9). The ethical dilemmas of robotics. BBC News. Retrieved from
http://news.bbc.co.uk/2/hi/ technology/6432307.stm
Ethical questions
South Korea is one of the world's most hi-tech societies.
Citizens enjoy some of the highest speed broadband connections in the
world and have access to advanced mobile technology long before it hits
western markets.
The government is also well known for its commitment to future technology.
A recent government report forecast that robots would routinely carry out surgery by 2018.
The Ministry of Information and Communication has also predicted that every South Korean household will
have a robot by between 2015 and 2020.
In part, this is a response to the country's aging society and also an acknowledgement that the pace of
development in robotics is accelerating.
The new charter is an attempt to set ground rules for this future.
"Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the
ministry's robot team told the AFP news agency.
"Others may get addicted to interacting with them just as many internet users get hooked to the cyberworld."
Alien encounters
The new guidelines could reflect the three laws of robotics put forward by author Isaac Asimov in his short
story Runaround in 1942, she said.
Key considerations would include ensuring human control over robots, protecting data acquired by robots
and preventing illegal use.
Other bodies are also thinking about the robotic future. Last year a UK government study predicted that in
the next 50 years robots could demand the same rights as human beings.
The European Robotics Research Network is also drawing up a set of guidelines on the use of robots.
This ethical roadmap has been assembled by researchers who believe that robotics will soon come under the
same scrutiny as disciplines such as nuclear physics and bioengineering.
A draft of the proposals said: "In the 21st Century humanity will coexist with the first alien intelligence we
have ever come into contact with - robots.
"It will be an event rich in ethical, social and economic problems."
Their proposals are expected to be issued in Rome in April.
b. Which among the instruments for a human rights-based approach to science, technology, and
development discussed in this section may be useful in contending with the ethical dilemmas of
robotics?
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
______________________________________________________________________________________
c. How can the instrument inform lawyers, ethicists, engineers, and scientists in answering the
moral and legal questions raised by the developments in robotics?
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
_______________________________________________________________________________________
______________________________________________________________________________________
2. Carr, N. (2008). Is Google making us stupid? What the internet is doing to our brains. The Atlantic.
Retrieved from https://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/
"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?" So the supercomputer HAL pleads with
the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of
Stanley Kubrick's 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by
the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial
"brain." "Dave, my mind is going," HAL says, forlornly. "I can feel it. I can feel it."
I can feel it, too. Over the past few years I've had an uncomfortable sense that someone, or something,
has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind
isn't going—so far as I can tell—but it's changing. I'm not thinking the way I used to think. I can feel it
most strongly when I'm reading. Immersing myself in a book or a lengthy article used to be easy. My
mind would get caught up in the narrative or the turns of the argument, and I'd spend hours strolling
through long stretches of prose. That's rarely the case anymore. Now my concentration often starts to
drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel
as if I'm always dragging my wayward brain back to the text. The deep reading that used to come
naturally has become a struggle.
I think I know what's going on. For more than a decade now, I've been spending a lot of time online,
searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a
godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries
can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I've got the
telltale fact or pithy quote I was after. Even when I'm not working, I'm as likely as not to be foraging in
the Web's info-thickets—reading and writing e-mails, scanning headlines and blog posts, watching
videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which
they're sometimes likened, hyperlinks don't merely point to related works; they propel you toward them.)
For me, as for others, the Net is becoming a universal medium, the conduit for most of the information
that flows through my eyes and ears and into my mind. The advantages of having immediate access to
such an incredibly rich store of information are many, and they've been widely described and duly
applauded. "The perfect recall of silicon memory," Wired's Clive Thompson has written, "can be an
enormous boon to thinking." But that boon comes at a price. As the media theorist Marshall McLuhan
pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of
thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away
my capacity for concentration and contemplation. My mind now expects to take in information the way
the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of
words. Now I zip along the surface like a guy on a Jet Ski.
I'm not the only one. When I mention my troubles with reading to friends and acquaintances—literary
types, most of them—many say they're having similar experiences. The more they use the Web, the more
they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also
begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently
confessed that he has stopped reading books altogether. "I was a lit major in college, and used to be [a]
voracious book reader," he wrote. "What happened?" He speculates on the answer: "What if I do all my
reading on the web not so much because the way I read has changed, i.e. I'm just seeking convenience,
but because the way I THINK has changed?"
Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how
the Internet has altered his mental habits. "I now have almost totally lost the ability to read and absorb a
longish article on the web or in print," he wrote earlier this year. A pathologist who has long been on the
faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a
telephone conversation with me. His thinking, he said, has taken on a "staccato" quality, reflecting the
way he quickly scans short passages of text from many sources online. "I can't read War and Peace
anymore," he admitted. "I've lost the ability to do that. Even a blog post of more than three or four
paragraphs is too much to absorb. I skim it."
Anecdotes alone don't prove much. And we still await the long-term neurological and psychological
experiments that will provide a definitive picture of how Internet use affects cognition. But a recently
published study of online research habits, conducted by scholars from University College London,
suggests that we may well be in the midst of a sea change in the way we read and think. As part of the
five-year research program, the scholars examined computer logs documenting the behavior of visitors to
two popular research sites, one operated by the British Library and one by a U.K. educational consortium,
that provide access to journal articles, e-books, and other sources of written information.
They found that people using the sites exhibited "a form of skimming activity," hopping from one
source to another and rarely returning to any source they'd already visited. They typically read no more
than one or two pages of an article or book before they would "bounce" out to another site. Sometimes
they'd save a long article, but there's no evidence that they ever went back and actually read it. The
authors of the study report:
It is clear that users are not reading online in the traditional sense; indeed there are signs that new
forms of "reading" are emerging as users "power browse" horizontally through titles, contents pages
and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional
sense.
Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell
phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our
medium of choice. But it's a different kind of reading, and behind it lies a different kind of thinking—
perhaps even a new sense of the self. "We are not only what we read," says Maryanne Wolf, a
developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and
Science of the Reading Brain. "We are how we read." Wolf worries that the style of reading promoted
by the Net, a style that puts "efficiency" and "immediacy" above all else, may be weakening our
capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made
long and complex works of prose commonplace. When we read online, she says, we tend to become
"mere decoders of information." Our ability to interpret text, to make the rich mental connections that
form when we read deeply and without distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It's not etched into our genes the
way speech is. We have to teach our minds how to translate the symbolic characters we see into the
language we understand. And the media or other technologies we use in learning and practicing the craft
of reading play an important part in shaping the neural circuits inside our brains. Experiments
demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is
very different from the circuitry found in those of us whose written language employs an alphabet. The
variations extend across many regions of the brain, including those that govern such essential cognitive
functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the
circuits woven by our use of the Net will be different from those woven by our reading of books and
other printed works.
Sometime in 1882, a nearly blind Friedrich Nietzsche bought a typewriter and, once he had mastered
touch-typing, was again able to commit words to the page. But the machine had a subtler effect on his
work. One of Nietzsche's friends, a composer, noticed a change in the style of his writing. His already
terse prose had become even tighter, more telegraphic. "Perhaps you will through this instrument even
take to a new idiom," the friend wrote in a letter, noting that, in his own work, his "'thoughts' in music
and language often depend on the quality of pen and paper."
"You are right," Nietzsche replied, "our writing equipment takes part in the forming of our
thoughts." Under the sway of the machine, writes the German media scholar Friedrich A. Kittler,
Nietzsche's prose "changed from arguments to aphorisms, from thoughts to puns, from rhetoric to
telegram style."
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the
dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the
time we reached adulthood. But brain researchers have discovered that that's not the case. James Olds, a
professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason
University, says that even the adult mind "is very plastic." Nerve cells routinely break old connections
and form new ones. "The brain," according to Olds, "has the ability to reprogram itself on the fly,
altering the way it functions."
As we use what the sociologist Daniel Bell has called our "intellectual technologies"—the tools that
extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of
those technologies. The mechanical clock, which came into common use in the 14th century, provides a
compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford
described how the clock "disassociated time from human events and helped create the belief in an
independent world of mathematically measurable sequences." The "abstract framework of divided time"
became "the point of reference for both action and thought."
The clock's methodical ticking helped bring into being the scientific mind and the scientific man. But it
also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976
book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world
that emerged from the widespread use of timekeeping instruments "remains an impoverished version of
the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed
constituted, the old reality." In deciding when to eat, to work, to sleep, to rise, we stopped listening to our
senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to
explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains
as operating "like clockwork." Today, in the age of software, we have come to think of them as
operating "like computers." But the changes, neuroscience tells us, go much deeper than metaphor.
Thanks to our brain's plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in
1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed
only as a theoretical machine, could be programmed to perform the function of any other
information-processing device. And that's what we're seeing today. The Internet, an immeasurably
powerful computing system, is subsuming most of our other intellectual technologies. It's becoming our
map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio
and TV.
When the Net absorbs a medium, that medium is re-created in the Net's image. It injects the medium's
content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the
content of all the other media it has absorbed. A new e-mail message, for instance, may announce its
arrival as we're glancing over the latest headlines at a newspaper's site. The result is to scatter our
attention and diffuse our concentration.
The Net's influence doesn't end at the edges of a computer screen, either. As people's minds become
attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience's new
expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers
shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse
info-snippets. When, in March of this year, The New York Times decided to devote the second and third
pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the
"shortcuts" would give harried readers a quick "taste" of the day's news, sparing them the "less
efficient" method of actually turning the pages and reading the articles. Old media have little choice but to
play by the new-media rules.
Never has a communications system played so many roles in our lives—or exerted such broad influence
over our thoughts—as the Internet does today. Yet, for all that's been written about the Net, there's been
little consideration of how, exactly, it's reprogramming us. The Net's intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick
Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic
series of experiments aimed at improving the efficiency of the plant's machinists. With the approval of
Midvale's owners, he recruited a group of factory hands, set them to work on various metalworking
machines, and recorded and timed their every movement as well as the operations of the machines. By
breaking down every job into a sequence of small, discrete steps and then testing different ways of
performing each one, Taylor created a set of precise instructions—an "algorithm," we might say today
—for how each worker should work. Midvale's employees grumbled about the strict new regime,
claiming that it turned them into little more than automatons, but the factory's productivity soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last
found its philosophy and its philosopher. Taylor's tight industrial choreography—his "system," as he
liked to call it—was embraced by manufacturers throughout the country and, in time, around the world.
Seeking maximum speed, maximum efficiency, and maximum output, factory owners used
time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor
defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and
adopt, for every job, the "one best method" of work and thereby to effect "the gradual substitution of
science for rule of thumb throughout the mechanic arts." Once his system was applied to all acts of
manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but
of society, creating a utopia of perfect efficiency. "In the past the man has been first," he declared; "in
the future the system must be first."
Taylor's system is still very much with us; it remains the ethic of industrial manufacturing. And now,
thanks to the growing power that computer engineers and software coders wield over our intellectual
lives, Taylor's ethic is beginning to govern the realm of the mind as well. The Internet is a machine
designed for the efficient and automated collection, transmission, and manipulation of information, and
its legions of programmers are intent on finding the "one best method"—the perfect algorithm—to carry
out every mental movement of what we've come to describe as "knowledge work." Google's
headquarters, in Mountain View, California—the Googleplex—is the Internet's high church, and the
religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is
"a company that's founded around the science of measurement," and it is striving to "systematize
everything" it does. Drawing on the terabytes of behavioral data it collects through its search engine and
other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and
it uses the results to refine the algorithms that increasingly control how people find information and
extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the
mind.
The company has declared that its mission is "to organize the world's information and make it
universally accessible and useful." It seeks to develop "the perfect search engine," which it defines as
something that "understands exactly what you mean and gives you back exactly what you want." In
Google's view, information is a kind of commodity, a utilitarian resource that can be mined and
processed with industrial efficiency. The more pieces of information we can "access" and the faster we
can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while
pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their
search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our
brains. "The ultimate search engine is something as smart as people—or smarter," Page said in a speech
a few years back. "For us, working on search is a way to work on artificial intelligence." In a 2004
interview with Newsweek, Brin said, "Certainly if you had all the world's information directly attached
to your brain, or an artificial brain that was smarter than your brain, you'd be better off." Last year, Page
told a convention of scientists that Google is "really trying to build artificial intelligence and to do it on
a large scale."
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities
of cash at their disposal and a small army of computer scientists in their employ. A fundamentally
scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt's words, "to
solve problems that have never been solved before," and artificial intelligence is the hardest problem out
there. Why wouldn't Brin and Page want to be the ones to crack it?
Still, their easy assumption that we'd all "be better off" if our brains were supplemented, or even
replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a
mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google's
world, the world we enter when we go online, there's little place for the fuzziness of contemplation.
Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated
computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into the
workings of the Internet, it is the network's reigning business model as well. The faster we surf across the
Web—the more links we click and pages we view—the more opportunities Google and other companies
gain to collect information about us and to feed us advertisements. Most of the proprietors of the
commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit
from link to link—the more crumbs, the better. The last thing these companies want is to encourage
leisurely reading or slow, concentrated thought. It's in their economic interest to drive us to distraction.
Maybe I'm just a worrywart. Just as there's a tendency to glorify technological progress, there's a
countertendency to expect the worst of every new tool or machine. In Plato's Phaedrus, Socrates
bemoaned the development of writing. He feared that, as people came to rely on the written word as a
substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the
dialogue's characters, "cease to exercise their memory and become forgetful." And because they would
be able to "receive a quantity of information without proper instruction," they would "be thought very
knowledgeable when they are for the most part quite ignorant." They would be "filled with the conceit
of wisdom instead of real wisdom." Socrates wasn't wrong—the new technology did often have the
effects he feared—but he was shortsighted. He couldn't foresee the many ways that writing and reading
would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg's printing press, in the 15th century, set off another round of teeth gnashing.
The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to
intellectual laziness, making men "less studious" and weakening their minds. Others argued that cheaply
printed books and broadsheets would undermine religious authority, demean the work of scholars and
scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes,
"Most of the arguments made against the printing press were correct, even prescient." But, again, the
doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as
Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a
golden age of intellectual discovery and universal wisdom. Then again, the Net isn't the alphabet, and
although it may replace the printing press, it produces something altogether different. The kind of deep
reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from
the author's words but for the intellectual vibrations those words set off within our own minds. In the
quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of
contemplation, for that matter, we make our own associations, draw our own inferences and analogies,
foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
If we lose those quiet spaces, or fill them up with "content," we will sacrifice something important not
only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently
described what's at stake:
I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and
"cathedral-like" structure of the highly educated and articulate personality—a man or woman who
carried inside themselves a personally constructed and unique version of the entire heritage of the West.
[But now] I see within us all (myself included) the replacement of complex inner density with a new kind
of self—evolving under the pressure of information overload and the technology of the "instantly
available."
As we are drained of our "inner repertory of dense cultural inheritance," Foreman concluded, we risk
turning into "'pancake people'—spread wide and thin as we connect with that vast network of
information accessed by the mere touch of a button."
I'm haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer's emotional
response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike
pleading with the astronaut—"I can feel it. I can feel it. I'm afraid"—and its final reversion to what can
only be called a state of innocence. HAL's outpouring of feeling contrasts with the emotionlessness that
characterizes the human figures in the film, who go about their business with an almost robotic
efficiency. Their thoughts and actions feel scripted, as if they're following the steps of an algorithm. In
the world of 2001, people have become so machinelike that the most human character turns out to be a
machine. That's the essence of Kubrick's dark prophecy: as we come to rely on computers to mediate our
understanding of the world, it is our own intelligence that flattens into artificial intelligence.
Diagnostics
Instructions: Look at the picture. Do you think there will come a time in the future when humans will no
longer be needed? Write your brief opinion in the space provided.
Can you imagine a future without the human race? Do you think that robots and machines can replace
humans? Do you believe that there will come a time when human existence will be at the mercy of robots and
machines? Is it also possible that medical breakthroughs in the future may go so terribly wrong that a strain of
drug-resistant viruses could wipe out the entire human race?
For some, imagining a future without humans is nearly synonymous with the end of the world. Many choose
not to speculate about a future where humans cease to exist while the world remains. However, a dystopian
society devoid of human presence is the subject of many works in literature and film. The possibility of such
a society is also a constant topic of debate.
In April 2000, William Nelson Joy, an American computer scientist and chief scientist of Sun Microsystems,
wrote an article for Wired magazine entitled Why the Future Doesn't Need Us. In his article, Joy warned
against the rapid rise of new technologies. He explained that 21st-century technologies, namely genetics,
nanotechnology, and robotics (GNR), are becoming so powerful that they can potentially bring about new
classes of accidents, threats, and abuses. He further warned that these dangers are even more pressing
because they do not require large facilities or even rare raw materials; knowledge alone will make them
potentially harmful to humans.
Joy argued that robotics, genetic engineering, and nanotechnology pose much greater threats than
technological developments that have come before. He particularly cited the ability of nanobots to self-
replicate, which could quickly get out of control. In the article, he cautioned humans against overdependence
on machines. He also stated that if machines are given the capacity to decide on their own, it will be
impossible to predict how they might behave in the future. In this case, the fate of the human race would be
at the mercy of machines.
Joy also voiced his apprehension about the rapid increase of computer power. He was likewise concerned
that computers will eventually become more intelligent than humans, thus ushering societies into dystopian
visions, such as robot rebellions. To illustrate his concern, Joy drew from Theodore Kaczynski's
Unabomber Manifesto, in which Kaczynski described how the unintended consequences of the design and
use of technology are clearly related to Murphy's Law: "Anything that can go wrong, will go wrong."
Kaczynski argued further that overreliance on antibiotics led to the great paradox of emerging
antibiotic-resistant strains of dangerous bacteria. The introduction of dichlorodiphenyltrichloroethane (DDT)
to combat malarial mosquitoes, for instance, only gave rise to malarial parasites with multidrug-resistant genes.
Since the publication of the article, Joy's arguments against 21st-century technologies have received both
criticism and expressions of shared concern. Critics dismissed Joy's article for deliberately presenting
information in an imprecise manner that obscures the larger picture or state of things. For one, John Seely
Brown and Paul Duguid (2001), in their article A Response to Bill Joy and the Doom-and-Gloom
Technofuturists, criticized Joy for failing to consider social factors and for deliberately focusing on only one
part of the larger picture. Others went as far as accusing Joy of being a neo-Luddite, someone who rejects
new technologies and shows technophobic leanings.
As a reading material, Joy's article tackles the unpleasant and uncomfortable possibilities that a senseless
approach to scientific and technological advancement may bring. Whether Joy's propositions are a real
possibility or an absolute moonshot, it is difficult to avoid thinking of a future that will no longer need the
human race. It makes thinking about the roles and obligations of every stakeholder a necessary component
of scientific and technological advancement. In this light, it is preeminently necessary that the scientific
community, governments, and businesses engage in a discussion to determine safeguards for humans
against the potential dangers of science and technology.
1. Difficult Concepts
a. ________________________________________________________________________________
________________________________________________________________________________
b. ________________________________________________________________________________
________________________________________________________________________________
c. ________________________________________________________________________________
________________________________________________________________________________
2. Learning Insights
a. Before reading the article, I thought that
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
However, after reading the article, I now think/learned that
_________________________________________________________________________________
_________________________________________________________________________________
________________________________________________________________________________
b. Before reading the article, I thought that
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
However, after reading the article, I now think/learned that
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
c. Before reading the article, I thought that
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
However, after reading the article, I now think/learned that
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
3. Discussion Questions
a. ________________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
b.________________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
c. ________________________________________________________________________________
______________________________________________________________________________
_____________________________________________________________________________