CIS 200

Ethics and Morality
 The term Ethics is derived from Ethos
(Greek), and Morality from Mores (Latin).
 Both terms translate roughly into notions
affecting "custom," "habit," and "behavior."
 Ethics is defined as the study of morality,
which raises two questions:
1) What is morality?
2) What is the study of morality?
Dilemmas and Moral Issues
 Before defining “morality” and a “moral
system,” it is worth noting that not every
moral issue (or moral problem) that arises is
(also) necessarily a moral dilemma.
 We sometimes tend to confuse the phrases
moral issue and moral dilemma.
 A dilemma is a situation where one must
choose between two undesirable options,
which often leads to one’s having to choose
between “the lesser of two evils.”
Moral Dilemmas vs. Moral Issues
(Continued)
 It is also important to note that not
every dilemma is moral in nature.
 The example of the “runaway trolley”
(Scenario 2-1 in the textbook) illustrates
a moral dilemma.
SCENARIO 2–1: The Runaway
Trolley: A Classic Moral Dilemma
 Imagine that you are driving a trolley and that all of a sudden you
realize that the trolley’s brake system has failed. Further imagine that
approximately 80 meters ahead of you on the trolley track (a short
distance from the trolley's station) five crewmen are working on a
section of the track on which your trolley is traveling. You realize that
you cannot stop the trolley and that you will probably not be able to
prevent the deaths of the five workers. But then you suddenly realize
that you could “throw a switch” that would cause the trolley to go on
to a different track. You also happen to notice that one person is
working on that track. You then realize that if you do nothing, five
people will likely die, whereas if you engage the switch to change
tracks, only one person would likely die.
Moral Dilemmas vs. Moral Issues
(Continued)
 Most of the moral concerns/problems
that we examine in this text are moral
issues (as opposed to moral dilemmas).
What is Morality?
 Morality can be defined as a system of rules for guiding
human conduct, and principles for evaluating those rules.
 Two points are worth noting in this definition:
i. morality is a system;
ii. it is a system comprised of moral rules and principles.
 Moral rules can be understood as "rules of
conduct," which are very similar to "policies."
Rules of Conduct as “Policies”
 James Moor (2004) notes that policies can range
from formal laws to informal, implicit guidelines
for actions.
 Moor suggests that every act can be viewed as
an instance of a policy.
 There are two kinds of rules of conduct:
1) Directives for guiding our conduct as individuals
(at the micro-level)
2) Social Policies framed at the macro-level.
Directives
 Directives are rules (of conduct) that guide our
actions, and thus direct us to behave in certain
ways.
 Rules such as
 "Do not steal"
 "Do not harm others"
are examples of rules of conduct that direct us in
our individual moral choices at the "micro-ethical"
level (i.e., the level of individual behavior).
Social Policies
 Some rules of conduct guide our actions at the
"macro-ethical" level by helping us frame social
policies.
 Rules such as
 "Proprietary software should not be copied"
 “Software that can be used to invade the privacy of
users should not be developed"
are both examples of rules of conduct that arise out
of our social policies.
 Notice the correlation between directives and social
policies (e.g., rules involving stealing).
Principles
 The rules of conduct in a moral system are
evaluated by way of standards called
principles.
 For example, the principle of "social utility"
(i.e., promoting the greatest good for the
greatest number) can be used to evaluate a
social policy such as
 “Proprietary software should not be copied
without permission."
Principles (Continued)
 In the previous example, the principle of
social-utility functioned as a kind of "litmus
test" for determining whether the policy
pertaining to proprietary software could be
justified on moral grounds.
 A policy, X, could be justified (on utilitarian
grounds) by showing that following Policy X
(i.e., not allowing the unauthorized copying
of software) would produce more overall
social utility (greater good for society).
Figure 2-1: Basic Components of
a Moral System
 Rules of Conduct (action-guiding rules, in the form of
either directives or social policies). There are two types:
1) Rules for guiding the actions of individuals (micro-level
ethical rules). Examples include directives such as "Do not
steal" and "Do not harm others."
2) Rules for establishing social policies (macro-level ethical
rules). Examples include social policies such as "Software
should be protected" and "Privacy should be respected."
 Principles of Evaluation (evaluative standards used to
justify rules of conduct). Examples include principles such
as social utility and justice as fairness.
Bernard Gert’s Scheme of a Moral
System
 According to Bernard Gert (2005),
morality is a system that is:
 like a game, but more like an informal
game (e.g., a game of cards)
 public (open and accessible to all)
 rational (open to reason)
 impartial (as illustrated in Gert’s
“blindfold of justice”, fair to all).
Table 2-1: Four Features of
Gert’s Moral System
 Public: The rules are known to all of the members.
 Informal: The rules are informal, not like formal laws in
a legal system.
 Rational: The system is based on principles of logical
reason accessible to all its members.
 Impartial: The system is not partial to any one group or
individual.
Figure 2-2: Components of a
Moral System
 Grounds for justifying moral principles: religion,
philosophy, and law.
 Moral principles and rules: principles of evaluation
and rules of conduct.
 Source of moral rules: core values.
The Role of Values in a Moral
System
 The term value comes from the Latin valere,
which translates roughly into having worth or
being of worth (Pojman, 2006).
 Values can be viewed as objects of our
desires or interests.
 Examples of values include very general
notions such as happiness, love, freedom, etc.
 Moral principles are ultimately derived from a
society's system of values.
Moral vs. Non-Moral Values
 Morals and values are not necessarily
identical.
 Values can be either moral or non-moral.
 Reason informs us that it is in our interest to
develop values that promote our own
survival, happiness, and flourishing as
individuals.
 When used to further only our own self-
interests, these values are not necessarily
moral values.
Moral Values
 Once we bring in the notion of impartiality,
we begin to take the "moral point of view."
 When we frame the rules of conduct in a
moral system, we articulate a system of
values having to do with notions such as
autonomy, fairness, justice, etc., which are
moral values.
 Our basic moral values are derived from core
non-moral values.
Three Schemes for Grounding the
Evaluative Rules in a Moral System
 The principles are grounded in one of three
different kinds of schemes:
 religion;
 law;
 philosophical ethics.
 We will see how a particular moral principle
or rule – e.g., “Do not steal” – can be
justified from the vantage point of each
scheme.
Approach #1: Grounding Moral
Principles in a Religious System
 Consider the following rationale for why
stealing is morally wrong:

Stealing is wrong because it offends God or because it
violates one of God's (Ten) Commandments.
 From the point of view of institutionalized religion,
stealing is wrong because it offends God or because it
violates the commands of a supreme authority.
Approach #2: Grounding Moral
Principles in a Legal System
 An alternative rationale would be:
Stealing is wrong because it violates the law.
 Here the grounds for determining why stealing is
wrong are not tied to religion.
 If stealing violates a law in a particular nation
or jurisdiction, then the act of stealing can be
declared to be wrong independent of any
religious beliefs that one may or may not
happen to have.
Approach #3: Grounding Moral Principles
in a Philosophical System of Ethics
 A third way of approaching the question is:
Stealing is wrong because it is wrong
(independent of any form of external authority or
any external sanctions).

 On this view, the moral "rightness" or "wrongness" of
stealing is not grounded in some external authoritative
source.
 It does not appeal to an external authority,
either theological or legal, for justification.
Approach # 3 Continued
 Many philosophers and ethicists have argued
that, independent of either supernatural or
legal authorities, reason alone is sufficient to
show that stealing is wrong.
 They argue that reason can inform us that
there is something either in the act of
stealing itself, or in the consequences that
result from this kind of act, that makes
stealing morally wrong.
Approach # 3 Continued
 In the case of both law and religion, specific
sanctions against stealing exist in the form of
punishment.
 In the case of (philosophical) ethics, the only
sanction would be in the form of social
disapproval, and possibly social ostracism.
 That is, there is no punishment in a formal sense.
 External conditions or factors, in the form of
sanctions, are irrelevant.
The Structure of Ethical Theories
 An essential feature of theory in general is
that it guides us in our investigations.
 In science, theory provides us with some
general principles and structures to
analyze our data.
 The purpose of ethical theory, like scientific
theory, is to provide a framework for
analyzing issues.
 Ideally, a good theory should be coherent,
consistent, comprehensive, and systematic.
The Structure of Ethical Theories
(Continued)
 To be coherent, the individual elements of the
theory must fit together to form a unified whole.
 For a theory to be consistent, its component
parts cannot contradict each other.
 To be comprehensive, a theory must be able to
apply broadly to a wide range of actions.
 And to be systematic, the theory cannot simply
address individual symptoms peculiar to specific
cases, while ignoring general principles that
would apply in similar cases.
Why Do we Need Ethical
Theories?
 Ethical theories can help us to avoid inconsistent
reasoning in our thinking about moral issues and
moral dilemmas.
 Recall again Scenario 2-1 (in the textbook), but now
imagine a variation of it in which victims of the trolley
accident are taken to the hospital and only limited
resources are available to the accident victims.
 Which moral principle would you use in deciding who
receives medical assistance and who does not?
 Can you also apply that principle consistently across
similar cases?
Four Kinds of Ethical Theories
Consequence-based Ethical Theories
(Utilitarianism)
 Two forms: Act Utilitarianism and Rule Utilitarianism.
Consequence-based Ethical
Theories
 Some argue that the primary goal of a moral
system is to produce desirable consequences
or outcomes for its members.
 On this view, the consequences (i.e., the ends
achieved) of actions and policies provide the ultimate
standard against which moral decisions must be
evaluated.
 So when choosing between acts A and B, the morally
correct action will be the one that produces the most
desirable outcome.
Consequence-based Theories
(Continued)
 In determining the best outcome, we
can ask the question, whose outcome?
 Utilitarians argue that it is the consequences
for the greatest number of individuals, or the
majority, in a given society that deserve
consideration in moral deliberation.
Consequence-based Theories:
(Utilitarianism Continued)
 According to the utilitarian theory:
An individual act (X) or a social policy
(Y) is morally permissible if the
consequences that result from (X) or
(Y) produce the greatest amount of
good for the greatest number of
persons affected by the act or policy.
Consequence-based Theories:
(Utilitarianism Continued)
 Utilitarians draw on two key points in
defending their theory:
I. the principle of social utility should be used to
determine morality;
II. social utility can be measured by the amount of
happiness produced for society as a whole.
Utilitarianism (continued)
 Utilitarians such as Jeremy
Bentham assume:
a) all people desire happiness;
b) happiness is an intrinsic good that is desired for
its own sake.
Utilitarianism (continued)
 According to John Stuart Mill:
The only possible proof showing that
something is audible is that people
actually hear it; the only possible
proof that something is visible is that
people actually see it; and the only
possible proof that something is desired is that
people actually desire it.
Act Utilitarianism
 According to act utilitarians:
An act, X, is morally permissible if the consequences
produced by doing X result in the greatest good for the
greatest number of persons affected by X.
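 As a rough illustration (not from the textbook), the act-utilitarian decision procedure can be sketched as a simple computation: estimate the utility each candidate act produces for every person affected, total it, and choose the act with the greatest sum. The acts and utility numbers below are hypothetical.

```python
# A minimal sketch of an act-utilitarian decision procedure.
# The candidate acts and per-person utility estimates are hypothetical.

def total_utility(utilities_per_person):
    """Sum the estimated utility an act produces for everyone affected."""
    return sum(utilities_per_person.values())

def act_utilitarian_choice(acts):
    """Return the act whose consequences yield the greatest total utility."""
    return max(acts, key=lambda act: total_utility(acts[act]))

# Hypothetical utility estimates for two candidate acts.
acts = {
    "act_A": {"alice": 5, "bob": 2, "carol": -1},  # total utility = 6
    "act_B": {"alice": 1, "bob": 1, "carol": 1},   # total utility = 3
}

print(act_utilitarian_choice(acts))  # -> act_A
```

 A rule utilitarian would run the same kind of calculation, but over the consequences of everyone following a general rule rather than over a single act.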
Criticism of Act Utilitarianism
 Utilitarianism's critics reject the emphasis on
the consequences of individual acts.
 They point out that in our day-to-day
activities, we tend not to deliberate on each
individual action as if that action were unique.
 Instead, they argue that we are inclined to
deliberate on the basis of certain principles or
general rules that guide our behavior.
Criticism of Act Utilitarianism
(continued)
 Consider some principles that may guide your
behavior as a consumer.
 Each time that you enter a store, do you ask
yourself the following question: “Shall I steal
item X at this particular time?"
 Or, have you already formulated certain
general principles that guide your individual
actions, such as a principle to the effect: "It is
never morally permissible to steal."
Rule Utilitarianism
 Some utilitarians argue that it is the
consequences that result from following rules
or principles, not the consequences of
individual acts, that are important.
 According to rule utilitarianism:
An act, X, is morally permissible if the
consequences of following the general rule
(Y), of which act X is an instance, would
bring about the greatest good for the
greatest number.
Criticism of Rule Utilitarianism
 Critics tend to attack one or both of the
following aspects of utilitarian theory:
I. morality is ultimately tied to happiness or
pleasure;
II. morality can ultimately be determined by
consequences (of either acts or policies).
 Critics of utilitarianism argue that morality can
be grounded neither in consequences nor in
happiness.
Duty-based Ethical Theories
(Deontology)
 Two forms: Act Deontology and Rule Deontology.
Duty-based Ethical Theories
 Theories in which the notion of duty, or obligation,
serves as the foundation for morality are called
deontological theories.
 They derive their name from the Greek root deon,
which means duty.
Duty-based Ethical Theories
 Immanuel Kant (1724-1804) argued that
morality must ultimately be grounded in the
concept of duty or obligations that humans
have to one another.
 For Kant, morality can never be grounded in
the consequences of human actions.
 In Kant’s view, morality has nothing to do
with the promotion of happiness or the
achievement of desirable consequences.
Duty-based Ethical Theories
(Continued)
 Kant rejects utilitarianism in particular, and all
consequentialist ethical theories in general.

 For example, he points out that, in some instances,
performing our duties may result in our being
unhappy and may not necessarily lead to
consequences that are considered desirable.
Duty-based Ethical Theories
(Continued)
 Kant defends his ethical theory on
the grounds that:
1) humans are rational, autonomous agents;
2) human beings are ends-in-themselves, and not
means to ends.
 Kant argues that our rational nature
reveals to us that we have certain
duties or obligations to each other as
“rational beings” in a moral community.
 Kant argues that each individual,
regardless of his or her wealth,
intelligence, privilege, or circumstance,
has the same moral worth.
Rule Deontology
 For Kant, morality conforms to a standard or
objective test, a principle that he calls the
Categorical Imperative.
 Kant's imperative has a number of variations,
one of which directs us to:
Act always on that maxim or principle (or
rule) which ensures that all individuals will
be treated as ends-in-themselves and
never merely as a means to an end.
Rule Deontology (Continued)
 Another variation of the Categorical
Imperative can be paraphrased as:
Always act on that maxim or principle (or rule) which
can be universally binding, without exception, for all
human beings.
 SCENARIO 2–4: Making an Exception for Oneself
 Bill, a student at Technical University, approaches his philosophy
instructor, Professor Kanting, after class one day to turn in a paper that
is past due. Professor Kanting informs Bill that since the paper is late,
he is not sure that he will accept it. But Bill replies to Professor Kanting
in a way that suggests that he is actually doing his professor a favor by
turning in the paper late. Bill reasons that if he had turned in the paper
when it was due, Professor Kanting would have been swamped with
papers. Now, however, Kanting will be able to read Bill’s paper in a
much more leisurely manner, without having the stress of so many
papers to grade at once. Professor Kanting then tells Bill that he
appreciates his concern about his professor's well-being, but he asks
Bill to reflect a bit on his rationale in this incident. Specifically, Kanting
asks Bill to imagine a case in which all of the students in his class,
fearing that their professor would be overwhelmed with papers arriving
at the same time, decided to turn their papers in one week late.
Categorical Imperative
 Kant believed that if everyone followed the
categorical imperative, we would have a
genuinely moral system.
 It would be a system based on two essential
principles:
i. universality,
ii. impartiality.
 In such a system, every individual would be
treated fairly since the same rules would
apply universally to all persons.
Criticisms of Rule Deontology
 Kant's categorical imperative has been criticized
because it cannot help us in cases where we
have two or more conflicting duties.
 For example, we have duties to both keep
promises and to tell the truth, and sometimes we
encounter situations in which we are required
either to:
a) tell the truth and break a promise or
b) to keep a promise and tell a lie.
 But Kant does not provide us with a mechanism
for resolving such conflicts.
Act Deontology
 David Ross argues that when two or more moral duties
clash, we have to look at the individual situation (or
“circumstance”) to determine which duty is overriding.
 Like act utilitarians, Ross stresses the importance of
analyzing individual actions and situations to determine
the morally appropriate course of action to take.
 Unlike utilitarians, Ross believes that we must not
consider the consequences of actions when deliberating
over which course of action morally trumps or
outweighs another.
Act Deontology (Continued)
 Like Kant (and deontologists in
general), Ross believes that the notion
of duty is ultimate criterion for
determining morality.
 Unlike Kant, however, Ross does not
believe that blind adherence to certain
maxims or rules can work in every case
for determining which duties we must
ultimately carry out.
Act Deontology (Continued)
 Ross believes that we can determine what our
overriding duty is in a particular situation by using
a two-step deliberative process, in which we:
a) reflect on the competing prima facie (i.e.,
self-evident) duties,
b) weigh the evidence at hand to determine which
course of action would be required in a particular
circumstance.
SCENARIO 2–5:A Dilemma Involving
Conflicting Duties
 You promise to meet a classmate one evening at
7:00 in the college library to study together for a
midterm exam for a computer science course you are
taking. While driving in your car to the library, you
receive a call on your cell phone informing you that
your grandmother has been taken to the hospital and
that you should go immediately to the hospital. You
consider calling your classmate from your car, but
you realize that you don’t have his phone number.
You also realize that you don’t have time to try to
reach your classmate by e-mail. What should you do
in this case?
 Table 2-2 (in the textbook) summarizes key features that
differentiate act and rule utilitarianism and act and rule
deontology.
Contract-based Ethical Theories
Contract-based Ethical Theories
 From the perspective of social-contract
theory, a moral system comes into
being by virtue of certain contractual
agreements between individuals.
 One of the earliest versions of a
contract-based ethical theory can be
found in the writings of Thomas
Hobbes.
Contract-based Ethical Theories
(Continued)
 One virtue of the social-contract model is that
it gives us a motivation for being moral.
 For example, it is in our individual self-
interest to develop a moral system with rules
(Pojman 2006).
 This type of motivation for establishing a
moral system is absent in both the utilitarian
and deontological theories.
 So a contract-based ethical theory would
seem to have one advantage over both.
Criticisms of Social Contract Theory
 Critics point out that social-contract theory
provides for only a minimalist morality.
 For example, it is minimalist in the sense that we
are obligated to behave morally only where an
explicit or formal contract exists (Pojman 2006).
 So if I have no express contract with you, or if a
country like the U.S. has no explicit contract with
a developing nation, there is no moral obligation
for me to help you, or no obligation for the U.S.
to come to the aid of that developing nation.
Criticism of Social Contract
Theory (Continued)
 We can think of many situations involving
morality where there are no express contracts
or explicit laws describing our obligations to
each other.
 Most of us also believe that in at least some
of these cases, we are morally obligated to
help others when it is in our power to do so.
 Consider the classic case involving Kitty
Genovese in New York in the 1960s.
Criticism of Social Contract
Theory (Continued)
 Philosophers differentiate between two kinds of
legal rights:
 positive rights,
 negative rights.
 Having a negative right to something means
simply that one has the right not to be interfered
with in carrying out the privileges associated with
that right.
 For example, your right to vote and your right to
own a computer are both negative rights.
Positive vs. Negative Rights
 The holder of a negative right has the right
(and the expectation) not to be interfered
with in exercising that right – e.g., the right
to vote in a particular election or the right to
purchase a computer.
 As the holder of these rights, you cannot
demand (or even expect) that others must
either physically transport you to the voting
polls, or provide you with a computer if you
cannot afford to purchase one.
Positive and Negative Rights
(Continued)
 Positive rights are very rare and are much
more difficult to justify philosophically.
 In the U.S., one's right to receive an
education is a positive right.
 Because all American citizens are entitled to
such an education, they must be provided
with a free public education.
 If education requires Internet access at
home, should students also be provided with
free Internet access?
Character-based Ethical Theories
(Virtue Ethics)
 Being a moral person vs. merely following moral rules.
 "What kind of person should I be?" vs. "What should I
do in such and such a situation?"
 Acquiring the correct habits.
Character-based Ethical Theories
 Virtue ethics (also sometimes called "character
ethics") ignores the roles that consequences,
duties, and social contracts play in moral
systems in determining the appropriate
standard for evaluating moral behavior.
 Virtue ethics focuses on criteria having to do
with the character development of individuals
and their acquisition of good character traits
from the kinds of habits they develop.
Character-based Ethical Theory
(continued)
 Virtue ethics can be traced back to Plato and
Aristotle.
 On this view, becoming an ethical person
requires more than simply memorizing and
deliberating on certain kinds of rules.
 What is also needed, in Aristotle's view, is that
people develop certain virtues.
 Aristotle believed that to be a moral person,
one had to acquire the right virtues
(strengths or excellences).
Character-based Ethical Theories
(Continued)
 Aristotle believed that through the proper
training and acquisition of “good habits” and
good character traits, one could achieve
moral virtues, such as temperance, courage,
and so forth, that are needed to "live well."
 According to Aristotle, a moral person is one
who is necessarily disposed to do the right
thing.
Character-based Ethical Theories
(Continued)
 Instead of asking, “What should I do in such
and such a situation?", a virtue ethicist asks:
“What kind of person should I be?"
 The emphasis is on being a moral person -
not simply understanding what moral rules
are and how they apply in certain situations.
 While deontological and utilitarian theories
are "action-oriented" and "rule-oriented,"
virtue ethics is "agent-oriented" because it is
centered on the agent him/her-self.
Criticism of Character-based
Ethical Theories
 Character-based ethical systems tend to
flourish in cultures where there is a greater
emphasis placed on community life than on
individuals.
 In the West, since the Enlightenment, more
emphasis has been placed on the importance
of individual autonomy and individual rights.
 In the Ancient Greek world of Aristotle's time,
the notion of community was paramount.
Table 2-3: Four Types of Ethical
Theory
 Consequence-based (Utilitarian). Advantage: stresses the
promotion of happiness and utility. Disadvantage: ignores
concerns of justice for the minority population.
 Duty-based (Deontology). Advantage: stresses the role of duty
and respect for persons. Disadvantage: underestimates the
importance of happiness and social utility.
 Contract-based (Rights). Advantage: provides a motivation for
morality. Disadvantage: offers only a minimal morality.
 Character-based (Virtue). Advantage: stresses moral
development and moral education. Disadvantage: depends on
homogeneous community standards for morality.
Can a Comprehensive Ethical Theory Be Framed
to Combine Two or More Traditional Theories?

 Some ethicists have tried to combine aspects of
two or more theories, such as consequentialism
and deontology.
 Moor (2004) has devised a framework called
“Just Consequentialism” that incorporates
aspects of:
a) deontology (justice),
b) utilitarianism (consequences).
 Moor’s theory has two steps or stages.
James Moor’s Ethical Framework of Just
Consequentialism: A Two-Step Strategy

1. Deliberate over various policies from an impartial point of view to
determine whether they meet the criteria for being ethical policies. A
policy is ethical if it:
a. does not cause any unnecessary harms to individuals and groups, and
b. supports individual rights, the fulfilling of duties, etc.
2. Select the best policy from the set of just policies arrived at in the
deliberation stage by ranking ethical policies in terms of benefits and
justifiable harms. In doing this, be sure to:
a. weigh carefully between the good consequences and the bad
consequences in the ethical policies, and
b. distinguish between disagreements about facts and disagreements
about principles and values, when deciding which particular ethical
policy should be adopted.
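 A minimal sketch of how Moor's two-step strategy could be operationalized: stage 1 filters out policies that fail the impartial justice test, and stage 2 ranks the surviving just policies by net benefit. The policy names, attributes, and scores below are hypothetical illustrations, not part of Moor's framework.

```python
# A minimal sketch of Moor's two-step "just consequentialism" strategy.
# Policy names, attributes, and scores are hypothetical illustrations.

policies = [
    {"name": "P1", "causes_unjust_harm": False, "respects_rights": True, "benefit": 8, "harm": 4},
    {"name": "P2", "causes_unjust_harm": True,  "respects_rights": True, "benefit": 9, "harm": 1},
    {"name": "P3", "causes_unjust_harm": False, "respects_rights": True, "benefit": 6, "harm": 1},
]

# Step 1 (deliberation): keep only policies that pass the impartial justice test.
just_policies = [p for p in policies
                 if not p["causes_unjust_harm"] and p["respects_rights"]]

# Step 2 (selection): rank the just policies by weighing benefits against harms.
best = max(just_policies, key=lambda p: p["benefit"] - p["harm"])
print(best["name"])  # -> P3 (P2 is excluded at step 1 despite its high benefit)
```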
Privacy Concerns
 Privacy concerns now arise in many contexts:
 Life
 Commerce
 Health Care
 Education
 Work
 Recreation
Privacy and Cybertechnology
 Privacy issues involving cybertechnology
affect all of us, regardless of whether we
have ever owned or even used a
networked computer.
 Consider the amount of personal
information about us that can be acquired
from our commercial transactions in a
bank or in a (physical) store.
Privacy and Cybertechnology
(Continued)
 Also, consider that closed circuit television cameras
(CCTVs) located in public places and in shopping
malls record many of your daily movements as you
casually stroll through those environments.
 Current Web-based applications such as Google
Street View (a feature of Google Earth and Google
Maps) make use of satellite cameras and global
positioning system (GPS) software that enable users
to zoom in on your house or place of employment
and potentially record information about you.
Privacy and Cybertechnology
(Continued)
 Even if you use the Internet solely for
recreational purposes, your privacy is
threatened.
 Personal data, including data about our Web-
browsing interests, can now easily be
acquired by organizations whose need for this
information is not always clear.
 A user’s personal data acquired via his/her
online activities can be sold to third parties.
Privacy and Cybertechnology
(Continued)
 Privacy concerns now affect many
aspects of our day-to-day lives – from
commerce to healthcare to work.
 So, we have categories such as:
 consumer privacy,
 medical/healthcare privacy,
 employee/workplace privacy.
Privacy and Cybertechnology
(Continued)
 Are any privacy issues unique to cybertechnology?
 Privacy concerns have been exacerbated by
cybertechnology in at least four ways, i.e., by the:
1. amount of personal information that can now be
collected;
2. speed at which personal information can now be
transferred and exchanged;
3. duration of time in which personal information can
now be retained;
4. kind of personal information (such as transactional
information) that can be acquired.
What is Personal Privacy?
 Privacy is a concept that is difficult to define.
 We sometimes speak of an individual’s privacy as
something that can be:
 lost,
 diminished,
 intruded upon,
 invaded,
 violated,
 breached.
What is Privacy? (Continued)
 Privacy is sometimes viewed in terms of something
that can be diminished (i.e., as a repository of
personal information that can be eroded gradually) or
lost altogether.
 Privacy is sometimes also construed in terms of the
metaphor of a (spatial) zone that can be intruded
upon or invaded.
 Privacy is also sometimes analyzed in terms of
concerns affecting the confidentiality of information,
which can be breached or violated.
Classic Theories of Privacy
 Traditional (or classic) privacy theories
have tended to view privacy in
connection with notions such as:
 non-intrusion (into one’s space),
 non-interference (with one’s decisions),
 having control over/restricting access to
one’s personal information.
Non-intrusion Theories of Privacy
 Non-intrusion theories view privacy as
either:
 being let alone,
 being free from government intrusion
(into one’s physical space).
 This view is also sometimes referred to
as accessibility privacy (DeCew, 1997).
Non-interference Theories of
Privacy
 Non-interference theories view privacy
in terms of freedom from interference
in making decisions.
 This perspective emerged in the 1960s, following
the Griswold v. Connecticut (U.S. Supreme Court)
case in 1965.
 This view of privacy is also sometimes
referred to as decisional privacy.
The Control and Limited Access
Theories of Informational Privacy
 Informational privacy is concerned with
protecting personal information in
computer databases.
 Most people wish to have some control
over their personal information.
 In some cases, “privacy zones” have
been set up either to restrict or limit
access to one’s personal data.
Table 5-1: Three Views of
Privacy
 Accessibility Privacy: privacy is defined in terms of
one's physically "being let alone," or freedom from
intrusion into one's physical space.
 Decisional Privacy: privacy is defined in terms of
freedom from interference in one's choices and
decisions.
 Informational Privacy: privacy is defined as control
over the flow of one's personal information, including
the transfer and exchange of that information.
A Comprehensive Account of Privacy
 Moor (2004) has articulated a privacy
theory that incorporates key elements
of the three classic theories.
Moor’s Comprehensive Theory of
Privacy
 According to Moor:
“an individual has privacy in a
situation if in that particular situation
the individual is protected from
intrusion, interference, and
information access by others.”
Moor’s Theory of Privacy
(continued)
 A key element in Moor’s definition is his
notion of a situation, which can apply to
a range of contexts or “zones.”
 For Moor, a situation can be an
“activity,” a “relationship,” or the
“storage and access of information” in a
computer or on the Internet.
Moor’s Privacy Theory
(continued)
 Moor also distinguishes between “naturally
private” and “normatively private” situations
required for having:
a) natural privacy (in a descriptive
sense);
b) a right to privacy (in a normative
sense).
Applying Moor’s Natural vs.
Normative Privacy Distinction
 Using Moor’s natural/normative
privacy distinction, we can further
differentiate between a:
 loss of privacy,
 violation of privacy.
Descriptively Private vs.
Normatively Private Situations
 Review Scenario 5-1 (in the textbook), where
Tom walks into the computer lab (when no
one else is around) and sees Mary in the lab.
 In this natural/descriptively private situation,
Mary’s privacy is lost but not violated.
 Review Scenario 5-2, where Tom peeps
through the keyhole of Mary’s apartment
door and sees Mary typing at her computer.
 In this normatively private situation, Mary’s
privacy is not only lost but is also violated.
Nissenbaum's Theory of Privacy as
“Contextual Integrity”
 Nissenbaum's privacy framework requires that
the processes used in gathering and
disseminating information
a) are "appropriate to a particular context," and
b) comply with norms that govern the flow of
personal information in a given context.
Nissenbaum’s Theory (Continued)
 Nissenbaum (2004a) refers to
these two types of informational
norms as:
 norms of appropriateness,
 norms of distribution.
Nissenbaum’s Theory (Continued)
 Norms of appropriateness determine whether a given
type of personal information is either appropriate or
inappropriate to divulge within a particular context.
 Norms of distribution restrict or limit the flow of
information within and across contexts.
 When either norm is “breached,” a violation of
privacy occurs.
 Conversely, the contextual integrity of the flow of
personal information is maintained when both kinds
of norms are "respected."
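 A minimal, hypothetical sketch of how a contextual-integrity check might be modeled: each context carries norms about which information types are appropriate and to whom they may flow, and a transfer that breaches either norm counts as a privacy violation. The contexts and norms below are invented examples, not Nissenbaum's own formalization.

```python
# A hypothetical sketch of checking "contextual integrity": a flow of personal
# information is acceptable only if it respects both the norms of
# appropriateness and the norms of distribution of its context.

NORMS = {
    "healthcare": {
        "appropriate_info": {"diagnosis", "medication"},     # norms of appropriateness
        "allowed_recipients": {"physician", "pharmacist"},   # norms of distribution
    },
    "retail": {
        "appropriate_info": {"shipping_address", "purchase_history"},
        "allowed_recipients": {"merchant"},
    },
}

def violates_contextual_integrity(context, info_type, recipient):
    norms = NORMS[context]
    inappropriate = info_type not in norms["appropriate_info"]
    misdistributed = recipient not in norms["allowed_recipients"]
    return inappropriate or misdistributed

# Sharing a diagnosis with a pharmacist respects both norms...
print(violates_contextual_integrity("healthcare", "diagnosis", "pharmacist"))  # False
# ...but passing the same diagnosis to an advertiser breaches distribution norms.
print(violates_contextual_integrity("healthcare", "diagnosis", "advertiser"))  # True
```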
Nissenbaum’s Theory (Continued)
 Like Moor’s privacy model, Nissenbaum’s
theory demonstrates why we must always
focus on the context in which information
flows, not the nature of the information itself,
in determining whether normative protection
is needed.
 Review Scenario 5-3 (in the textbook) on
Professor Robert’s seminar, which illustrates
the notion of “contextual integrity.”
Can Privacy Be Preserved in the
Digital Era?
 In 1999, Scott McNealy, CEO of Sun Microsystems,
uttered his now-famous remark to a group of reporters:
"You have zero privacy anyway. Get over it."
 Froomkin (2000), Garfinkel (2000), and others have
expressed concerns about the “death of privacy.”
 But some believe that not all has yet been lost in the
battle over privacy.
 For example, some privacy advocates staunchly
believe that we should be vigilant about retaining and
safeguarding what little privacy we may still have.
What Kind of Value is Privacy?
 Three distinct questions can be distinguished
with respect to privacy as a value:
1. Is privacy an intrinsic value, or is it an
instrumental value?
2. Is privacy universally valued, or is it valued
mainly in Western industrialized societies (where
greater importance is placed on the individual
than on the broader community)?
3. Is privacy an important social value (as well as
an individual value)?
Is Privacy an Intrinsic Value or an
Instrumental Value?
 Is privacy something that is valued for
its own sake?
 In other words, is it an intrinsic value?
 Or, is privacy valued as a means to
some further end?
 Is it merely an instrumental value?
Is Privacy an Intrinsic or an
Instrumental Value (Continued)?
 Privacy does not seem to be valued for
its own sake, and thus does not appear
to have intrinsic worth.
 But privacy also seems to be more than
merely an instrumental value because it
is necessary (rather than merely
contingent) for achieving important
human ends (Fried, 1990).
Is Privacy an Intrinsic or an
Instrumental Value (Continued)?
 Fried notes that privacy is
necessary for important human
ends such as trust and friendship.
 Moor views privacy as an
expression of a “core value” – viz.,
security, which is essential for
human flourishing.
Privacy as a Universal Value
 Privacy has at least some importance in all
societies, but it is not valued the same in all
cultures.
 For example, privacy tends to be less valued
in many non-Western nations, as well as in
many rural societies in Western nations.
 Privacy also tends to be less valued in some
democratic societies where national security
and safety are considered more important
than individual privacy (e.g., as in Israel).
Privacy as an Important Social
Value
 Priscilla Regan (1995) notes that we tend to
underestimate the importance of privacy as a
social value (as well as an individual value).
 Regan believes that if we frame the privacy
debate in terms of privacy as a social value
(essential for democracy), as opposed to an
individual good, the importance of privacy is
better understood.
Cybertechnology-related
Techniques that Threaten Privacy
 We examine three techniques that threaten privacy:
1) data-gathering techniques used to collect and record
personal information, often without the knowledge
and consent of users.
2) data-exchanging techniques used to transfer and
exchange personal data across and between
computer databases, typically without the knowledge
and consent of users.
3) data-mining techniques used to search for patterns
implicit in large databases in order to generate
consumer profiles based on behavioral patterns
discovered in certain groups.
Cybertechnology Techniques
Used to Gather Personal Data
 Personal data has been gathered at
least since Roman times (census data).
 Roger Clarke uses the term dataveillance to
capture two techniques made possible by
cybertechnology:
a) surveillance (data-monitoring),
b) data-recording.
Internet Cookies as a Surveillance
Technique
 “Cookies” are files that Web sites send to and
retrieve from the computers of Web users.
 Cookie technology enables Web site owners
to collect data about those who access their
sites.
 With cookies, information about one’s online
browsing preferences can be “captured”
whenever a person visits a Web site.
Cookies (Continued)
 The data recorded via cookies is stored on a
file placed on the hard drive of the user's
computer system.
 The information can then be retrieved from
the user's system and resubmitted to a Web
site the next time the user accesses that site.
 The exchange of data typically occurs without
a user's knowledge and consent.
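 In concrete terms, a cookie is a small name-value record that the server asks the browser to store and return on later visits. The sketch below, using Python's standard http.cookies module, shows the general shape of that exchange; the cookie names and values are made up.

```python
# A minimal sketch of the cookie exchange, using Python's standard library.
# The cookie names and values are hypothetical.
from http.cookies import SimpleCookie

# 1. On a first visit, the site sends Set-Cookie headers that the browser
#    stores in a file on the user's machine.
server_cookie = SimpleCookie()
server_cookie["visitor_id"] = "abc123"
server_cookie["preferred_category"] = "sports"
print(server_cookie.output())
# Set-Cookie: preferred_category=sports
# Set-Cookie: visitor_id=abc123

# 2. On the next visit, the browser sends the stored values back in a Cookie
#    header, allowing the site to recognize the user and recall preferences.
returned = SimpleCookie()
returned.load("visitor_id=abc123; preferred_category=sports")
print(returned["preferred_category"].value)  # -> sports
```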
Can the Use of Cookies be
Defended?
 Many proprietors of Web sites that use
cookies maintain that they are performing a
service for repeat users of their sites by
customizing a user's means of information
retrieval.
 For example, some point out that, because
of cookies, they are able to provide a user
with a list of preferences for future visits to
that Web site.
Arguments Against Using Cookies
 Some privacy advocates argue that
monitoring and recording an individual's
activities while he or she visits a Web site
violates privacy.
 Some also worry that information
gathered about a user via cookies can
eventually be acquired by or sold to
online advertising agencies.
RFID Technology as a
Surveillance Technique
 RFID (Radio Frequency IDentification) consists of
a tag (microchip) and a reader:
 The tag has an electronic circuit, which stores
data, and an antenna that broadcasts data by radio
waves in response to a signal from a reader.
 The reader contains an antenna that receives the
radio signal, and a demodulator that transforms
the analog radio signal into data suitable for any
computer processing that will be done (Lockton
and Rosenberg, 2005).
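 To make the tag/reader interaction concrete, here is a toy simulation (not real RFID firmware): the reader sends an interrogation signal and every tag in range responds with its stored identifier and data, which is what makes silent tracking possible.

```python
# A toy simulation of the tag/reader interaction described above.
# Real RFID uses radio modulation; here the "signal" is just a method call.

class RFIDTag:
    def __init__(self, tag_id, data):
        self.tag_id = tag_id  # identifier stored in the tag's circuit
        self.data = data      # e.g., product or document information

    def respond(self):
        """Return stored data in response to a reader's interrogation signal."""
        return {"tag_id": self.tag_id, "data": self.data}

class RFIDReader:
    def interrogate(self, tags_in_range):
        """Broadcast a signal and collect responses from every tag in range."""
        return [tag.respond() for tag in tags_in_range]

# Hypothetical tags carried by one shopper.
tags = [RFIDTag("TAG-001", "passport card"), RFIDTag("TAG-002", "retail smart label")]
print(RFIDReader().interrogate(tags))
# Each read event could be logged with a time and place, enabling location tracking.
```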
RFID Technology (Continued)
 RFID transponders in the form of “smart
labels” make it much easier to track inventory
and protect goods from theft or imitation.
 RFID technology also poses a significant
threat to individual privacy.
 Critics worry about the accumulation of RFID
transaction data by RFID owners and how
that data will be used in the future.
RFID Technology (Continued)
 Garfinkel (2004) notes that roughly 40
million Americans carry some form of
RFID device every day.
 Privacy advocates note that RFID
technology has been included in chips
embedded in humans, which enables
them to be tracked.
RFID Technology (Continued)
 Like Internet cookies (and other online data
gathering and surveillance techniques), RFID
threatens individual privacy.
 Unlike cookies, which track a user’s habits
while visiting Web sites, RFID technology can
track an individual’s location in the off-line
world.
 RFID technology also introduces concerns
involving "locational privacy."
Cybertechnology and
Government Surveillance
 As of 2005, cell phone companies are
required by the FCC to install a GPS (Global
Positioning System) locator chip in all new
cell phones.
 This technology, which assists 911 operators,
enables the location of a cell phone user to
be tracked within 100 meters.
 Privacy advocates worry that this information
can also be used by the government to spy
on individuals.
Computerized Merging
Techniques
 Computer merging is a technique of
extracting information from two or
more unrelated databases and
incorporating it into a composite file.
 Computer merging occurs whenever
two or more disparate pieces of
information contained in separate
databases are combined.
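 The mechanics of merging can be illustrated with a short sketch: records held by separate organizations are joined on a shared identifier to form a composite profile that no single source held on its own. All names and records below are invented.

```python
# A minimal sketch of computer merging: records from unrelated databases
# are combined into a composite profile. All records are hypothetical.

bank_db      = {"jane_doe": {"income": 85000, "credit_history": "good"}}
insurance_db = {"jane_doe": {"age": 42, "medical_history": "asthma"}}
political_db = {"jane_doe": {"issue_views": ["privacy", "net neutrality"]}}

def merge_records(person, *databases):
    """Combine everything the separate databases hold about one person."""
    composite = {}
    for db in databases:
        composite.update(db.get(person, {}))
    return composite

profile = merge_records("jane_doe", bank_db, insurance_db, political_db)
print(profile)
# The composite reveals a picture of Jane that she never authorized
# any single organization to have.
```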
Computer Merging (Continued)
 Imagine a situation in which you voluntarily
provide information about yourself to three
different organizations – i.e., you give
information about your:
1) income and credit history to a lending
institution in order to secure a loan;
2) age and medical history to an insurance
company to purchase life insurance;
3) views on certain social issues to a political
organization you wish to join.
Computer Merging (Continued)
 Each organization has a legitimate need for
information to make decisions about you; for
example:
 insurance companies have a legitimate need
to know about your age and medical history
before agreeing to sell you life insurance;
 lending institutions have a legitimate need to
know information about your income and
credit history before agreeing to lend you
money to purchase a house or a car.
Computer Merging (Continued)
 Suppose that information about you in the
insurance company's database is merged with
information about you in the bank’s database or
in the political organization's database.
 When you gave certain information about
yourself to three different organizations, you
authorized each organization to have specific
information about you.
 However, it does not follow that you thereby
authorized any one organization to have some
combination of that information.
Computer Merging (Continued)
 Review Scenario 5-4 (in the textbook)
involving DoubleClick (an online
advertising company that attempted to
purchase Abacus, Inc., an off-line
database company).
 If it had succeeded in acquiring Abacus,
DoubleClick would have been able to
merge on- and off-line records.
Computer Matching
 Computer matching is a variation of
computer merging.
 Matching is a technique that cross-
checks information in two or more
databases that are typically
unrelated to produce "matching
records" or "hits."
Computer Matching (Continued)
 Computerized matching has been used by
various federal and state government agencies
and departments to identify:
 potential law violators;
 individuals who have actually broken the law
or who are suspected of having broken the
law (welfare cheats, deadbeat parents, etc.).
Computer Matching (Continued)
 Income tax records could be matched against
state motor vehicle registration records
(looking for individuals reporting low incomes
but owning expensive automobiles).
 Consider an analogy in physical space where
your mail is matched (and opened) by
authorities to catch criminals suspected of
communicating with your neighbors.
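 The income-tax/vehicle-registration example above can be expressed as a simple cross-check: join the two unrelated databases on a common identifier and flag the records that satisfy the matching criterion. The records and thresholds below are hypothetical.

```python
# A minimal sketch of computer matching: cross-checking two unrelated
# databases for "hits." Records and thresholds are hypothetical.

tax_records = {
    "SSN-111": {"reported_income": 18000},
    "SSN-222": {"reported_income": 95000},
}
vehicle_records = {
    "SSN-111": {"vehicle_value": 80000},
    "SSN-222": {"vehicle_value": 30000},
}

def find_hits(tax_db, dmv_db, max_income=25000, min_vehicle_value=60000):
    """Flag people reporting low income while registering expensive vehicles."""
    hits = []
    for person in tax_db.keys() & dmv_db.keys():  # shared identifiers
        if (tax_db[person]["reported_income"] <= max_income
                and dmv_db[person]["vehicle_value"] >= min_vehicle_value):
            hits.append(person)
    return hits

print(find_hits(tax_records, vehicle_records))  # -> ['SSN-111']
```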
Computer Matching (Continued)
 Some who defend matching argue:
If you have nothing to hide, you have nothing
to worry about.
 Others use the following kind of argument:
1. Privacy is a legal right.
2. Legal rights are not absolute.
3. When one violates the law (i.e., commits a
crime), one forfeits one's legal rights.
4. Therefore, criminals have forfeited their right
to privacy.
Computer Matching (Continued)
 Review Scenario 5-5 (in the text) involving
Super Bowl XXXV (in 2001), where a facial-
recognition technology was used to scan the
faces of individuals entering the stadium.
 The digitized facial images were instantly
matched against images contained in a
centralized database of suspected criminals
and terrorists.
 This practice was criticized by many civil-liberties
proponents at that time (January 2001).
Data Mining
 Data mining involves the indirect gathering of
personal information via an analysis of
implicit patterns discoverable in data.
 Data-mining activities can generate new and
sometimes non-obvious classifications or
categories.
 Individuals whose data is mined could
become identified with or linked to certain
newly created groups that they might never
have imagined to exist.
Data Mining (Continued)
 Current privacy laws offer individuals little-to-
no protection for how personal information
that is acquired through data-mining activities
is subsequently used.
 Yet, important decisions can be made about
individuals based on the patterns found in the
personal data that has been “mined.”
 Some uses of data-mining technology raise
special concerns for personal privacy.
Data Mining (Continued)
 Why is mining personal data controversial?
 Unlike personal data that resides in explicit
records in databases, information acquired
about persons via data mining is often
derived from implicit patterns in the data.
 The patterns can suggest "new" facts,
relationships, or associations about that
person, such as that person's membership in
a newly "discovered" category or group.
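 A toy sketch of what makes mining different from merely reading records: no single field says "likely entrepreneur," but a pattern across purchases can place an individual into such a newly "discovered" group. The customers, purchases, and the mined rule below are fabricated for illustration.

```python
# A toy sketch of data mining: deriving a non-obvious group membership from
# implicit patterns in transaction data rather than from any explicit field.
# All customers, purchases, and the "mined" rule are fabricated.

customers = {
    "lee":   {"business plan software", "office furniture", "luxury car accessories"},
    "maria": {"novel", "groceries", "garden tools"},
    "omar":  {"business plan software", "trade-show ticket", "office furniture"},
}

# A pattern assumed to have been discovered in historical data: people who buy
# these items together tend to start their own businesses soon afterward.
ENTREPRENEUR_PATTERN = {"business plan software", "office furniture"}

def mined_groups(purchases):
    groups = []
    if ENTREPRENEUR_PATTERN <= purchases:            # pattern is implicit in the data
        groups.append("likely to start a business")  # a newly "discovered" category
    return groups

for name, purchases in customers.items():
    print(name, mined_groups(purchases))
# "lee" and "omar" get linked to a group neither of them ever declared.
```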
Data Mining (Continued)
 Much personal data collected and used
in data-mining applications is generally
considered to be information that is
neither confidential nor intimate.
 So, there is a tendency to presume that
personal information generated by or
acquired via data mining techniques
must by default be public data.
Data Mining (Continued)
 Review Scenario 5-6 (in the text) involving a
(hypothetical) 35-year-old executive named Lee, who:
 applies for an automobile loan for a BMW;
 has an impeccable credit history.
 A data-mining algorithm "discovers" that:
I. Lee belongs to a group of individuals likely to start their
own business;
II. people who start businesses in this field are also likely to
declare bankruptcy within the first three years.
 Lee is denied the loan for the BMW based on the profile
revealed by the data-mining algorithms, despite his credit
score.
Data Mining (Continued)
 Although the preceding scenario (involving Lee) is
merely hypothetical, an actual case (that was similar
to this) occurred in 2008.
 In that incident, a person had two credit cards
revoked and had the limit on a third credit card
reduced because of certain associations that the
company made with respect to where this person:
 shopped,
 lived,
 did his banking (Stuckey 2009).
Data Mining (Continued)
 In that (2008) case, a data-mining algorithm
used by the bank “discovered” that this
person (whose credit cards were revoked):
 purchased goods at a store whose typical
patrons defaulted on their credit card
payments;
 lived in an area that had a high rate of home
foreclosures, even though he made his
mortgage payments on time.
Web Mining: Data Mining on the Web
 Traditionally, most data mining was done in
large “data warehouses” (i.e., off-line).
 Data mining is now also used by commercial
Web sites to analyze data about Internet
users, which can then be sold to third parties.
 This process is sometimes referred to as
“Web mining.”
 Review Scenario 5-7 (in the text) on "Facebook
Beacon" as an example of Web mining.
Table 5-2: Three Techniques
Used to Manipulate Personal Data
 Data Merging: a data-exchanging process in which personal
data from two or more sources is combined to create a
"mosaic" of individuals that would not be discernable from
the individual pieces of data alone.
 Data Matching: a technique in which two or more unrelated
pieces of personal information are cross-referenced and
compared to generate a match, or "hit," that suggests a
person's connection with two or more groups.
 Data Mining: a technique for "unearthing" implicit patterns in
large databases or "data warehouses," revealing statistical
data that associates individuals with non-obvious groups;
user profiles can be constructed from these patterns.
