
Privacy and Information Technology

First published Thu Nov 20, 2014

Human beings value their privacy and the protection of their personal sphere of life. They value some
control over who knows what about them. They certainly do not want their personal information to be
accessible to just anyone at any time. But recent advances in information technology threaten privacy, have reduced individuals' control over personal data, and open up the possibility of a range of negative consequences resulting from access to personal data. The 21st century has become the century of Big Data, and advanced information technology allows for the storage and processing of exabytes of data. The revelations of Edward Snowden have demonstrated that these worries are real and that the
technical capabilities to collect, store and search large quantities of data concerning telephone
conversations, internet searches and electronic payment are now in place and are routinely used by
government agencies. For business firms, personal data about customers and potential customers are
now also a key asset. At the same time, the meaning and value of privacy remain the subject of considerable controversy. The combination of the increasing power of new technology and the declining clarity of, and agreement on, privacy gives rise to problems concerning law, policy and ethics. The focus of
this article is on exploring the relationship between information technology (IT) and privacy. We will both
illustrate the specific threats that IT and innovations in IT pose for privacy, and indicate how IT itself
might be able to overcome these privacy concerns by being developed in a ‘privacy-sensitive way’. We
will also discuss the role of emerging technologies in the debate, and account for the way in which moral
debates are themselves affected by IT.

1. Conceptions of privacy and the value of privacy

Discussions about privacy are intertwined with the use of technology. The publication that began the
debate about privacy in the Western world was occasioned by the introduction of the newspaper
printing press and photography. Samuel D. Warren and Louis Brandeis wrote their article on privacy in
the Harvard Law Review (Warren & Brandeis 1890) partly in protest against the intrusive activities of the
journalists of those days. They argued that there is a “right to be left alone” based on a principle of
“inviolate personality”. Since the publication of that article, the debate about privacy has been fueled by
claims for the right of individuals to determine the extent to which others have access to them (Westin
1967) and claims for the right of society to know about individuals. The privacy debate has co-evolved
with the development of information technology. It is therefore difficult to conceive of the notions of
privacy and discussions about data protection as separate from the way computers, the Internet, mobile
computing and the many applications of these basic technologies have evolved.

1.1 Constitutional vs. informational privacy

Inspired by subsequent developments in U.S. law, a distinction can be made between (1) constitutional
(or decisional) privacy and (2) tort (or informational) privacy (DeCew 1997). The first refers to the
freedom to make one's own decisions without interference by others in regard to matters seen as
intimate and personal, such as the decision to use contraceptives or to have an abortion. The second is
concerned with the interest of individuals in exercising control over access to information about
themselves and is most often referred to as “informational privacy”. Think here, for instance, about
information disclosed on Facebook or other social media. All too easily, such information might be
beyond the control of the individual.

1.2 Accounts of the value of privacy

Debates about privacy almost always revolve around new technology, ranging from genetics
and the extensive study of bio-markers, brain imaging, drones, wearable sensors and sensor networks,
social media, smart phones, closed circuit television, to government cybersecurity programs, direct
marketing, RFID tags, Big Data, head-mounted displays and search engines. There are basically two
reactions to the flood of new technology and its impact on personal information and privacy: the first
reaction, held by many people in the IT industry and in R&D, is that we have zero privacy in the digital age
and that there is no way we can protect it, so we should get used to the new world and get over it. The
other reaction is that our privacy is more important than ever and that we can and we must attempt to
protect it.

1.3 Personal Data

Personal information or data is information or data that is linked or can be linked to individual persons.
Examples include date of birth, sexual preference, whereabouts, and religion, but also the IP address of your
computer or metadata pertaining to these kinds of information. Personal data can be contrasted with
data that is considered sensitive, valuable or important for other reasons, such as secret recipes,
financial data, or military intelligence. Data that is used to secure other information, such as passwords, is not considered here. Although such security measures may contribute to privacy, their protection is
only instrumental to the protection of other information, and the quality of such security measures is
therefore out of the scope of our considerations here.
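Whether data "can be linked" to a person is often a matter of how it is processed. As a purely hypothetical illustration (the function and field names are invented for this sketch), an identifier such as an IP address can be pseudonymized with a salted hash; the result still links records about the same person, so it typically remains personal data rather than anonymous data:

```python
import hashlib

def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a direct identifier (e.g., an IP address) with a salted hash.

    Records about the same person can still be joined on the hash, so
    the result is pseudonymous, not anonymous.
    """
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

record = {"ip": "203.0.113.7", "page": "/search?q=privacy"}
safe_record = {"ip": pseudonymize(record["ip"], salt="per-deployment-secret"),
               "page": record["page"]}
```

The salt prevents trivial reversal by hashing candidate identifiers, but the linkage the hash preserves is exactly what makes the data still privacy-relevant.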

1.4 Moral reasons for protecting personal data

The following types of moral reasons for the protection of personal data and for providing direct or
indirect control over access to those data by others can be distinguished (van den Hoven 2008):

Prevention of harm: Unrestricted access by others to one's passwords, characteristics, and whereabouts
can be used to harm the data subject in a variety of ways.

Informational inequality: Personal data have become commodities. Individuals are usually not in a good
position to negotiate contracts about the use of their data and do not have the means to check whether
partners live up to the terms of the contract. Data protection laws, regulation and governance aim at
establishing fair conditions for drafting contracts about personal data transmission and exchange and
providing data subjects with checks and balances and guarantees of redress.
Informational injustice and discrimination: Personal information provided in one sphere or context (for
example, health care) may change its meaning when used in another sphere or context (such as
commercial transactions) and may lead to discrimination and disadvantages for the individual.

Encroachment on moral autonomy: Lack of privacy may expose individuals to outside forces that
influence their choices.

These formulations all provide good moral reasons for limiting and constraining access to personal data
and providing individuals with control over their data.

1.5 Law, regulation, and indirect control over access

Data protection laws are in force in almost all countries. The basic moral principle underlying these laws
is the requirement of the data subject's informed consent for the processing of personal data. Furthermore, processing of personal information requires that its purpose be specified, its use be limited, individuals be notified and allowed to correct inaccuracies, and the holder of the data be accountable to oversight authorities (OECD 1980). Because it is impossible to guarantee, by traditional means of oversight, that all types of data processing in all these areas and applications comply with these rules and laws, so-called privacy-enhancing
technologies and identity management systems are expected to replace human oversight in many cases.
The challenge with respect to privacy in the twenty-first century is to ensure that technology is designed so that it incorporates privacy requirements into the software, architecture, infrastructure, and work processes, making privacy violations unlikely to occur.
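The consent and purpose-specification principles above can be built directly into software, which is one way technology can incorporate privacy requirements. The sketch below is an invented, minimal illustration of the idea, not an actual privacy-enhancing technology or legal compliance tool; all names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """The purposes a data subject has given informed consent for."""
    subject_id: str
    purposes: set = field(default_factory=set)

def may_process(consent: ConsentRecord, subject_id: str, purpose: str) -> bool:
    """Purpose limitation: allow processing only for a consented purpose."""
    return consent.subject_id == subject_id and purpose in consent.purposes

consent = ConsentRecord("alice", {"billing", "service-improvement"})
```

In such a design the check runs before every processing step, replacing after-the-fact human oversight with an automated gate.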

Concepts of Information Security

This chapter discusses security policies in the context of requirements for information security and the
circumstances in which those requirements must be met, examines common principles of management
control, and reviews typical system vulnerabilities, in order to motivate consideration of the specific sorts
of security mechanisms that can be built into computer systems—to complement nontechnical
management controls and thus implement policy—and to stress the significance of establishing Generally Accepted System Security Principles (GSSP).
Additional information on privacy issues and detailing the results of an informal survey of commercial
security officers is provided in the two chapter appendixes.

Organizations and people that use computers can describe their needs for information security and trust
in systems in terms of three major requirements:

Confidentiality: controlling who gets to read information;

Integrity: assuring that information and programs are changed only in a specified and authorized
manner; and

Availability: assuring that authorized users have continued access to information and resources.

These three requirements may be emphasized differently in various applications. For a national defense
system, the chief concern may be ensuring the confidentiality of classified information, whereas a funds
transfer system may require strong integrity controls. The requirements for applications that are
connected to external systems will differ from those for applications without such interconnection. Thus
the specific requirements and controls for information security can vary.
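One way to make these differing emphases explicit is to record a weighting of the three requirements per application. The applications and numeric weights below are illustrative assumptions, chosen to match the examples in the text:

```python
# Illustrative weights (1 = lesser concern, 3 = paramount); the specific
# numbers are assumptions for this sketch.
profiles = {
    "national-defense": {"confidentiality": 3, "integrity": 2, "availability": 2},
    "funds-transfer":   {"confidentiality": 2, "integrity": 3, "availability": 2},
    "telephone-switch": {"confidentiality": 2, "integrity": 1, "availability": 3},
}

def chief_concern(app: str) -> str:
    """Return the requirement weighted highest for a given application."""
    weights = profiles[app]
    return max(weights, key=weights.get)
```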

A security policy is a concise statement, by those responsible for a system (e.g., senior management), of
information values, protection responsibilities, and organizational commitment. One can implement that
policy by taking specific actions guided by management control principles and utilizing specific security
standards, procedures, and mechanisms. Conversely, the selection of standards, procedures, and
mechanisms should be guided by policy to be most effective.

Management controls are the mechanisms and techniques—administrative, procedural, and technical—
that are instituted to implement a security policy. Some management controls are explicitly concerned
with protecting information and information systems, but the concept of management controls includes
much more than a computer's specific role in enforcing security. Note that management controls not
only are used by managers, but also may be exercised by users. An effective program of management
controls is needed to cover all aspects of information security, including physical security, classification of
information, the means of recovering from breaches of security, and above all training to instill
awareness and acceptance by people. There are trade-offs among controls. For example, if technical
controls are not available, then procedural controls might be used until a technical solution is found.

A major conclusion of this report is that the lack of a clear articulation of security policy for general
computing is a major impediment to improved security in computer systems. Although the Department
of Defense (DOD) has articulated its requirements for controls to ensure confidentiality, there is no
articulation for systems based on other requirements and management controls (discussed below)—
individual accountability, separation of duty, auditability, and recovery. This committee's goal of
developing a set of Generally Accepted System Security Principles, GSSP, is intended to address this
deficiency and is a central recommendation of this report.
In computing there is no generally accepted body of prudent practice analogous to the Generally
Accepted Accounting Principles promulgated by the Financial Accounting Standards Board (see Appendix
D). Managers who have never seen adequate controls for computer systems may not appreciate the
capabilities currently available to them, or the risks they are taking by operating without these controls.
Faced with demands for more output, they have had no incentive to spend money on controls.
Reasoning like the following is common: "Can't do it and still stay competitive"; "We've never had any
trouble, so why worry"; "The vendor didn't put it in the product; there's nothing we can do."

On the basis of reported losses, such attitudes are not unjustified (Neumann, 1989). However, computers
are active entities, and programs can be changed in a twinkling, so that past happiness is no predictor of
future bliss. A single Internet worm incident is enough to signal a larger problem. Experience since the Internet worm, involving copy-cat and derivative attacks, shows how a possibility once demonstrated can become an actuality that is frequently exploited.

SECURITY POLICIES - RESPONDING TO REQUIREMENTS FOR CONFIDENTIALITY, INTEGRITY, AND AVAILABILITY

The weight given to each of the three major requirements describing needs for information security—
confidentiality, integrity, and availability—depends strongly on circumstances. For example, the adverse
effects of a system not being available must be related in part to requirements for recovery time. A
system that must be restored within an hour after disruption represents, and requires, a more
demanding set of policies and controls than does a similar system that need not be restored for two to
three days. Likewise, the risk of loss of confidentiality with respect to a major product announcement
will change with time. Early disclosure may jeopardize competitive advantage, but disclosure just before
the intended announcement may be insignificant. In this case the information remains the same, while
the timing of its release significantly affects the risk of loss.

Confidentiality

Confidentiality is a requirement whose purpose is to keep sensitive information from being disclosed to unauthorized recipients. The secrets might be important for reasons of national security (nuclear weapons data), law enforcement (the identities of undercover drug agents), competitive advantage (manufacturing costs or bidding plans), or personal privacy (credit histories) (see Chapter Appendix 2.1).

Suggested Citation: "Concepts of Information Security." National Research Council. 1991. Computers at Risk: Safe Computing in the Information Age. Washington, DC: The National Academies Press. doi: 10.17226/1581.

The most fully developed policies for confidentiality reflect the concerns of the U.S. national security
community, because this community has been willing to pay to get policies defined and implemented
(and because the value of the information it seeks to protect is deemed very high). Since the scope of
threat is very broad in this context, the policy requires systems to be robust in the face of a wide variety
of attacks. The specific DOD policies for ensuring confidentiality do not explicitly itemize the range of
expected threats for which a policy must hold. Instead, they reflect an operational approach, expressing
the policy by stating the particular management controls that must be used to achieve the requirement
for confidentiality. Thus they avoid listing threats, which would represent a severe risk in itself, and avoid
the risk of poor security design implicit in taking a fresh approach to each new problem.

The operational controls that the military has developed in support of this requirement involve
automated mechanisms for handling information that is critical to national security. Such mechanisms
call for information to be classified at different levels of sensitivity and in isolated compartments, to be
labeled with this classification, and to be handled by people cleared for access to particular levels and/or
compartments. Within each level and compartment, a person with an appropriate clearance must also
have a "need to know" in order to gain access. These procedures are mandatory: elaborate procedures
must also be followed to declassify information.
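The scheme just described (sensitivity levels, compartments, and need-to-know) can be sketched as a mandatory access-control check. This is a simplified illustration of the idea, not the actual DOD mechanism; the level names and function signature are assumptions:

```python
# Ordered sensitivity levels; a clearance dominates a classification when
# its level is at least as high and it covers all of the compartments.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def may_read(clearance_level: str, clearance_compartments: set,
             need_to_know: set,
             object_level: str, object_compartments: set,
             object_topic: str) -> bool:
    """Mandatory read check: clearance must dominate the object's
    classification, and the reader must have a need to know its topic."""
    return (LEVELS[clearance_level] >= LEVELS[object_level]
            and object_compartments <= clearance_compartments
            and object_topic in need_to_know)
```

Because the check is mandatory, neither the object's creator nor its readers can waive it, which is the key difference from the owner-based mechanisms discussed below.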

Classification policies exist in other settings, reflecting a general recognition that to protect assets it is
helpful to identify and categorize them. Some commercial firms, for instance, classify information as
restricted, company confidential, and unclassified (Schmitt, 1990). Even if an organization has no secrets
of its own, it may be obliged by law or common courtesy to preserve the privacy of information about
individuals. Medical records, for example, may require more careful protection than does most
proprietary information. A hospital must thus select a suitable confidentiality policy to uphold its
fiduciary responsibility with respect to patient records.

In the commercial world confidentiality is customarily guarded by security mechanisms that are less
stringent than those of the national security community. For example, information is assigned to an
"owner" (or guardian), who controls access to it. Such security mechanisms are capable of dealing with
many situations but are not as resistant to certain attacks as are mechanisms based on classification and
mandatory labeling, in part because there is no way to tell where copies of information may flow. With Trojan
horse attacks, for example, even legitimate and honest users of an owner mechanism can be tricked into
disclosing secret data. The commercial world has borne these vulnerabilities in exchange for the greater
operational flexibility and system performance currently associated with relatively weak security.
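The owner-based mechanism can be sketched as an access-control list that only the owner may change. The names are invented for illustration; note that nothing in the scheme restricts what happens to data after an authorized read, which is exactly why Trojan horse attacks succeed against it:

```python
class OwnedObject:
    """An object whose owner discretionarily grants and revokes read access."""

    def __init__(self, owner: str):
        self.owner = owner
        self.acl = {owner}          # users currently allowed to read

    def grant(self, requester: str, user: str) -> None:
        """Only the owner may extend the access-control list."""
        if requester != self.owner:
            raise PermissionError("only the owner may change the ACL")
        self.acl.add(user)

    def may_read(self, user: str) -> bool:
        return user in self.acl

doc = OwnedObject(owner="alice")
doc.grant("alice", "bob")   # alice shares the document with bob
```

Once bob can read, a Trojan horse running with bob's identity can copy the data anywhere; mandatory labeling closes exactly that gap, at the cost of flexibility.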

Compared with the availability of the host system, the availability of individual teller machines is of less concern.

A telephone switching system, on the other hand, does not have high requirements for integrity on
individual transactions, as lasting damage will not be incurred by occasionally losing a call or billing
record. The integrity of control programs and configuration records, however, is critical. Without these,
the switching function would be defeated and the most important attribute of all—availability—would
be compromised. A telephone switching system must also preserve the confidentiality of individual calls,
preventing one caller from overhearing another.

Security needs are determined more by what a system is used for than by what it is. A typesetting
system, for example, will have to assure confidentiality if it is being used to publish corporate proprietary
material, integrity if it is being used to publish laws, and availability if it is being used to publish a daily
paper. A general-purpose time-sharing system might be expected to provide confidentiality if it serves
diverse clientele, integrity if it is used as a development environment for software or engineering
designs, and availability to the extent that no one user can monopolize the service and that lost files will
be retrievable.

MANAGEMENT CONTROLS-CHOOSING THE MEANS TO SECURE INFORMATION AND OPERATIONS


The setting of security policy is a basic responsibility of management within an organization.
Management has a duty to preserve and protect assets and to maintain the quality of service. To this
end it must assure that operations are carried out prudently in the face of realistic risks arising from
credible threats. This duty may be fulfilled by defining high-level security policies and then translating
these policies into specific standards and procedures for selecting and nurturing personnel, for checking
and auditing operations, for establishing contingency plans, and so on. Through these actions,
management may prevent, detect, and recover from loss. Recovery depends on various forms of
insurance: backup records, redundant systems and service sites, self-insurance by cash reserves, and
purchased insurance to offset the cost of recovery.

Preventing Breaches of Security—Basic Principles

Management controls are intended to guide operations in proper directions, prevent or detect mischief and harmful mistakes, and give early warning of vulnerabilities. Organizations in almost every line of endeavor have established controls based on the following key principles:

Individual accountability,

Auditing, and

Separation of duty.
These principles, recognized in some form for centuries, are the basis of precomputer operating
procedures that are very well understood.

Individual accountability answers the question: Who is responsible for this statement or action? Its
purpose is to keep track of what has happened, of who has had access to information and resources and
what actions have been taken. In any real system there are many reasons why actual operation may not
always reflect the original intentions of the owners: people make mistakes, the system has errors, the
system is vulnerable to certain attacks, the broad policy was not translated correctly into detailed
specifications, the owners changed their minds, and so on. When things go wrong, it is necessary to
know what has happened, and who is the cause. This information is the basis for assessing damage,
recovering lost information, evaluating vulnerabilities, and initiating compensating actions, such as legal
prosecution, outside the computer system.
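Answering "who is responsible for this statement or action?" presupposes a record of actions. A minimal append-only audit trail might look like the following sketch (the field names are assumptions for illustration):

```python
import time

audit_log: list[dict] = []   # in practice: append-only, protected storage

def audit(user: str, action: str, resource: str) -> None:
    """Record who did what to which resource, and when."""
    audit_log.append({"ts": time.time(), "user": user,
                      "action": action, "resource": resource})

def actions_by(user: str) -> list[dict]:
    """The accountability question: what has this user done?"""
    return [entry for entry in audit_log if entry["user"] == user]
```

Such a log is only as trustworthy as the identification behind the `user` field, which is why authentication is discussed next as the underpinning of accountability.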

To support the principle of individual accountability, the service called user authentication is required.
Without reliable identification, there can be no accountability. Thus authentication is a crucial
underpinning of information security. Many systems have been penetrated when weak or poorly
administered authentication services have been compromised, for example, by guessing poorly chosen
passwords.
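To illustrate why weak password handling is so damaging, here is a minimal sketch of the safer conventional approach: store only a random salt and a deliberately slow salted hash, never the password itself, and compare digests in constant time. This illustrates the idea, not a production scheme:

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and the salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the slow hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The salt defeats precomputed guessing across accounts, and the iteration count makes each guess expensive; neither helps if users choose passwords that are trivially guessed, which is the administrative failure the text describes.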

The basic service provided by authentication is information that a statement or action was made by a
particular user. Sometimes, however, there is a need to ensure that the user will not later be able to
claim that a statement attributed to him was forged and that he never made it. In the world of paper
documents, this is the purpose of notarizing a signature; the notary provides independent and highly
credible evidence, which will be convincing even after many years, that a signature is genuine and not
forged. This more stringent form of authentication, called nonrepudiation, is offered by few computer
systems today, although a legal need for it can be foreseen as computer-mediated transactions become
more common in business.

Auditing services support accountability and therefore are valuable to management and to internal or
external auditors. Given the reality that every computer system can be compromised from within, and that many systems can also be compromised if surreptitious access can be gained, accountability is a
vital last resort. Auditing services make and keep the records necessary to support accountability.
Usually they are closely tied to authentication and authorization (a service for determining whether a
user or system is trusted for a given purpose—see discussion below), so that every authentication is
recorded, as is every attempted access, whether authorized or not. Given the critical role of auditing,
auditing devices are sometimes the first target of an attacker and should be protected
accordingly.

Responding to Breaches of Security

Recovery controls provide the means to respond to, rather than prevent, a security breach. The use of a
recovery mechanism does not necessarily indicate a system shortcoming; for some threats, detection
and recovery may well be more cost-effective than attempts at total prevention. Recovery from a
security breach may involve taking disciplinary or legal action, notifying incidentally compromised
parties, or changing policies, for example. From a technical standpoint, a security breach has much in
common with a failure that results from faulty equipment, software, or operations. Usually some work
will have to be discarded, and some or all of the system will have to be rolled back to a clean state.

Security breaches usually entail more recovery effort than do acts of God. Unlike proverbial lightning,
breaches of security can be counted on to strike twice unless the route of compromise has been shut off.
Causes must be located. Were passwords compromised? Are backups clean? Did some user activity
compromise the system by mistake? And major extra work—changing all passwords, rebuilding the
system from original copies, shutting down certain communication links or introducing authentication
procedures on them, or undertaking more user education—may have to be done to prevent a
recurrence.

DEVELOPING POLICIES AND APPROPRIATE CONTROLS

Ideally a comprehensive spectrum of security measures would ensure that the confidentiality, integrity,
and availability of computer-based systems were appropriately maintained. In practice it is not possible
to make ironclad guarantees. The only recipe for perfect security is perfect isolation: nothing in, nothing
out. This is impractical, and so security policies will always reflect trade-offs between cost and risk. The
assets to be protected should be categorized by value, the vulnerabilities by importance, and the risks by
severity, and defensive measures should be installed accordingly. Residual vulnerabilities should be
recognized.

Planning a security program is somewhat like buying insurance. An organization considers the following:

The value of the assets being protected.

The vulnerabilities of the system: possible types of compromise, of users as well as systems. What damage can the person in front of the automated teller machine do? What about the person behind it?

Threats: do adversaries exist to exploit these vulnerabilities? Do they have a motive, that is, something
to gain? How likely is attack in each case?

Risks: the costs of failures and recovery. What is the worst credible kind of failure? Possibilities are death,
injury, compromise to national security, industrial espionage, loss of personal privacy, financial fraud,
election fraud.

The organization's degree of risk aversion.

Thence follows a rough idea of expected losses. On the other side of the ledger are these:

Available countermeasures (controls and security services),

Their effectiveness, and

Their direct costs and the opportunity costs of installing them.

The security plans then become a business decision, possibly tempered by legal requirements and consideration of externalities (see "Risks and Vulnerabilities" below).
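The two sides of the ledger can be sketched numerically: expected loss per threat is its likelihood times its cost, and a countermeasure is worth its price when the loss it averts exceeds its direct cost. All of the figures below are invented for illustration:

```python
def expected_loss(threats: list[tuple[float, float]]) -> float:
    """Sum over threats of (annual likelihood x cost of failure and recovery)."""
    return sum(likelihood * cost for likelihood, cost in threats)

def worth_installing(loss_before: float, loss_after: float,
                     control_cost: float) -> bool:
    """A control pays for itself when the loss it averts exceeds its cost."""
    return (loss_before - loss_after) > control_cost

threats = [(0.25, 200_000.0),   # e.g., major fraud: 25% per year, $200k
           (0.50, 20_000.0)]    # e.g., minor breach: 50% per year, $20k
```

Real planning would also weigh opportunity costs and the organization's risk aversion, which this arithmetic deliberately omits.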

Ideally, controls are chosen as the result of careful analysis. In practice, the most important
consideration is what controls are available. Most purchasers of computer systems cannot afford to have
a system designed from scratch to meet their needs, a circumstance that seems particularly true in the
case of security needs. The customer is thus reduced to selecting from among the various preexisting
solutions, with the hope that one will match the identified needs.

Some organizations formalize the procedure for managing computer-associated risk by using a control
matrix that identifies appropriate control measures for given vulnerabilities over a range of risks. Using
such a matrix as a guide, administrators may better select appropriate controls for various resources. A
rough cut at addressing the problem is often taken: How much business depends on the system? What is
the worst credible kind of failure, and how much would it cost to recover? Do available mechanisms
address possible causes? Are they cost-effective?
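Such a control matrix can be sketched as a lookup from (vulnerability, risk level) to recommended controls. The entries below are invented examples, not a standard catalogue:

```python
# Illustrative control matrix: (vulnerability, risk level) -> controls.
control_matrix = {
    ("weak-passwords", "high"): ["two-factor authentication", "password policy"],
    ("weak-passwords", "low"):  ["password policy"],
    ("no-backups",     "high"): ["offsite backups", "recovery drills"],
}

def recommended_controls(vulnerability: str, risk: str) -> list[str]:
    """Look up the controls for a vulnerability at a given risk level."""
    return control_matrix.get((vulnerability, risk), ["manual review required"])
```

The fallback entry reflects the text's point: when no preexisting solution matches the identified need, the administrator is thrown back on ad hoc judgment.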

The computer industry can be expected to respond to clearly articulated security needs provided that
such needs apply to a broad enough base of customers. This has happened with the Orange Book vis-à-vis the defense community—but slowly, because vendors were not convinced the customer base was large
enough to warrant accelerated investments in trust technology.

However, for many of the management controls discussed above, …