
PRIVACY IN TECHNOLOGY

ONLINE TRAINING TRANSCRIPT


MODULE 1: FOUNDATIONAL PRINCIPLES OF PRIVACY IN TECHNOLOGY

Introduction

As technology continues to drive organizations to provide automation and innovation for their consumers,
employees and other stakeholders, development of any system, product, service, or process should be
done with the goal of protecting the privacy of individuals. The shift from paper-based to electronic
practices has created greater complexity in how information is collected, used, stored and destroyed. This
emerging ecosystem of technological advances and increased use of information within organizations has
raised concerns among individuals who wish to protect their personal information, while legal mandates
and regulations regarding consumer privacy continue to be driving forces for organizations.

Privacy technologists are tasked with the development of the ecosystems that provide organizations with
the tools and functionality that will best meet their goals and obligations. Privacy technologists may
include technology professionals, such as audit, risk and compliance managers; data professionals, such
as data architects and data scientists; and system designers and developers, including software engineers
and privacy engineers. For continuity purposes, the term privacy technologist will be used throughout this
training to reference the many professionals that play a role in protecting privacy in or with technology.

Keeping privacy at the forefront of all technology ecosystem designs, rather than a mere afterthought, will
assist in mitigating risks and maintaining compliance without compromising the functionality or capabilities
of systems. In this module, we will examine the data life cycle, discuss the foundational principles of
privacy by design, explore privacy risk models and frameworks, discuss the concepts behind value-
sensitive design and demonstrate ways in which providing end-to-end privacy protection will build trust
and confidence with consumers and maintain organizational integrity.

Navigation instructions

To begin learning, select the first chapter within this module. In addition, you may access the transcript to
follow along with a text version of the module by selecting the “Transcript and Resources” button. You
may use the navigation bar in the left-hand column of your player to revisit specific topics. Also, you may
select the magnifying glass at the top of the menu tab to access the search function and locate text used
in the module. By selecting the “Transcript and Resources” button, you may also access a list of
additional reading to supplement the information in this course, along with a map showing the alignment
between this training and the Privacy in Technology certification body of knowledge. The following online
modules contain a mix of both auditory and textual learning.

The data life cycle

©2022, International Association of Privacy Professionals, Inc. (IAPP)


Learning objectives

• Summarize the data life cycle


• Distinguish between a privacy notice and a privacy policy

The data life cycle (1)

The data life cycle refers to how data flows through an organization, including business processes and
technology systems. The components of the data life cycle—collection, use, disclosure, retention and
destruction—are intended to be generic and adaptable to different situations. From a technology
perspective, this model provides a framework for technology professionals to look at the privacy of
personal data end-to-end throughout the life cycle.

This diagram illustrates the basic parts of a data life cycle. Each part will be explored in further depth
throughout this training. Select “Next” to read more about the data life cycle.

The data life cycle (2)

The data life cycle is shaped by the privacy objectives and business practices of an organization. The
organization must specify the purpose for which information will be collected and used and maintain
consistency with how it is managed between actual practices and stated practices throughout the data life
cycle.

The challenge for privacy technologists is in helping their organization develop a data ecosystem that has
the capability to evolve with an organization’s shifting purposes and business needs and which is designed
to maximize how information is utilized while minimizing privacy risk.

Here, the basic parts of the data life cycle are shown. Look for the life cycle symbol on corresponding
slides for further information on each part. You can also select the symbol for a life cycle review. Select
“Next” to begin exploring.

Collection (1)

Data collection occurs at various points during the data life cycle process and in various ways. Select each
category to explore the types of data collection:

First party: First-party collection occurs when an individual provides their personal information directly to
the data collector.

Surveillance: In a surveillance method of data collection, an individual’s data stream behavior is observed
through their activities, including online searches or websites they engage with, while the individual’s
activity is not interrupted.

Repurposing: Previously collected data may be used for a purpose other than that for which it was
initially collected, such as a mailing address collected for shipping purposes later being used to send
marketing materials. Repurposing is also sometimes referred to as secondary use.

Third party: Third-party collection happens when previously collected information is transferred to a third
party to enable a new data collection. For example, a third party may be a marketing service that
aggregates and provides information about website users that helps online behavioral advertising tools
provide relevant ads to individuals.

Collection (2)

Methods of collection are either active or passive. Active collection is when the data subject is aware that
collection is taking place and takes an action to enable the collection, such as filling out and submitting an
online form. Passive collection occurs without requiring any action from the participant and is not always
obvious, such as background collection of a user’s web browser version and IP address.

Consent



Regardless of how data is collected, obtaining prior consent may be an integral part of the ecosystem
(depending on the purpose for which the personal data is collected). Consent may be explicit or implied.
Explicit consent requires the user to take an action, such as selecting an option to allow the collection of
information that the application provider wants to use to improve services and functionality. Implied
consent does not require the user to take an action. In some jurisdictions, such as the EU, implied consent
is not considered valid, and the user (or individual) must take a positive action for consent to be legally valid.

An example of implied consent is presenting the user with terms of service that state the individual’s use of
the service means they agree with those terms. Various consent mechanisms can be integrated into
technology ecosystems to provide individuals with transparency around privacy notices and data collection
activity. Organizations and privacy technologists should be aware of the rules and regulations regarding
consent for their industry.

Consent examples

Review the examples of consent and indicate if each is explicit consent or implied consent.

Clicking a button that acknowledges a privacy notice has been received

A privacy notice link appears at the foot of a web page

Users must choose to opt in or out of collection of information before using a website

A website’s privacy notice discloses that any information provided will be shared with a third party for
marketing purposes

Use and disclosure (1)

An organization that collects personal information should have a privacy notice in place. A privacy notice is
a statement made to data subjects that describes how an organization collects, uses, retains and discloses
personal information. Notices should indicate what information will be collected. A privacy notice may also
be referred to as a privacy statement, a fair processing statement, or, sometimes, a privacy policy,
although the term privacy policy is more commonly used to refer to the internal statement that governs
an organization or entity’s handling of personal information. Select “Next” to read more about use and
disclosure.

Use and disclosure (2)

Collected information may be used in different forms throughout the data life cycle. For example,
processing of data for security or fraud prevention purposes is one way to use data. Access of data by an
individual who simply reads the information is also considered data use.

Privacy technologists need to ensure that data is being used and disclosed only for the purposes for which
it was collected.

Repurposing collected data or disclosing it in ways other than originally stated in the privacy notice can
create privacy harms and may even be illegal. The risk to privacy should be assessed before any
information is repurposed or disclosed in a new context, and privacy technologists should remember that
it may also be necessary to update notices and request additional consent from individuals.

Retention

Data should be retained only as long as it is reasonably necessary and in compliance with legal and
regulatory requirements as well as applicable standards. What is reasonably necessary will depend on the
specific purpose or purposes for which the data is collected, together with legal, regulatory and industry
standards. Consideration of these elements will determine whether the data needs to be retained in the
short term or for a longer period of time. This in turn can assist the privacy technologist in determining
the type of storage media (for example, an electronic archive for personal data that needs to be retained
but does not need to be readily accessible) and treatments that may need to be applied to the personal



data during its life cycle (for example, anonymization of data fields or deletion of data types). If new uses
for collected information arise and thus require longer retention periods, some jurisdictions require data
subjects to be notified, issued a new privacy notice, or in some cases, given an opportunity to update their
consent. Regardless of whether or not this is mandated by law, it is good practice to ensure that
individuals are aware of any changes to original policy notices or privacy expectations.

Offline storage

Data stored online can take up valuable network resources, so offline storage may make sense. Storing
data off premises can guard against organizational data loss should a building be destroyed or a
persistent power outage occur. Although there are advantages to storing data offline or off premises, these
choices are not without risks, especially when sensitive data is involved. Risks and benefits should be
weighed when deciding whether, and when, to move data off-network or off-site. Once the decision has
been made to move data to offline data storage, the privacy risks associated with it may change and
should be assessed to determine whether and how protections should change. For example, sensitive
personal information may require encryption during transfer to offline storage and at rest.

Business continuity planning

Retention policies should cover either minimum or maximum retention periods, depending on the type and
intended use of data. These policies should be continually reviewed and evaluated based on assessments of business
risk. When organizations process personal information as part of their operations, they have quality
requirements associated with the data so that it is sufficiently timely, relevant, accurate and complete for
their purposes. These requirements should also be considered when developing business continuity plans.
Organizations should identify business-sensitive data that must be retained to support a disaster recovery
scenario and work with privacy technologists on appropriate storage options. The International
Organization for Standardization created standard ISO 22313:2020 to provide guidelines on business
continuity planning. Select the link for a supplemental reference on this standard.

Destruction (1)

Privacy technologists should work with their organization to determine when and how personal data will be
destroyed, as there are risks with retaining unnecessary data or keeping data longer than permitted, as
well as risks in deleting information prematurely.

Also, the sensitivity of information informs the strength of the destruction method that should be used.
Risks will be covered in greater detail later in this training.

Destruction (2)

A destruction plan should be incorporated into an organization’s records management plan to ensure the proper
removal of data. Simply stating that the data should be destroyed is not always sufficient. There should be
clear guidelines on how to destroy the data based on its type and the medium within which it is held. To
aid in the destruction of expired files, apply a retention period attribute to the properties of a file. Once
the custom attribute has been added, it is easier to retrieve the file to determine when it needs to be
destroyed. It is also possible to automate enforcement of retention schedules, such as by periodically
running a program that reads the “Retention Period” value from the file and deletes the file once the
retention period has passed.
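The automated retention enforcement described above can be sketched as follows. This is a minimal illustration under stated assumptions, not IAPP-endorsed tooling: here the “Retention Period” attribute lives in a hypothetical JSON sidecar file (`<name>.meta.json`) holding a `created` timestamp and a `retention_days` value.

```python
import json
import os
import time

def is_expired(meta_path, now=None):
    """Read a JSON sidecar holding 'created' (epoch seconds) and
    'retention_days' attributes; report whether the retention period
    for the associated data file has passed."""
    now = time.time() if now is None else now
    with open(meta_path) as f:
        meta = json.load(f)
    age_days = (now - meta["created"]) / 86400
    return age_days > meta["retention_days"]

def enforce_retention(directory, now=None):
    """Delete every data file whose '<name>.meta.json' sidecar says its
    retention period has passed, mirroring a periodically run cleanup job."""
    deleted = []
    for name in os.listdir(directory):
        if not name.endswith(".meta.json"):
            continue
        data_file = os.path.join(directory, name[: -len(".meta.json")])
        meta_file = os.path.join(directory, name)
        if os.path.exists(data_file) and is_expired(meta_file, now):
            os.remove(data_file)
            deleted.append(data_file)
    return deleted
```

In practice the retention attribute might instead be stored in file-system metadata or a records-management database; the sidecar file simply keeps the sketch self-contained.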

The U.S. NIST Special Publication 800-88, Rev. 1, provides media-appropriate techniques for sanitizing
storage devices and destroying data, including overwriting with pseudorandom data, degaussing
electromagnetic devices, or incinerating physical media. The level of destruction may be contingent on the
sensitivity of the information. Select “Next” to continue learning about destruction.
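The overwrite technique mentioned above can be sketched in a few lines. This is an illustrative simplification of the “clear” approach in NIST SP 800-88, not a compliant implementation: on SSDs and journaling file systems the original blocks may survive an in-place overwrite, so real sanitization requires media-appropriate tooling.

```python
import os

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's contents in place with pseudorandom bytes
    before unlinking it. Caveat (see lead-in): wear-leveling and
    journaling can leave residual copies on physical media."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # pseudorandom data per pass
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to disk
    os.remove(path)
```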

Destruction (3)

Select each type of media for more information about potential issues that impact data destruction.



Digital content: Disks should be appropriately formatted before use to ensure that all data placed
on them can eventually be deleted. Hard drives, tapes and other magnetic media will need to be
degaussed.

Portable media: Portable media, such as CDs, DVDs and flash drives, have unique challenges
precisely because they are portable and therefore harder to regulate, monitor and track. It may be
more difficult to enforce deletion policies, and employees need to be trained on their appropriate
use, including receiving regular reminders about established use and deletion policies. ROMs, CDs,
DVDs and other “WORM” (write once, read many) media will need to be physically, and possibly
professionally, destroyed.

Hard copy: The primary challenge with “hard copy” documents, such as paper records, lies in
determining what documents need to be destroyed and when. Established policies and guidelines
should be put in place that also include who will be responsible for the documents’ destruction, how
the documents will be destroyed and what mechanisms are in place to ensure the destruction has
actually taken place.

Summary

• The data life cycle refers to how data flows through an organization, and its components—
collection, use, disclosure, retention and destruction—are intended to be generic and adaptable to
different situations.
• The data life cycle is shaped by the privacy objectives and business practices of an organization.
• Data collection occurs during different times of the life cycle process and includes four types: first
party, surveillance, repurposing and third party.
• These methods of collection are either active (when a data subject is aware of data collection
occurring) or passive (when no action from the data subject is required for collection to occur).
• If a business or organization collects personal information, they should have a privacy notice in
place. A privacy notice details to data subjects how a business or organization collects, uses,
retains and discloses personal information. Privacy technologists need to ensure data is being used
and disclosed only for the purposes for which it was collected as well as retained only as long as it
is reasonably necessary in connection with the purpose for which that data was collected.
• Privacy technologists also need to work with their organizations to determine when and how
collected data is destroyed via a destruction plan to ensure the proper removal of data.

Review

1. Surveillance happens at what point in the data life cycle?

Collection
Use
Retention
Destruction

2. Privacy technologists should ensure that collected data is which of the following? Select all that apply.

Retained indefinitely
Used only for the purposes for which it was collected
Destroyed in accordance with organizational guidelines
Repurposed and reused in as many ways as possible

3. What term is used when previously collected data is used for a purpose other than that for which it
was initially collected?

Repurposing
Recycling



Retention
Reuse

Foundational principles of privacy by design

Learning objective

• Recognize the foundational principles of privacy by design

Foundational principles of privacy by design

Dr. Ann Cavoukian served as information and privacy commissioner of Ontario, Canada for 17 years.
During that time, she conceptualized the framework known as Privacy by Design (PbD). PbD embodies
seven principles based on proactively incorporating privacy into all levels of operations organically, rather
than viewing it as a tradeoff or something to add to a system, product, service or process after it has been
built. This generalized definition was intentional on the part of Dr. Cavoukian as she wanted developers to
have flexibility while promoting the integration of privacy in system design. The seven privacy-by-design
principles are intended to provide privacy technologists with more concrete guidance in meeting privacy
principles.

Principle 1: Proactive, not Reactive; Preventative, not Remedial

Privacy protection must be a forethought in any technology system, product, process or service
development. Making privacy a consideration in the design phase—instead of reacting to privacy harms as
they arise in the future—helps to mitigate potential privacy risks and violations.

Thinking about privacy when designing a system, product, service or process helps practitioners design
these things with privacy considerations built in instead of trying to figure out how to address them in a
design that may be less flexible when privacy is considered later.

Principle 2: Privacy as the Default Setting

When personal information is used beyond or outside of the scope of what an individual expects, their
privacy is in danger of being violated. Individuals should not be solely responsible for protecting their
privacy; the default of a technology ecosystem should be that of preserving individuals’ privacy. Said
another way, privacy is achieved automatically without the individual having to take explicit action. For
example, many systems incorporate an opt-in feature for users to consent to future contact by an
organization before the user provides any personal information. This is considered a privacy-friendly
alternative to the opt-out selection, which indicates an assumption to intrude unless the user takes action,
like unchecking a box.
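The opt-in default described above can be made concrete with a small sketch. The preference record and its field names are hypothetical, invented for illustration; the point is that every data-sharing flag starts disabled, so privacy is preserved unless the user takes a positive action.

```python
from dataclasses import dataclass, fields

@dataclass
class UserPreferences:
    """Hypothetical preference record illustrating privacy as the
    default setting: all sharing options default to off (opt-in)."""
    marketing_emails: bool = False
    share_with_partners: bool = False
    usage_analytics: bool = False

    def opt_in(self, option: str) -> None:
        # The only path to True is an explicit user action.
        setattr(self, option, True)

    def privacy_preserved(self) -> bool:
        """True while no sharing option has been enabled."""
        return not any(getattr(self, f.name) for f in fields(self))
```

An opt-out design would invert these defaults to `True`, which is exactly the assumption-to-intrude pattern the principle warns against.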

There are also expectations of privacy within specific contexts as in healthcare or finance. For example,
when one visits a healthcare professional, there is an understanding that the information shared will be
used only in the healthcare context (for treatment or payment processing, for example). With this in mind,
Helen Nissenbaum, a professor of information science at Cornell Tech, developed the concept of
“contextual integrity”; this refers to the preservation of situational expectations where there is an
understanding between participants based on societal norms or past interactions. This concept will be
explored further later in this module.

Principle 3: Privacy Embedded into Design

Privacy should be embedded into the design and architecture of technology systems and business
practices such that a system cannot operate without privacy-preserving functionality. This principle
suggests that privacy is not only included in the design of a program but is integral to the design. Privacy



technologists may employ mechanisms such as designing online forms to collect data in a structured
format to prevent the collection of irrelevant personal information, using system logging capabilities to
record access and changes to personal information, or encryption for instant messenger programs—all
examples of privacy embedded into design.
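The logging mechanism mentioned above can be sketched as a wrapper that records every read and write of personal information. This is a minimal illustration, not a prescribed design; the audit entry format and actor names are assumptions.

```python
import time

class AuditedRecord:
    """Wrap a dict of personal data so that every access and change is
    captured in an audit trail, sketching the 'system logging' example
    of privacy embedded into design (Principle 3)."""

    def __init__(self, data):
        self._data = dict(data)
        self.audit_log = []  # entries: (timestamp, actor, action, field)

    def get(self, field, actor):
        self.audit_log.append((time.time(), actor, "read", field))
        return self._data[field]

    def set(self, field, value, actor):
        self.audit_log.append((time.time(), actor, "write", field))
        self._data[field] = value
```

A production system would persist the trail to tamper-evident storage rather than an in-memory list.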

Principle 4: Full Functionality—Positive Sum, Not Zero Sum

Understanding the organization’s need to use and protect personal information aids privacy technologists
in designing systems that still allow for desired performance and functionality while protecting information
privacy. Privacy-enhancing technologies are not a trade-off for other parts of a system, but rather a
synergistic win-win relationship.

Principle 5: End-to-End Security—Full Life Cycle Protection

Consideration of personal information at every stage in the data life cycle—collecting, processing, storing,
sharing and destroying—is essential in any system design. By assessing the potential privacy risks
associated with each stage of the information life cycle, appropriate security measures can be evaluated
and implemented to mitigate these risks, and privacy technologists can better manage and secure the
end-to-end life cycle of information.

Principle 6: Visibility and Transparency—Keep it Open

Since the 1970s, providing notice to individuals regarding the use of their personal information has been a
cornerstone of privacy. Information that communicates how the organization uses, shares, stores and
deletes personal information should not be misleading, confusing or obscured. Visibility and transparency
in privacy notices not only helps reduce privacy risks but also allows individuals to make informed
decisions about their own information and gives them a choice when considering whether to use a service
and when deciding what or how much they wish to disclose.

Principle 7: Respect for User Privacy; Keep it User Centric

The individual is the principal beneficiary of privacy and the one affected when it is violated. Privacy
technologists and organizations should keep individuals’ needs, and the risks to them, at the forefront
when developing data ecosystems. Designing for privacy while respecting the best interest of the
individual is imperative in maintaining a balance of power between the individual and the organization that
holds their personal information.

Summary

• The Privacy by Design (PbD) framework is a set of seven principles conceived by Dr. Ann
Cavoukian for the purposes of incorporating privacy into all levels of an organization’s operations.
• Principle 1 (Proactive, Not Reactive; Preventative, Not Remedial) aims to make privacy a
forethought and not an afterthought in any technology system, product, process or service
development.
• Principle 2 (Privacy as the Default Setting) states individuals should not be solely responsible for
protecting their privacy; the default of a technology ecosystem should be that of preserving
individuals’ privacy.
• Principle 3 (Privacy Embedded into Design) suggests that privacy is not only included in the design
of a program but is integral to the design.
• Principle 4 (Full Functionality—Positive Sum, Not Zero Sum) aims for designing systems that
protect information privacy without losing any desired performance or functionality.
• Principle 5 (End-to-End Security—Full Life Cycle Protection) puts forth that through assessment of
potential privacy risks, appropriate security measures can be evaluated and implemented, and in
turn privacy technologists can better secure the end-to-end life cycle of information.
• Principle 6 (Visibility and Transparency—Keep it Open) aims to communicate information to
individuals about how an organization uses, shares, stores or deletes personal information in a
transparent, non-misleading or confusing way.



• Principle 7 (Respect for User Privacy; Keep it User Centric) reminds privacy technologists that the
individual is the principal beneficiary of privacy and the one affected when it is violated.

Review

1. What is the primary purpose of a privacy-by-design (PbD) framework?

To outline the legal and ethical expectations of a robust privacy program


To provide a framework of steps that should be incorporated into the creation of any new design
To provide guidance for proactively incorporating privacy into all levels of operations
To specify the technology and procedures that should be used to ensure personal information is
protected

Privacy risk models and frameworks


Learning objective

• Examine privacy risk models and frameworks

Privacy risk models and frameworks

Managing risk is an integral part of developing reliable software. Types of risks range from programmatic,
as in projecting costs and meeting deadlines, to technical risks that can cause breaches. Risk is defined as
a potential threat or issue, along with the impact the threat or issue could cause, and the likelihood that it
will occur. Essentially: What could go wrong? What are the privacy implications if it does? And how likely
is it to actually happen? Risk levels are commonly assigned a value of low, medium or high.
Identifying risks early can assist with the development of specific administrative, operational and technical
measures to manage these risks. Analysts make use of privacy risk models to help them identify and align
threats with the system’s vulnerabilities to mitigate and plan for these risks. Management options can
include: accepting the risk as is; transferring the risk to another entity; mitigating the risk by applying an
appropriate control or design change; or avoiding the risk via abandoning a functionality, data or the
system itself.
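The likelihood-and-impact scoring and the four management options described above can be sketched as follows. The 1–3 numeric scale and the thresholds are illustrative assumptions; real risk models define their own scales.

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Combine likelihood and impact scores (each 1-3, assumed scale)
    into a low/medium/high rating, as commonly used in risk models."""
    score = likelihood * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# The four classic options for managing an identified risk.
MANAGEMENT_OPTIONS = ("accept", "transfer", "mitigate", "avoid")
```

A likely-and-severe threat (3 × 3) rates "high" and would typically call for mitigation or avoidance, while an unlikely, minor one (1 × 1) might simply be accepted.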

Privacy risk management is an evolving field; however, there are several models available to privacy risk
analysts that may be used as assessment tools. We will begin with the most frequently used and long-
standing models and also explore other common options that technology professionals may choose to
incorporate into their projects.

Legal compliance (1)

Statutory and regulatory mandates prescribe aspects of privacy risk models and frameworks for systems
that handle personal information. This includes the type of data collected, what the system does with that
data, and how the data is protected, stored and disposed of.

What compliance regulations impact your organization’s data handling procedures?

Select here for an example of a compliance regulation that might impact an organization’s data handling
procedures.

“The data subject shall have the right to receive the personal data concerning him or her, which he
or she has provided to a controller, in a structured, commonly used and machine-readable format
and have the right to transmit those data to another controller without hindrance from the
controller to which the personal data have been provided …” (GDPR Article 20.1 – Right to data
portability)

Legal compliance (2)



To ensure compliance, both business process and system owners must understand the specific obligations
and prohibitions their organizations are subject to and must work with their system design teams to relay
those requirements, as well as identify and address any threats and vulnerabilities associated with the
technologies that will be used.

Fair Information Practice Principles (1)

Fair Information Practice Principles (also referred to as FIPPs) are a set of long-standing privacy values
that exist in various forms globally. FIPPs work alongside compliance models to mandate: notice, choice,
and consent; access to information; controls on information; and how information is managed. Many
organizations around the world have adopted the FIPPs in their privacy risk management
recommendations.

FIPPs are a high-level abstraction of privacy compared to legal and policy structures that are more
specific. How the FIPPs are addressed varies based on the nature of the system, product, service or
process. Interpretation is necessary to determine how they should be applied when designing, building
and operating a system.

Select “Next” to continue learning about FIPPs.

Fair Information Practice Principles (2)

One common principle is to restrict the collection, use and sharing of information to only that which is
necessary to meet the purpose of a system. With that in mind, system interfaces should be designed so
that the only data elements that pass between systems are those relevant to the purpose for sharing. For
example, when a medical provider and a payment processor need to share information for billing
purposes, they may need to share an individual’s name and mailing address, but not the doctor’s notes
from the patient visit.
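The billing example above amounts to a per-purpose allowlist at the system interface. A minimal sketch, with hypothetical purpose and field names:

```python
# Hypothetical per-purpose allowlists: only fields relevant to the
# stated purpose may cross the system boundary.
ALLOWED_FIELDS = {
    "billing": {"name", "mailing_address", "amount_due"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields the given purpose permits, dropping
    everything else (e.g., clinical notes never reach the payment
    processor). Unknown purposes share nothing by default."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Defaulting unknown purposes to an empty set keeps the interface fail-closed, in keeping with the collection-limitation principle.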

Nissenbaum’s “contextual integrity” (1)

Recall the example discussed earlier in this module. When visiting a healthcare professional, an individual
understands that the information shared will be used only in the healthcare context (for treatment or
payment processing, for example). As stated earlier, Helen Nissenbaum defines this as contextual
integrity, which is the maintaining of personal information in alignment with the informational norms that
apply to a particular context. These norms are generally domain-specific, such as banking or medical.

Consider the following diagram to illustrate the concept of contextual integrity.

Actors: The senders and receivers of personal information

Attributes: The types of information being shared

Transmission principles: The principles that govern the flow of information
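The actors, attributes and transmission principles above can be modeled as a set of permitted flows. This is a sketch only; the example norms for the healthcare context are invented for illustration, not drawn from Nissenbaum's work.

```python
# Informational norms of a context, modeled as permitted
# (sender, receiver, attribute) flows. These triples are assumptions
# made up for this example.
HEALTHCARE_NORMS = {
    ("patient", "doctor", "symptoms"),
    ("doctor", "specialist", "x_ray"),
    ("doctor", "payment_processor", "billing_address"),
}

def respects_context(sender, receiver, attribute, norms):
    """A flow preserves contextual integrity only if it matches a norm
    of the context; any other flow is a potential privacy problem."""
    return (sender, receiver, attribute) in norms
```

Under these assumed norms, a doctor sharing an x-ray with a specialist is fine, while the same x-ray flowing to a marketer would violate contextual integrity.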

Nissenbaum’s “contextual integrity” (2)

Use the space to describe a situation in which contextual integrity would apply; then select “Submit” to
review an example.

Example: A patient visits a doctor with complaints (actors) and an x-ray is taken to determine the cause
of their discomfort (attribute). The doctor shares results with a specialist to determine a course of action
(transmission).

Nissenbaum’s “contextual integrity” (3)

When disruptions from the informational norms occur, privacy problems arise.

Using the example provided previously, if the doctor were to communicate treatment options via postal
mail, to either the patient’s home or work address, it could cause potential risks to privacy and the norms



that govern a patient-doctor relationship, since the mail from the specialist could give away information
about the type of ailment the individual may have (for example, if the envelope showed a return address
for a cancer center).

One of the challenges for privacy technologists when considering context is that these norms do not
generally have a preexisting reference point for privacy risks. Privacy technologists must work with
organizations to identify relevant, existing norms and then determine how a system may disrupt those
norms. Interpreting and designing for vulnerabilities is particularly crucial when new technology is
introduced or when existing programs and practices are modified.

Calo’s harms dimensions (1)

Ryan Calo is an associate professor of law at the University of Washington whose areas of expertise
include cyberlaw, privacy and robotics. In an article for the Indiana Law Journal, he identified two
dimensions of privacy harm: objective and subjective. Objective harm occurs when privacy has been
violated and direct harm is known to exist. It involves the forced or unanticipated use of personal
information and is generally measurable and observable. Subjective harm exists when an individual
expects or perceives harm, even if the harm is not observable or measurable. An individual’s perception of
privacy invasion can cause fear, anxiety and even embarrassment. The relationship between subjective
and objective harms is analogous to the legal relationship between assault and battery. Assault is the
threat of unwanted physical contact, while battery is the experience of unwanted physical contact.
Similarly, subjective privacy harms amount to discomfort and other negative feelings, while objective
privacy harms involve actual adverse consequences.

Select the button for an example of these dimensions.

Consider a hypothetical situation where there was a large breach of personal financial information.
Those individuals whose identities were stolen or whose credit was damaged by hackers are victims
of objective harm (direct harm is known to exist). However, the individuals who did not
experience a direct harm (there is no evidence that their personal information was lost or used by
hackers) might still experience subjective harm due to their concern that they might have been
impacted by the breach or because of the amount of time and money spent freezing their credit
accounts and paying for credit monitoring.

Calo’s harms dimensions (2)

Subjective harm impacts individuals on a psychological and behavioral level, while objective harms can
result in loss of business opportunity, consumer trust or even social detriment to the individual. Even just
the fear of harm can be enough to limit an individual’s sense of freedom and may impact their decision to
use an organization’s system as a result. Since both types of harm have an impact on one’s privacy,
individuals may take similar steps to protect their information.

To assess the potential for subjective and objective harm, a privacy technologist may examine elements of
the system that relate to individuals’ expectations of how their information may be used, actual usage—
including surveillance or tracking—and consent or lack thereof to the collection and use of that
information. Clear privacy notices and controls can and should be used to build and retain individuals’
trust.

NIST frameworks

The National Institute of Standards and Technology (NIST) provides standards, guidelines and best
practices for managing cybersecurity-related risks, including the Risk Management Framework, the
Cybersecurity Framework and the Privacy Framework. The NIST Privacy Framework is a voluntary risk
management tool alongside the NIST Cybersecurity Framework. The NIST Privacy Framework is intended
to assist organizations in communicating and organizing privacy risk, as well as rationalizing privacy to
build or evaluate a privacy governance program.

NICE Framework

The National Initiative for Cybersecurity Education’s Cybersecurity Workforce Framework (NICE
Framework) is a nationally focused resource published by NIST, which categorizes and describes
cybersecurity work. The NICE Framework establishes common terminology to describe cybersecurity work
and is intended to be applied in all sectors (public, private and academic).

Factor Analysis of Information Risk (FAIR)

The Factor Analysis of Information Risk (FAIR) model breaks down risk into its constituent parts, then
further breaks down those parts to find factors that estimate the overall risk. The goal is not to completely
eliminate risk, but rather to build a logical and defensible range of potential risk. FAIR constructs a basic
framework that breaks risk into the frequency of loss events and the magnitude of loss. It asks: how often
will a violation occur, and over what period of time? And what impact will that violation have?
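The frequency-times-magnitude idea can be illustrated with a small simulation that produces a defensible range rather than a single number. This is a hedged sketch, not the FAIR method itself: the triangular distributions, parameter values and percentile choices below are invented for illustration.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_annual_loss(trials=10_000):
    """Estimate a range of annual loss by sampling a loss event
    frequency (events per year) and a loss magnitude (cost per
    event), then combining them across many trials."""
    losses = []
    for _ in range(trials):
        frequency = random.triangular(0, 12, 2)               # low, high, mode
        magnitude = random.triangular(1_000, 250_000, 20_000)  # cost per event
        losses.append(frequency * magnitude)
    losses.sort()
    # Report a range (10th to 90th percentile), not a point estimate.
    return losses[trials // 10], losses[trials - trials // 10]

low, high = simulate_annual_loss()
print(f"Likely annual loss: ${low:,.0f} to ${high:,.0f}")
```

Reporting a percentile range reflects FAIR's goal of building a logical, defensible range of potential risk rather than pretending to a precise figure.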

Perspectives: Applying privacy frameworks in the real world

Jonathan Cantor, CIPP/US, CIPP/G, Deputy Chief Privacy Officer, Senior Vice President, Truist

One thing privacy delivers—there’s lots of acronyms and you certainly learn to speak in tongues and
people look at you with strange faces. But one of the things I want to caution people against in this
context is, rarely will you ever find yourself in a situation to say, “Oh, I’m going to use Nissenbaum, and
only Nissenbaum, in this role.”

That’s not how real life works. What you will find is, “Huh. The NIST way of thinking about this, with a
little bit of that contextual way of thinking that Nissenbaum presents, where I’m thinking about harm, sort
of, in context, as opposed to thinking about it so rigidly.” Or grasping onto that concept that Calo talks
about in his framework, about, “Oh, you know, this is a, for lack of a better term, a measurable harm,
against this sort of harm that is only in the eye of the beholder,” is just a useful way of thinking about it.
And a useful way of analyzing and explaining a problem. And you will often find that mooshing them
together like that will help you and your business partners think through how the system will work for
your system, will work for your business, will work for your partner.

Because it’s almost impossible to say, “This is the only way.” The only time I’ve ever really seen that was
in my own time in government where NIST was THE rule. And even there, it’s not like these other
frameworks aren’t also available to you, it’s just NIST was the default rule. So, you can always think about
all these things together, but it’s important to keep in mind that not one of these things exists to the
exclusion of the others. They’re all there to use them as a tool to help you operate in context.

As always, I will always say as a privacy pro, underlying all of these things is real value in understanding
how the privacy principles—those Fair Information Practice Principles—work through these things. These
are ways of taking those core principles that are the center of privacy and the center of privacy law and
operationalizing them in a way of making them work through a business system and making them work.
But you still need it all. And don’t over-insist on following one set of rules to the exclusion of others. Use
them all for what they’ll work for, and when another one doesn’t work, go to the other one.

It’s also, I will tell you, while you may need to know it here, you may not, in real life—no one will ever ask
you, “Oh, is that Nissenbaum?” or, “Oh, is that Calo?” That is not where your life is going to be. People will
want to know how the thought process works. I can tell you from my own experience here in real life,
people, when they’re looking at privacy through that risk lens and they want to understand how things
happen in real life, when they’re looking at a breach, or they want to understand how to approach a new
large investment in a data system, and they want to eyeball, like, where are those risks and where are my
key focal points, they’re going to want to understand, and you say, “Oh, well there’s a real risk of harm.”

They’re going to want to understand what you mean by that before they make decisions that cost dollars
and cents or simply abandon something that’s been planned for a while.

And if you’re able to say to them, “Here’s where the harms are,” and in context to this and you’re sort of
leaning on these—nobody cares that it was Helen Nissenbaum that came up with the concept of contextual
harm. What they care about is that you’re able to articulate the problem in a meaningful way that makes
sense to others. So, keep that one in mind.

Summary

• Managing risk is an important part of developing software and there are several models and
frameworks available to privacy technologists to be used as assessment tools.
• Fair Information Practice Principles (also referred to as FIPPs) work alongside compliance models to
mandate: notice, choice, and consent; access to information; controls on information; and how
information is managed.
• Helen Nissenbaum defines contextual integrity as maintaining personal information in alignment
with the informational norms that apply to a particular context, such as domain-specific norms
(e.g., banking or medical).
• Ryan Calo identified two dimensions of privacy harm: objective and subjective. Objective harm
occurs when privacy has been violated and direct harm is known to exist. Subjective harm exists
when an individual expects or perceives harm, even if the harm is not observable or measurable.
• The National Institute of Standards and Technology (NIST) provides standards, guidelines and best
practices for managing cybersecurity-related risks, including the Risk Management Framework, the
Cybersecurity Framework and the Privacy Framework.
• The National Initiative for Cybersecurity Education’s Cybersecurity Workforce Framework (NICE
Framework) is a nationally focused resource published by NIST, which categorizes and describes
cybersecurity work.
• The Factor Analysis of Information Risk (FAIR) model breaks down risk into its constituent parts,
then further breaks down those parts to find factors that estimate the overall risk.

Review

1. What is the difference between objective harms and subjective harms?

Only objective harms impact an individual’s decision to use a software program
Objective harms are measurable and observable; subjective harms are only expected or perceived
by the individual
Objective harms are the primary type of harm that should be considered when determining
whether a privacy harm has occurred
Objective harms impact individuals on a psychological and behavioral level while subjective harms
can result in loss of business opportunities or consumer trust

2. Which privacy risk model or framework is described as maintaining personal information in alignment
with the informational norms that apply to a particular context?

Nissenbaum’s contextual integrity
Calo’s harms dimensions
Value-sensitive design
Privacy by design

Value-sensitive design
Learning objective

• Explain the concepts of value-sensitive design

Value-sensitive design

An important realization when designing privacy interfaces and website user experiences is that privacy
constitutes a value. While privacy is considered important by individuals and society, the desire for privacy
may compete with other values and norms. Privacy preferences, concerns and expectations are not
uniform but rather context-specific, user-specific, malleable and often difficult to define.

Value-sensitive design is a design approach that accounts for moral and ethical values and should be
considered when assessing the overall “value” of a design. In addition to privacy, these values might
include things such as trust, fairness, informed consent, courtesy or freedom from bias. Value-sensitive
design methods help to systematically assess the values at play in relation to specific technologies and
respective stakeholders. It then assesses how the technology might meet or violate those values and
strives to iteratively develop designs that are sensitive to and respectful of those values. The goal of
value-sensitive design is that stakeholders should see their values reflected in the final design.

How design affects users

Value-sensitive design emphasizes the ethical values of both direct and indirect stakeholders. Direct
stakeholders are those who directly interact with a system. Indirect stakeholders are any others who are
affected by the system.

For example, a mail order company’s database system might be used by its customer service
representatives and the inventory control, billing, and packing and shipping departments, all of whom
would be considered direct stakeholders. The customers would be indirect stakeholders, even though it is
their personal information that is contained in the database records.

Value-sensitive design is an iterative process which involves conceptual, empirical and technical
investigations.

Select each type of investigation to learn more.

Conceptual. The conceptual investigation identifies the direct and indirect stakeholders, attempts
to establish what those stakeholders might value, and determines how those stakeholders may be
affected by the design.

Empirical. The empirical investigation focuses on how stakeholders configure, use or are otherwise
affected by the technology.

Technical. The technical investigation examines how the existing technology supports or hinders
human values and how the technology might be designed to support the values identified in the
conceptual investigation.

Value-sensitive design methods (1)

Value-sensitive design focuses not just on the design of technology but also on the co-evolution of
technologies and social structures. In the case of privacy, this means considering the interplay of
technological solutions, regulatory solutions and organizational solutions when trying to resolve identified
value tensions.

Value-sensitive design methods (2)

In their book, A Survey of Value Sensitive Design Methods, Batya Friedman, David Hendry and Alan
Borning have identified 14 targeted design methods for engaging values in the context of technology.
Select “Next” to learn more.

Direct and indirect stakeholder analysis: Direct and indirect stakeholders, as well as any
potential benefits, harms or tensions that may affect them, are identified.

Value source analysis: Project, designer and stakeholder values are assessed and the ways in
which each group’s values may be in conflict are considered.

The co-evolution of technology and social structure: Strives to engage both technology and
social structure in the design space with a goal of identifying new solutions that might not be
apparent when considering either alone.

Value scenarios: Used to generate narratives, or scenarios, to identify, communicate or illustrate
the impact of design choices on stakeholders and their values.

Value sketches: Make use of sketches, collages or other visual aids to elicit values from
stakeholders.

Value-oriented semi-structured interviews: Use interview questions to elicit information about
values and value tensions.

Scalable information dimensions: A values-elicitation method that uses questions to determine
the scalable dimensions of information such as proximity, pervasiveness or granularity of
information.

Value-oriented coding manuals: Used to code and then analyze qualitative information
gathered through one of the other methods.

Value-oriented mock-ups, prototypes, or field deployments: Can be used to elicit feedback
on potential solutions or features of new technologies or systems that are still in development.

Ethnographically-informed inquiries regarding values and technology: Examine the
relationships between values, technology and social structures as they evolve over time.

The model of informed consent online: Provides design principles and a value-analysis method
for considering informed consent in online contexts.

Value dams and flows: Ways of both identifying design options that are unacceptable to most
stakeholders (the value “dams”) and removing them from the design space, while also identifying
value “flows,” which are those design options that are liked by most stakeholders.

The value-sensitive action reflection model: Uses prompts to encourage stakeholders to
generate or reflect on design ideas.

Envisioning Cards™: A set of cards developed by Friedman and her colleagues, which can be
used to facilitate many of the other methods.
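Of the methods above, value dams and flows lends itself to a simple quantitative sketch. The thresholds and survey figures below are hypothetical, chosen only to illustrate the idea of removing widely objectionable options while surfacing widely liked ones:

```python
def classify_options(options, dam_threshold=0.10, flow_threshold=0.70):
    """Partition design options into value dams (objected to by more
    than dam_threshold of stakeholders, so removed from the design
    space) and value flows (liked by at least flow_threshold)."""
    dams = [name for name, (objecting, _) in options.items()
            if objecting > dam_threshold]
    flows = [name for name, (objecting, liking) in options.items()
             if objecting <= dam_threshold and liking >= flow_threshold]
    return dams, flows

# Hypothetical survey results: option -> (fraction objecting, fraction liking)
survey = {
    "always-on location sharing": (0.40, 0.30),
    "opt-in activity digest":     (0.05, 0.80),
    "public profile by default":  (0.25, 0.55),
}

dams, flows = classify_options(survey)
print(dams)   # ['always-on location sharing', 'public profile by default']
print(flows)  # ['opt-in activity digest']
```

Even a modest fraction of stakeholders objecting can mark an option as a dam, while flows require broad approval, which is why the two thresholds differ.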

Strategies for skillful practice

In addition to the 14 value-sensitive design methods previously discussed, Friedman and her colleagues
also suggest several strategies for the practice of these methods. Many of these strategies are also
relevant in privacy impact assessments, as well as in user experience research, and design in general. The
difference is that value-sensitive design places not only people and their needs at the center of the design
process, but also the values that are important to them.

Select below to explore each strategy in more depth.

Clarify project values. Establish what values a project and the project team will strive to support.
What do privacy, informed consent, transparency and other privacy-related values mean for this
project and team?
Identify direct and indirect stakeholders. A value-sensitive approach to stakeholder analysis
aims to identify both direct and indirect stakeholders. Privacy needs and expectations may vary
based on stakeholders’ characteristics and group identities, and individuals may be part of multiple
stakeholder groups.
Identify benefits and harms for stakeholders. Benefits and harms should be considered on
individual, societal and environmental levels. In the course of investigations, a simple but
illuminating practice is to ask why when people express positive or negative sentiment toward a
system or design in order to more deeply understand their reasoning and motivations or concerns.
Identify and elicit potential values. Benefits and harms that have already been identified are a
starting point for identifying corresponding values. The mapping of benefits and harms to
corresponding values can be straightforward (as in an unanticipated data-sharing practice that
affects privacy), or indirect (for instance, if additional effects due to surveillance practices curtail
people’s self-expression).
Develop working definitions of key values. Define what constitutes a specific value and spell
out the components that make up the value. For instance, informed consent is composed of, on
one hand, discovery, processing and comprehension of information; and, on the other hand,
voluntariness, competence and agreement.
Identify potential value tensions. Values do not exist in isolation, and they frequently conflict
with each other as well as with other requirements. However, value tensions rarely pose binary
trade-offs (such as the mistaken belief that you can have security or privacy, but not both), but
rather put constraints on potential designs (for example, posing the challenge of how security
requirements can be satisfied while also respecting privacy requirements).

The Design Thinking process

When considering value-sensitive design methods, it is important to indicate the relevance to the “Design
Thinking process.” The Design Thinking process has five phases: Empathize, Define, Ideate, Prototype and
Test, and it also follows an iterative approach. Combining the value-sensitive design methods with a
process such as this is important to understanding the integration of values with current system design
methodologies.
Summary

• Value-sensitive design is a design approach that accounts for moral and ethical values and should
be considered when assessing the overall “value” of a design.
• Value-sensitive design emphasizes the ethical values of both direct and indirect stakeholders.
Direct stakeholders are those who directly interact with a system. Indirect stakeholders are any
others who are affected by the system.
• Value-sensitive design focuses not just on the design of technology but also on the co-evolution of
technologies and social structures.
• When considering value-sensitive design methods, it is important to indicate the relevance to the
“Design Thinking process.”
• The Design Thinking process has five phases: Empathize, Define, Ideate, Prototype and Test, and it
also follows an iterative approach.
• Combining the value-sensitive design methods with the Design Thinking process is important to
understanding the integration of values with current system design methodologies.

Review

1. What is value-sensitive design?

An iterative design process in which designers focus on the users and their needs in each phase
of the design process
An iterative investigative approach to design that takes human values into account during the
design process
A design process with a focus on the potential return on investment (monetary value) of each
design feature
An investigative process intended to establish the ROI for each potential design option

Review answers

The data life cycle


1. Collection
2. Used only for the purposes for which it was collected; Destroyed in accordance with
organizational guidelines
3. Repurposing
Foundational principles of privacy by design
1. To provide guidance for proactively incorporating privacy into all levels of operations
Privacy risk models and frameworks
1. Objective harms are measurable and observable; subjective harms are only expected or
perceived by the individual
2. Nissenbaum’s contextual integrity
Value-sensitive design
1. An iterative investigative approach to design that takes human values into account during the
design process

*Quiz questions are intended to help reinforce key topics covered in the module. They are not meant to
represent actual certification exam questions.
