Introduction to Privacy Enhancing Technologies: A Classification-Based Approach to Understanding PETs, 1st Edition, Carlisle Adams
Carlisle Adams
This work is subject to copyright. All rights are solely and exclusively
licensed by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in
any other physical way, and transmission or information storage and
retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The publisher, the authors and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the
material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer
Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham,
Switzerland
I am happy to dedicate this book to my lovely wife Marion, who not only
provided endless encouragement throughout this process, but also
tangibly helped (in countless big and little ways) to vastly improve this
book and bring it to completion. Thank you so much for everything!
Preface
This book has been refined and developed in a graduate course that I
have taught 15 times at the University of Ottawa since 2004; the
classification-based approach that it uses was first published as a
research paper in 2006. Like a number of my colleagues around the
world, I have for a long time been somewhat frustrated by the absence
of a textbook on PETs that I could use for my course. While it is
appropriate and reasonable in a graduate course to point students to a
set of foundational and recent academic papers on the topic of
discussion, I have nevertheless felt that it would be useful to have a
single text that can serve as a general introduction to the field.
Eventually, since no one else seemed to be doing this (and with the
unexpected coincidence of a sabbatical year and a global pandemic
removing all other options for things to distract me!), I decided to put
this book together.
Consequently, the primary target audience of this book is graduate
and upper-year undergraduate students in computer science and
software engineering with an interest in technologies that protect and
enhance privacy in online environments. However, my hope is that
students, researchers, practitioners, and interested parties in many
other fields and at many different levels will find this book instructive
and helpful. Although privacy is a much bigger field than just PETs,
privacy advocates in all areas can benefit from understanding the role
that PETs can play in guarding some aspects of privacy in our online
interactions.
In terms of organization, I feel that it is essential to begin with the
first two chapters, which motivate the need for PETs and describe the
classification-based approach that this book uses (the privacy tree).
The six subsequent chapters (on various example PETs in each of the
six leaves of the tree) are mostly standalone and can be read in any
order. Chapter 9 discusses how to use the privacy tree in practice, and
Chap. 10 maps out a path forward, and so it is fitting that they should
be read after a basic understanding of PETs and the privacy tree is in
place from the previous eight chapters. For readers who are new to the
tools and techniques of cryptography, it will be helpful to read Chap. 11,
“Crypto Primer” prior to Chaps. 3, 4, 5, 6, 7, and 8; others may skip this
chapter or simply use it to see a description of a particular unfamiliar
term or concept. Finally, students and researchers in the field may find
Chap. 12 valuable: this chapter collects together all the references at
the end of the individual chapters throughout the book, adds a
hyperlink to each reference (for easy retrieval), and provides a (short)
bibliography of recommended reading in privacy.
I would like to gratefully acknowledge the academic system in which
we operate: without a year-long sabbatical in which to focus, it would
have been far more difficult to complete this book in a reasonable time!
I am extremely grateful, too, for the comments and feedback I received
from several reviewers on an initial draft of this manuscript; my many
thanks go to Mozhgan Nasr Azadani, Ian Goldberg, Marion Rodrigues,
and Paul Syverson. Their careful reading and attention to detail
significantly improved the clarity, coverage, and correctness of the text
in many places. (But note, of course, that any errors remaining in the
book are entirely due to the author, not the reviewers!) Finally, I would
like to thank the many friends and relatives who encouraged and
supported me in this project, as well as Susan Lagerstrom-Fife and the
terrific team at Springer who brought this book to publication with
dedication, efficiency, and professionalism.
Carlisle Adams
Ottawa, ON, Canada
Contents
1 The Privacy Minefield
1.1 Threats to Privacy
1.2 The Battle for Supremacy Over Personal Data
1.2.1 Controlling the Goods and Services Available to Alice
1.2.2 Controlling the Choices That Alice (Apparently Freely)
Makes
1.2.3 Controlling What Alice Can Say, What She Can Do, and
Where She Can Go
1.3 High-Stakes Hide-and-Seek
1.4 Summary
References
2 A Collection of Tools: The Privacy Tree
2.1 Many Privacy Enhancing Technologies
2.2 Classification (Privacy Tree)
2.3 Previous Work on Classifications for Privacy
2.3.1 Antón et al. (2002)
2.3.2 Rezgui et al. (2003)
2.3.3 Skinner et al. (2006)
2.3.4 Adams (2006)
2.3.5 Solove (2008)
2.3.6 Nissenbaum (2009)
2.3.7 Pfitzmann and Hansen (2010)
2.3.8 Heurix et al. (2015)
2.3.9 OPC (2017)
2.4 The Selected Privacy Tree
2.5 The Remainder of This Book
References
3 Limiting Exposure by Hiding the Identity
3.1 Mix Network
3.1.1 The Basic Scheme
3.1.2 Enhancements
3.1.3 Strengths
3.1.4 Disadvantages, Limitations, and Weaknesses
3.2 Anonymous Remailer
3.2.1 The Basic Scheme
3.2.2 Enhancements
3.2.3 Strengths
3.2.4 Disadvantages, Limitations, and Weaknesses
3.3 Onion Routing and Tor
3.3.1 The Basic Scheme
3.3.2 Enhancements
3.3.3 Strengths
3.3.4 Disadvantages, Limitations, and Weaknesses
3.4 Summary
References
4 Limiting Exposure by Hiding the Action
4.1 Transport Layer Security (SSL/TLS)
4.1.1 The Basic Scheme
4.1.2 Enhancements
4.1.3 Strengths
4.1.4 Disadvantages, Limitations, and Weaknesses
4.2 Network Layer Security (IPsec in Transport Mode)
4.2.1 The Basic Scheme
4.2.2 Enhancements
4.2.3 Strengths
4.2.4 Disadvantages, Limitations, and Weaknesses
4.3 Private Information Retrieval (PIR)
4.3.1 The Basic Scheme
4.3.2 Enhancements
4.3.3 Strengths
4.3.4 Disadvantages, Limitations, and Weaknesses
4.4 Summary
References
5 Limiting Exposure by Hiding the Identity-Action Pair
5.1 Network Layer Security (IPsec in Tunnel Mode)
5.1.1 The Basic Scheme
5.1.2 Enhancements
5.1.3 Strengths
5.1.4 Disadvantages, Limitations, and Weaknesses
5.2 Off-the-Record (OTR) Messaging
5.2.1 The Basic Scheme
5.2.2 Enhancements
5.2.3 Strengths
5.2.4 Disadvantages, Limitations, and Weaknesses
5.3 Summary
References
6 Limiting Disclosure by Hiding the Identity
6.1 k-Anonymity
6.1.1 The Basic Scheme
6.1.2 Enhancements
6.1.3 Strengths
6.1.4 Disadvantages, Limitations, and Weaknesses
6.2 Credential Systems
6.2.1 The Basic Scheme
6.2.2 Enhancements
6.2.3 Strengths
6.2.4 Disadvantages, Limitations, and Weaknesses
6.3 Summary
References
7 Limiting Disclosure by Hiding the Attribute
7.1 Database Protection Approaches
7.1.1 The Basic Scheme
7.1.2 Enhancements
7.1.3 Strengths
7.1.4 Disadvantages, Limitations, and Weaknesses
7.2 Multi-Party Computation
7.2.1 The Basic Scheme
7.2.2 Enhancements
7.2.3 Strengths
7.2.4 Disadvantages, Limitations, and Weaknesses
7.3 ε-Differential Privacy
7.3.1 The Basic Scheme
7.3.2 Enhancements
7.3.3 Strengths
7.3.4 Disadvantages, Limitations, and Weaknesses
7.4 Summary
References
8 Limiting Disclosure by Hiding the Identity-Attribute Pair
8.1 Hippocratic Databases (HDB)
8.1.1 The Basic Scheme
8.1.2 Enhancements
8.1.3 Strengths
8.1.4 Disadvantages, Limitations, and Weaknesses
8.2 Platform for Privacy Preferences Project (P3P)
8.2.1 The Basic Scheme
8.2.2 Enhancements
8.2.3 Strengths
8.2.4 Disadvantages, Limitations, and Weaknesses
8.3 Architecture for Privacy Enforcement Using XML (APEX)
8.3.1 The Basic Scheme
8.3.2 Enhancements
8.3.3 Strengths
8.3.4 Disadvantages, Limitations, and Weaknesses
8.4 Credential Systems Showing Properties of Attributes
8.4.1 The Basic Scheme
8.4.2 Enhancements
8.4.3 Strengths
8.4.4 Disadvantages, Limitations, and Weaknesses
8.5 Summary
References
9 Using the Privacy Tree in Practice
9.1 In Conjunction with Security Technologies
9.2 In Conjunction with the Legal Infrastructure
9.3 In Conjunction with Other Technologies
9.3.1 Software Defined Networking (SDN)
9.3.2 Machine Learning (ML)
9.4 Summary
References
10 The Path Forward
10.1 The First Step: Decisions
10.1.1 Hide-and-Seek (Revisited)
10.1.2 Defense-in-Depth
10.2 The Next Step: Actions
10.3 Summary
References
Part I Supplemental Chapters
11 Crypto Primer
11.1 Terminology
11.2 Goals: Confidentiality, Integrity, Authenticity,
Cryptographic Strength
11.3 Realizing These Goals in Practice
11.3.1 Confidentiality
11.3.2 Integrity
11.3.3 Authenticity
11.3.4 Cryptographic Strength
11.4 Summary
References
12 Source Material
12.1 Full List of Hyperlinked References
12.2 Hyperlinked Bibliography
12.3 Further Reading on Selected Topics
Glossary
Index
About the Author
Carlisle Adams
is a professor in the School of Electrical Engineering and Computer
Science (EECS) at the University of Ottawa. Prior to his academic
appointment in 2003, he worked for 13 years in industry (Nortel,
Entrust), in the design and international standardization of a variety of
cryptographic and security technologies for the Internet. Dr. Adams’
research interests include all aspects of applied cryptography and
security. Particular areas of interest and technical contributions include
the design and analysis of symmetric encryption algorithms (including
the CAST family of symmetric ciphers), the design of large-scale
infrastructures for authentication (including secure protocols for
authentication and certificate management in Public Key Infrastructure
(PKI) environments), and comprehensive architectures and policy
languages for access control in electronic networks (including X.509
attribute certificates and the XACML policy language).
Dr. Adams has maintained a long-standing interest in the creation of
effective techniques to preserve and enhance privacy on the Internet.
His contributions in this area include techniques to add delegation,
non-transferability, and multi-show to digital credentials, architectures
to enforce privacy in web-browsing environments, and mechanisms to
add privacy to location-based services and blockchains. He was co-chair
of the international conference Selected Areas in Cryptography (1997,
1999, 2007, and 2017), and was general chair of the 7th International
Privacy Enhancing Technologies Symposium (2007).
He lives in Ottawa with his wife and children and enjoys music, good
food, and classic movies (old and new).
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer
Nature Switzerland AG 2021
C. Adams, Introduction to Privacy Enhancing Technologies
https://doi.org/10.1007/978-3-030-81043-6_1
Abstract
This chapter discusses why privacy enhancing technologies (PETs) exist
and why they are important. It provides a definition of privacy that will
be assumed throughout this book and describes a number of threats to
privacy in our online world. Finally, it suggests that we are in a battle
for supremacy over our personal data and argues that unless we
increase our awareness and employ suitable tools, there can be serious
consequences for our personal lives.
Privacy
What is privacy? Unfortunately, privacy is one of those frustratingly
“slippery” words: everyone seems confident that they know intuitively
what it means, but formal definitions inevitably end up being
incomplete and unsatisfying. The reason for this is that privacy is multi-
faceted; it means different things to different people, and its meaning
depends heavily on circumstances and on the participants in a given
situation. Daniel Solove, in his 2008 book Understanding Privacy
(2008), writes “Currently, privacy is a sweeping concept, encompassing
(among other things) freedom of thought, control over one’s body,
solitude in one’s home, control over personal information, freedom
from surveillance, protection of one’s reputation, and protection from
searches and interrogation.” He further quotes various scholars who
have claimed that privacy is “difficult to define because it is
exasperatingly vague and evanescent”, that it is “infected with
pernicious ambiguities”, and that it is “complex” and “entangled in
competing and contradictory dimensions.”
Nevertheless, it seems self-evident that having a working definition
of privacy is a necessary precursor to being able to comprehend and
reason about technologies that are designed to enhance privacy!
Solove’s book has the goal of developing “a new understanding of
privacy that strives to account for privacy’s breadth and complexities
without dissipating into vagueness.” He does a very thorough and
impressive job of building a definition (ultimately, a taxonomy; see Sect.
2.3 of Chap. 2) of privacy from a comprehensive collection of activities
that pose privacy problems (i.e., activities that impinge upon privacy
and serve as socially recognized privacy violations). Although this
privacy definition is expected to be very helpful for creating law and
policy to address privacy issues, it is not suitable for analyzing and
comparing different PETs because it was constructed exclusively from a
legal perspective.
Among the many technical privacy definitions in the literature, one
of the more interesting is the proposal by Ken Barker et al. in 2009
(2009). This proposal treats privacy as a 4-dimensional space, where
the dimensions are Purpose (with values of Single, Reuse Same, Reuse
Selected, Reuse Any, and Any), Visibility (with values Owner, House,
Third Party, and All/World), Granularity (with values Existential,
Partial, and Specific), and Retention (with values Unspecified and Date).
A given technology can then be described as a particular point in this
space, allowing different technologies to be usefully compared and
contrasted. Although this definition is very helpful when considering
privacy in the context of data repositories (which is the environment
for which it was designed), it is not applicable to other types of PETs,
such as anonymous communications or online proof of attribute
ownership.
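As a rough illustration of this idea (the class and value names below are my own sketch, not drawn from Barker et al.'s paper), the four dimensions can be modeled as enumerations, with each technology described as one point in the space so that two technologies can be compared dimension by dimension:

```python
from dataclasses import dataclass
from enum import Enum

# The four dimensions of the privacy space, with the value
# sets listed in the text above.
Purpose = Enum("Purpose", ["SINGLE", "REUSE_SAME", "REUSE_SELECTED", "REUSE_ANY", "ANY"])
Visibility = Enum("Visibility", ["OWNER", "HOUSE", "THIRD_PARTY", "ALL_WORLD"])
Granularity = Enum("Granularity", ["EXISTENTIAL", "PARTIAL", "SPECIFIC"])
Retention = Enum("Retention", ["UNSPECIFIED", "DATE"])

@dataclass(frozen=True)
class PrivacyPoint:
    """A technology described as one point in the 4-dimensional space."""
    purpose: Purpose
    visibility: Visibility
    granularity: Granularity
    retention: Retention

# Two hypothetical data-repository technologies (invented for this
# sketch): one very restrictive, one very permissive.
strict_repo = PrivacyPoint(Purpose.SINGLE, Visibility.OWNER,
                           Granularity.EXISTENTIAL, Retention.DATE)
open_repo = PrivacyPoint(Purpose.ANY, Visibility.ALL_WORLD,
                         Granularity.SPECIFIC, Retention.UNSPECIFIED)

def differing_dimensions(a: PrivacyPoint, b: PrivacyPoint) -> list:
    """List the dimensions on which two technologies differ."""
    return [name for name in ("purpose", "visibility", "granularity", "retention")
            if getattr(a, name) != getattr(b, name)]

print(differing_dimensions(strict_repo, open_repo))
# these two example technologies differ on every dimension
```

Framing each technology as such a point is what makes the comparison systematic: the question "which is more privacy-protective?" becomes a per-dimension comparison rather than an informal judgement.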
We rely, therefore, on the privacy definition that has historically
been the basis for PETs research: an entity’s ability to control how,
when, and to what extent personal information about it is
communicated to others (Westin 1967) (see also (Brands 2000)). This
definition may not be ideally appropriate for some areas of legal
privacy interest (such as freedom of thought, control over one’s body, or
solitude in one’s home), but it does capture the underlying goal of the
vast majority of PETs. Furthermore, while this definition is not as
nuanced as the proposal by Barker et al., it is applicable to many PETs
(not just PETs for data repositories), while still capturing the essential
flavour of the privacy required in a data repository setting.
(Note that our chosen privacy definition , proposed by Alan Westin in
1967, has not been adopted by all PETs. For a given individual, Bob,
Westin’s definition addresses a flow of Bob’s personal information (held
by him or by another holder ) outward to other entities. Another view of
privacy – often referred to as “the right to be left alone” – considers
control over the flow of information into Bob’s personal space (e.g.,
protection from unwanted advertising). There are PETs (such as ad-
blockers) that are designed for this notion of privacy, but we have left
them as out-of-scope for the purposes of this book. This is for three
reasons. First, if the advertisement is generic (i.e., not targeted to or
personalized for Bob in any way), then straightforward ad blocking is
typically difficult and ineffective: it is somewhat analogous to trying to
prevent him from seeing a billboard as he drives along the highway.
Second, if the ad is generic but the ad blocking tool is more advanced
(providing so-called privacy preserving targeted advertising), then the
tool typically uses either naïve or sophisticated private information
retrieval (PIR) techniques that we discuss in Chap. 4. Third, if the ad from
the ad company is not generic but is specifically targeted to Bob, then this
must be based on some of his personal information that the ad company
previously acquired, and that prior privacy violation can be addressed by
the PETs that we do cover.)
1.1 Threats to Privacy
“Alice” (not her real name) is an individual living her life in our modern
society. She is a member of a family; she is an employee of a company;
she is a citizen of a country. Like most people, she spends a significant
amount of time online (working, playing, shopping, learning, banking,
and interacting endlessly with friends and family).
As we have noted, data about Alice gets collected at various places
and in various ways. The following are just a few examples.
– Government (this is sometimes referred to as a “womb-to-tomb
dossier”):
Birth record
Social Insurance Number (SIN), Social Security Number (SSN), or
similar lifelong identifying number
Records about marriages, divorces, and death
Accident reports and traffic citation records
Voting record
Tax record
Record of professional affiliation
Worker’s Compensation record
Public employee personnel record
Street list record
Property tax assessment record
Police arrest record
Court record
Record of jurors in court cases
Records of special civil proceedings (such as bankruptcy)
Divorce court proceedings
Criminal court proceedings
– Companies that Alice deals with
Credit card company
Cell phone company
Banking and financial institutions
Microsoft, Apple, Google, etc.
Social media (Facebook, Twitter, Instagram, etc., etc., etc.)
Grocery stores (especially through store loyalty cards)
– Companies that Alice does not deal with (or does not realize she is
dealing with)
Ad companies
Ad-serving companies (such as DoubleClick (a subsidiary of
Google))
“Information brokering” and data collection companies (such as
ChoicePoint, Acxiom)
Some of this data is collected as a result of normal living (getting
married, acquiring a mortgage or a loan, making a credit card purchase,
and so on). Some is collected as a result of software (her browser giving
information to websites, cookies that are stored and later replayed,
health monitoring wearables that send readings to a cloud service,
spyware installed on her machine). Some comes from Alice willingly
giving her information in return for goods, services, convenience,
discounts, or notoriety (although of course Alice may give this without
a full understanding of what her information may subsequently be used
for). Some comes from court or government actions (such as
subpoenas). Some comes from business dealings outside of Alice’s
control (mergers, acquisitions, amalgamation of online and offline data,
data mining practices to derive greater insight from existing data, and
so on).
Not only is there incessant data collection, but it is also the case that
data never gets deleted. Many years ago, people would occasionally do
a “clean-up” of their files, sorting them into topics or categories, saving
those that were important and discarding the rest. Today, with the ever-
decreasing cost of memory and the ever-increasing volume of data, it is
much simpler (and ultimately much cheaper because so much time is
saved) to simply keep everything. Even if a company makes a
determined effort to delete specific data (in order to comply with
retention requirements or right-to-be-forgotten obligations in privacy
laws, for example), it is impossible to guarantee with absolute certainty
that this data has not been copied any number of times and stored in
any number of locations anywhere in the world. In short, the safest
assumption is always that any piece of data that was ever created still
exists somewhere.
This probably goes without saying, but just to be explicit: Alice’s
data has tangible value. Any entity that stores any data about Alice
treats it as an asset. Every piece of personal information about her is
worth money to someone and can be sold for actual cash (either
individually, or in some kind of collection: “Here is a list of all the people
in this region who own dogs or cats”).
Furthermore, processing, combining, correlating, organizing,
sanitizing, and mining Alice’s data gives it greater value. The processing
of Alice’s data adds meaning and insight; it allows deeper connections
to be made or additional facts to be learned. Ultimately, it increases the
amount of data about her and, additionally, makes her data more
reliable and more actionable. Consequently, it has greater significance
and usefulness, and so it is worth more money.
Finally, it is important to recognize that many entities (including
Alice!) only see the benefits in all this data collection.
– Alice is quite happy to have companies store information about her
because in exchange she can receive personalized service, discounts,
and helpful recommendations. She may also be able to build/expand
personal and professional contacts (for example, on social media or
job-focused networking sites).
– Companies are happy to have Alice’s information because they can
improve their service to her, which may lead to greater customer
loyalty and additional future sales.
– Governments and law enforcement agencies are happy to have
citizen information because it helps them to enhance safety and
security for society (for example, they may be able to find criminals,
detect fraud, dismantle drug distribution rings, locate terrorist cells,
and so on, more quickly and more easily).
The result of these five points (data gets collected, it never gets
deleted, it has value, it can be processed to give it greater value, and
most entities are happy with the status quo) is a set of threats to
privacy .
The threat that leaps most quickly to mind for most people is
targeted advertising . Alice might have a somewhat uncomfortable
feeling that “someone is watching her”. As soon as she does a search for
some item on her search engine, suddenly ads for that item, or very
similar items, start showing up when she uses Facebook, YouTube,
Gmail, and a variety of other platforms. She may find that she starts
getting recommendations for related items in various places (including
her physical mailbox!). This might all be fine if this is an item she is
really interested in, but it starts to feel more like a privacy violation if
she has no interest in this item (for example, she searched for, and
purchased, a CD or book for a friend as a gift, and does not like that
genre of music or writing herself) or if she no longer has interest in this
item (for example, she purchased a washing machine and does not need
another one).
Another privacy threat related to the above is unauthorized
dissemination of personal data . Alice sees that these different
applications and platforms (from apparently different companies) all
seem to know about the search she did, the transaction she completed,
or the data she shared with one of them. Why is her data being given, or
sold, to these third parties without her consent or approval?
Privacy threats that do not seem to spring so readily to mind for
many people are those connected with use of social media. Many
people use social media venues extensively and post a vast amount of
highly personal information on these platforms. This can lead to at least
five kinds of threats to privacy.
– Physical danger. Many people unthinkingly (perhaps naively) post
precise current or future location information on social media.
Something like “Every Tuesday at 6:00 a.m. I go alone to the woods near
my house and just commune with nature. I really need this!” or “I can’t
wait for the party at Derek’s place on Friday night. Of course I’ll have to
walk there because I won’t be in any shape to drive back. ☺” allows any
malefactor to know exactly where Alice will be at a given time (in the
woods by her house, or on a footpath between her place and Derek’s
place). This, of course, opens her up to a host of potential physical
attacks on her person.
– Risk to personal security . The opposite of the previous threat is just
as dangerous: if the malefactor knows where Alice is, then he also
knows where Alice is not. If Alice is at Derek’s place on Friday night,
then she is clearly not at her own house. This knowledge can lead to
a variety of potential attacks related to Alice’s personal security,
rather than to her own physical danger (such as home break-in,
kidnapping of her child, car theft, and so on).
– Identity theft . Another major risk for Alice is identity theft. Many
people put narratives about themselves on social media (for
example, in a blog post) and reveal many things about their
upbringing, family, hobbies, pets, interests, heroes, favourite
books/movies/characters/songs, and so on. Very often, this type of
information is used in the creation of passwords and in the
responses to identity verification questions (e.g., mother’s maiden
name, or last three residential addresses). Consequently, someone
familiar with this narrative may have a much easier time stealing
Alice’s identity and using it to purchase a large item or take out a
loan.
– Loss of data ownership . When Alice posts data, pictures, and videos
on social media, she may not realize that she might no longer own
these items. It is essential to read the privacy policies of these
various platforms because some of them explicitly take ownership of
anything posted on their site, meaning that they can then do
anything that they wish with this information. Alice will have no legal
recourse to stop them from using her information because she
consented to their Terms of Use (likely without reading them) when she
created an account on the platform.
– Human profiling . There may be serious short-term and long-term
privacy implications for Alice, depending on what information she
has posted on social media. People will read her posts and profile her
(make decisions about what type of person she is in a number of
different categories). Based on their judgements (“she seems like a bit
of a gossip”, “she’s easily bored”, “she’s pretty immature”, “she’s a real
complainer!”), she may not get hired into a specific job, or may get
passed over for a promotion. Depending on what she has posted, she
may not win a nomination for public office many years later.
Outcomes like this will occur even if these judgements are
completely wrong or if Alice has changed significantly in the
intervening years, but of course Alice may never learn what these
incorrect judgements were and how they negatively impacted the
opportunities available to her.
The threats to privacy are myriad. The above has outlined a few
examples, but there are countless others. A recent whitepaper by Jules
Polonetsky and Elizabeth Renieris gives the authors’ forecast of ten
privacy risks to watch in the next decade (2020). Included in their list
are the following:
– Risks to personal physical or behavioural information due to
increased use of biometric-based user interface (UI) systems;
– Risks to (unprotected) patient-generated data as it is used to assess
the safety and effectiveness of drugs or medical devices;
– Risks due to “intimate” IoT devices (such as smart contact lenses,
Bluetooth-enabled “smart pills”, and Wi-Fi-enabled pacemakers), as
well as further development of brain-machine interfaces (BMI) that
allow people to communicate and control devices with their
thoughts alone;
– Risks due to increasingly sophisticated artificial intelligence (AI)-
based systems (including augmented-reality/virtual-reality systems
and collaborative robots), along with their further incorporation into
personal spaces such as homes and private offices;
– Risks due to ever-greater precision and accuracy of location
information from a wide range of mobile and wearable devices,
which can reveal sensitive information about religious beliefs, health
issues, mobility patterns, and behaviour;
– Risks due to the emergence of smart communities (smart cities),
particularly with respect to the plethora of sensors that will capture
data in public spaces; and
– Risks due to the possibility of quantum computers that can
potentially break existing cryptographic protections and enable
computing that is orders of magnitude faster (providing more
advanced computation and better predictive analysis on personal
data).
In all these cases there is collection, storage, processing and use of
personal data on an ever-expanding scale, leading to a phenomenal rise
in the number of privacy threats in every aspect of society.
Your cell phone provider tracks your location and knows who’s
with you. Your online and in-store purchasing patterns are
recorded, and reveal if you’re unemployed, sick, or pregnant.
Your e-mails and texts expose your intimate and casual friends.
Google knows what you’re thinking because it saves your private
searches. Facebook can determine your sexual orientation
without you ever mentioning it.
The powers that surveil us do more than simply store this
information. Corporations use surveillance to manipulate not
only the news articles and advertisements we each see, but also
the prices we’re offered. Governments use surveillance to
discriminate, censor, chill free speech, and put people in danger
worldwide. And both sides share this information with each
other or, even worse, lose it to cybercriminals in huge data
breaches.
Much of your personal information is clearly already in the hands of
others, and trying to protect it now is the uncertain province of laws,
regulations, and policies. But these other entities are never satisfied;
they never feel that they have enough information. If you want to
prevent them from acquiring more of your information tomorrow (and
thereby regain a sense of empowerment and confidence that you are
not completely helpless), you will need to engage in a battle for
supremacy over your personal data. Every battle requires weapons. This
is where PETs enter the picture.
“I’m looking for some tools. I have a bit of work to do around the
house.”
“Very good, sir. Right this way.” The clerk takes James to Aisles B and
E. “Here you go.”
“Thank you very much – these both look perfect!” James pays for his
purchases and heads home feeling quite optimistic. Later that day,
James finds that the hand saw does a very poor job at trimming his
hedges, and the 3″ paintbrush does a painfully slow and uneven job
of painting his dining room wall. Disappointed and frustrated, James
considers abandoning home improvement altogether.
The above story about James and the hardware store is applicable to
many situations, and particularly to privacy enhancing technologies.
PETs offer many different tools to users, but unless Alice knows what
task she wants to accomplish, she could very easily choose the wrong
tools and not improve the aspect of privacy that she cares about
(regardless of how technically good her chosen tools are). Many people
– both experts and non-experts – might tell Alice to consider onion
routing or ε-Differential Privacy because these are well-known and
well-respected PETs, but onion routing won't help her if she wants to
protect her internal database from malicious insider attacks, and ε-
Differential Privacy won't help her if she wants to write and enforce a
privacy policy for her organization. She needs to be able to pick the
right PET for her intended task.
1.4 Summary
We began this chapter by motivating why PETs exist and why they are
important. This was followed by a brief discussion of definitions for
privacy, ending with the definition that we will assume throughout the
book. We presented some threats to privacy and the risks that can arise
(including, among other things, physical danger, identity theft, and
human profiling).
We then suggested that individuals are now in a battle for
supremacy over their personal data: other entities want your data so
much that you need to fight to keep it to yourself. Entities want your
data so that they can control you in such a way as to derive some
benefit for themselves. They may use your personal data to control the
goods and services available to you, to control the choices that you
make, and ultimately to control what you can say, what you can do, and
where you can go.
PETs are an important collection of weapons for our battle. They
can be highly effective in our fight to keep our private data out of the
hands of others. However, they will only be useful if we have some
understanding of what they do and what aspect of privacy each one is
designed to protect.
This book is an introduction to the world of PETs. Its objective is to
present the topic through the lens of the privacy goals that Alice may
wish to accomplish, and to describe, as examples, a number of PETs that
illustrate these privacy goals and how they can be achieved. This
approach will hopefully allow Alice (and you, the reader!) to better
understand what needs to be protected and the options available for
protecting it.
The following chapter discusses the foundation of our approach, a
relatively simple construct we refer to as the privacy tree.
2. Choose two major social media platforms. Read the privacy policy
of each platform very carefully. What does each policy say about
data ownership, data retention, data use, and sharing of data with
third parties? Are there privacy concerns that you feel someone
should think about before clicking “OK” to the policy?
References
S. Baker, The Numerati (Houghton Mifflin Company, 2008)
J. Burkell, P.M. Regan, Voter preferences, voter manipulation, voter analytics: Policy
options for less surveillance and more autonomy. Internet Policy Rev. 8(4), 22 (2019,
Dec 31)
Federal: Federal Trade Commission, In FTC study, five percent of consumers had
errors on their credit reports that could result in less favorable terms for loans,
Federal Trade Commission press release (2013, Feb 11)
D. Ingram, Factbox: Who is Cambridge Analytica and what did it do?, Reuters, (2018,
Mar 19)
F. McDonough, Careless whispers: How the German public used and abused the
Gestapo, The Irish Times (2015, Sep 28)
M. Mohan, Singapore Police Force can obtain TraceTogether data for criminal
investigations: Desmond Tan, Channel News Asia (2021, Jan 4)
B. Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your
World (W. W. Norton and Company, 2015)
O. Solon, Insecure wheels: Police turn to car data to destroy suspects’ alibis, NBC
News (2020, Dec 28)
A.F. Westin, Privacy and Freedom (Ig Publishing, 2015). (This is a new printing of the
original 1967 edition of this book, which is now out of print)
S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power (Public Affairs, 2019)
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer
Nature Switzerland AG 2021
C. Adams, Introduction to Privacy Enhancing Technologies
https://doi.org/10.1007/978-3-030-81043-6_2
Abstract
This chapter highlights the fact that many different PETs have been
proposed and used over the past four decades. It suggests that a
classification may be a helpful way to compare and contrast these PETs,
leading to a greater understanding of which ones are most appropriate
for particular privacy goals. A literature review is provided and one
selected classification is described in some detail.
The items at the leaves of a single branch at the bottom of the tree are
the most similar in the classification; items at the leaves of different
branches differ in one or more explicitly defined ways.
A classification for PETs would allow us to compare the various
proposals that exist. It would also identify any gaps in this field: a
branch with no leaves would highlight the fact that there are no
technologies in a particular area, which would allow researchers to
either design new proposals with those characteristics, or explore
whether that category in the classification is actually useful (or
achievable) in real-world environments.
In this book, I refer to such a classification as the “privacy tree”.
Existing PETs proposals and tools are placed at the leaves of this tree,
allowing us to readily see which ones are suitable for which specific
purposes. All the technologies are intended to protect privacy in some
sense, but exactly how they achieve this, and exactly what aspect(s) of
privacy they protect, becomes much clearer when we see them in the
“bigger-picture” context of the privacy tree.
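The structure described above can be sketched in a few lines of code. The following is a minimal illustrative sketch only: the category names and PET placements are hypothetical placeholders, not the actual privacy tree developed in this book. It shows the two uses mentioned above: placing PETs at the leaves, and detecting a "branch with no leaves" (a gap in the field).

```python
# Illustrative sketch of a "privacy tree" classification.
# The category names and PET placements below are hypothetical,
# chosen only to demonstrate the idea; the real tree is developed
# in the chapters that follow.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)  # sub-categories
    pets: list = field(default_factory=list)      # PETs placed at this leaf

def gaps(node, path=()):
    """Yield paths to leaf categories with no PETs placed at them."""
    path = path + (node.name,)
    if not node.children:
        if not node.pets:
            yield " > ".join(path)
    else:
        for child in node.children:
            yield from gaps(child, path)

tree = Node("Privacy", children=[
    Node("Hide identity", pets=["onion routing"]),
    Node("Hide data", children=[
        Node("At rest", pets=["encryption"]),
        Node("In aggregate queries", pets=["ε-differential privacy"]),
    ]),
    Node("Express policy"),  # no PET placed yet: a gap in the field
])

for g in gaps(tree):
    print("gap:", g)  # prints: gap: Privacy > Express policy
```

A leaf with an empty `pets` list corresponds to the empty branch discussed above, flagging a category where researchers could either design new proposals or question whether the category is achievable in practice.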