
Privacy As Trust: Sharing Personal Information in a Networked World

Ari Ezra Waldman

Submitted in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy
in the Graduate School of Arts and Sciences

COLUMBIA UNIVERSITY

2015
©2015
Ari Ezra Waldman
All rights reserved
ABSTRACT

Privacy As Trust: Sharing Personal Information in a Networked World

Ari Ezra Waldman

Global data networks pose potential dangers to personal privacy by making much of our information available for others to see. Our credit card numbers, names and addresses, prescription drug histories, intimate photographs, and even our movements along city streets are subject to relatively easy surveillance through network technologies. This thesis addresses a threat to privacy occasioned by modern life in a networked world: since limited disclosure of certain personal information is often necessary to participate in modern social, commercial, and professional life, under what circumstances, if any, can we retain privacy rights in information previously disclosed? The answer is explored through a sociological lens; the thesis concludes that, under certain circumstances, disclosures in contexts of trust are private.

Traditional conceptualizations of what it means for something to be private and legally protected as such are inadequate to respond to the challenges stemming from technological advancement. Developed over time and influenced by inherent political and philosophical biases, conventional theories of privacy actually endanger our rights in a world where privacy invasions are more frequent, less avoidable, and more damaging. Reorienting legal analysis of invasions of privacy around principles of trust would protect personal privacy in the modern world. Ultimately, this thesis argues that disclosures made in contexts of trust that give rise to obligations of confidence and discretion are not truly made public and, therefore, should retain legal protection as private. And trust, the evidence presented shows, is not limited to close relationships among intimates or legally defined relationships; trust extends to relationships and social connections based on several social factors, including strong overlapping networks, identity sharing, and experience.

The locus of theoretical inquiry is primarily the effects of the internet on privacy and how to solve the problem of limited disclosures. In this respect, the main contribution of this thesis is the articulation of social trust as a basis for drawing the line between what is public and what is private. The locus of empirical inquiry is Facebook, an online social network platform used not as a perfect proxy for all social interactions and disclosures, but as a case study to highlight the problems of modern social interaction, its effects on personal privacy, and the role of trust in at least some decisions to share personal information. Here, the thesis’s main contribution is the study of how trust in others, including strangers, influences sharing on an online social network and the identification of social indicia of trust that inspire sharing. The recommendations are legal, spanning tort, constitutional, and intellectual property law, and provide both scholars and judges with theoretical and practical tools for strengthening privacy during technology’s remarkable journey forward.
TABLE OF CONTENTS

LIST OF TABLES AND FIGURES .......................................................................................... iv

ACKNOWLEDGEMENTS ........................................................................................................... v

DEDICATION ................................................................................................................................ vii

INTRODUCTION: The Roadmap............................................................................................. 1

CHAPTER ONE: The Social History ...................................................................................... 12

Section 1.1: The Social Construction Model .................................................................... 14

Section 1.2: The Social History of Tort Privacy Law ...................................................... 19

Section 1.3: The Social History of Constitutional Privacy Law ..................................... 26

Section 1.4: The Social History of Statutory Privacy Law .............................................. 31

CHAPTER TWO: The Scholarship........................................................................................... 35

Section 2.1: Privacy as Freedom From ............................................................................... 36

Section 2.1.1: Separation, Sequestration, and Exclusion ................................... 37

Section 2.1.2: Intimacy, Secrecy, and Deviance.................................................. 43

Section 2.2: Privacy as Freedom For .................................................................................. 51

Section 2.2.1: Individuality, Independence, and Personhood .......................... 51

Section 2.2.2: Autonomy, Choice, and Control .................................................. 55

Section 2.3: Moving Away From Rights ........................................................................... 62

CHAPTER THREE: The Theory ............................................................................................. 65

Section 3.1: Social Theories of Privacy ............................................................................. 66

Section 3.2: Breaches of Privacy as Breaches of Trust ................................................... 73

Section 3.3: The Sociology of Trust................................................................................... 80


Section 3.3.1: What is Trust? ................................................................................. 81

Section 3.3.2: How Does Trust Develop? ........................................................... 84

Section 3.4: Benefits of Privacy-As-Trust ......................................................................... 89

CHAPTER FOUR: The Data ..................................................................................................... 95

Section 4.1: Questions and Hypotheses ............................................................................ 96

Section 4.2: What We Know............................................................................................... 97

Section 4.3: Research Design ............................................................................................ 100

Section 4.4: Describing the Data ...................................................................................... 103

Section 4.5: Results and Discussion ................................................................................. 106

Section 4.6: Limitations ..................................................................................................... 119

CHAPTER FIVE: The Effects: The Tort of Breach of Confidentiality ....................... 123

Section 5.1: The Tort of Breach of Confidentiality ....................................................... 125

Section 5.2: Further Disclosure of Previously Revealed Information ........................ 131

CHAPTER SIX: The Effects: The Fourth Amendment and the Third-Party Doctrine ........ 140

Section 6.1: The Fourth Amendment.............................................................................. 143

Section 6.2: Interpretive Flexibility in Fourth Amendment Jurisprudence................ 145

Section 6.2.1: Privacy as Property....................................................................... 147

Section 6.2.2: Privacy as Secrecy ......................................................................... 152

Section 6.3: Privacy As Trust and the Fourth Amendment ......................................... 156

Section 6.3.1: Applying Privacy-as-Trust to Sensory Enhancing Investigative Technologies .................................................................................. 157

Section 6.3.2: Applying Privacy-as-Trust to Internet-Based Collection and Aggregation of Personal Data ..................................................................... 165

CHAPTER SEVEN: The Effects: Public Versus Private in Intellectual Property .... 169

Section 7.1: The “Public Use” Bar and Denial of Social Relationships ..................... 173

Section 7.2: Trade Secret Law’s Respect for Social Relationships .............................. 184

Section 7.3: Respecting Relationships in Patent Law .................................................... 190

CHAPTER EIGHT: Conclusion and Next Steps ............................................................... 195

Section 8.1: What We Have Learned ............................................................................... 195

Section 8.2: Response to Objections ............................................................................... 198

Section 8.3: Steps for Future Research............................................................................ 202

TABLES AND FIGURES .......................................................................................................... 205

REFERENCES ............................................................................................................................. 214

APPENDIX ................................................................................................................................... 240

LIST OF TABLES AND FIGURES
Table 4.4.1: Comparison of Sample to Facebook Population, Generally .................................. 104
Figure 4.5.1: Relationship Between Trust and Sharing, Generally ............................................ 108
Figure 4.5.2: Relationship Between Trust and Sharing Intimate Information ......................... 108
Table 4.5.3: Demographic Correlations with Sharing on Facebook ......................................... 109
Table 4.5.4: Multiple Regression: Total Sharing on Facebook .................................................. 111
Table 4.5.5: Multiple Regression: Total Intimate Sharing on Facebook .................................. 112
Table 4.5.6: Predicting Importance of Sharing Same Sexual Orientation
for Willingness to Accept Friend Requests from Strangers ...................................................... 118
Table 7.1.1: The Relationship Between Inventor Control and “Public Use” .......................... 174
Table 7.1.2: The Impact of Confidentiality Agreements on Findings of “Public Use” ......... 174

ACKNOWLEDGEMENTS
This project could not have succeeded without the advice, assistance, and mentorship of so many colleagues, whose intellect and standing in their fields are inspirational. Their influences dominate this thesis and I am in their debt.

I would like to thank the faculty in the Department of Sociology at Columbia University for allowing me to pursue this project. I am deeply grateful to my adviser, Gil Eyal, and the other members of my dissertation committee: Diane Vaughan, Jeffrey Goldfarb, Debbie Becher, and Greg Eirich. Other scholars at Columbia University, particularly Peter Bearman, have also helped shape me as a sociologist, researcher, and scholar. I am also grateful to the chair of the Sociology Department and its director of graduate studies for always having my interests as an academic at heart. Thanks are also due to Andrea Solomon, Senior Associate Dean for Academic Administration. Although I have tried to meet their standards of excellence, insight, and thoughtfulness, I harbor reasonable doubt of my success. Nevertheless, these scholars have instilled within me an unending drive for improvement in the hopes of exceeding their expectations.

Special thanks are due the administration, faculty, and staff of New York Law School. They have supported my work, encouraged me during the dissertation writing process, listened to my work-in-progress talks, and given outstanding and essential feedback on my work. I would like to offer particular thanks to the following: Dean and President Anthony Crowell, Associate Dean for Academic Affairs Deborah Archer, and Professors Art Leonard, Jake Sherkow, Ed Purcell, Robert Blecker, Richard Chused, Tamara Belinfanti, Richard Sherwin, Steve Ellman, Alan Appel, Dan Warshawsky, Ruti Teitel, Howard Meyers, Frank Munger, Houman Shadab, and Nadine Strossen. Thanks also go out to Joanne Ingham for her advice and expertise. Essential support was provided by Jeffrey Saavedra, one of my most remarkable students. I would like to thank all of my New York Law School students: teaching some of the material covered in this thesis and discussing it with my students in Information Privacy Law, Internet Law, Intellectual Property, and even Torts has helped crystallize my understanding of some of these complex concepts. My students have helped make this project better.

The final draft is the product of countless works-in-progress talks and discussions. For their comments, critiques, and expressions of support, I would like to thank Dan Hunter, Eric Goldman, Derek Bambauer, Joshua A. T. Fairfield, and all those who attended the Internet Scholars Works-in-Progress conferences in 2014 in New York (sponsored by New York Law School) and in 2015 in Santa Clara, CA (sponsored by Santa Clara Law School). Special thanks to Mark Lemley, Jessica Silbey, and Gregory Mandel, my fellow panelists at the 2015 Intellectual Property Works-in-Progress Conference in Washington.

I would not have succeeded without the advice and support of my mentors: Danielle Keats Citron, Frank Pasquale, Richard Sherwin, Michael Sandel, Tony Varona, and Daniel Solove. My work is indebted to theirs; I stand on their shoulders. All errors are, of course, my own.

And thanks to my family—my partner, Adam Oestreicher; my parents, Debby and Harvey Waldman; and my sister Genna and her family, Ilan, Benjamin, and Ethan Klein—for their constant support and encouragement and for understanding when I was not available to babysit, visit home, cook dinner, or sleep.

DEDICATION

For Adam, who makes it all worthwhile.

INTRODUCTION:
The Roadmap
The link between trust and sharing is intuitively understood, yet inadequately studied. On some level, the link seems obvious. We trust our parents, closest friends, and spouses to keep our confidences; that is why we share our confidential, even stigmatizing information with them rather than with, say, a supervisor at work or a random stranger on the subway. Apple recognizes that a link exists: since updating to iOS 8.0 or later, all iPhone users are asked to affirmatively “trust” a computer to which they connect their phone via USB before data is exchanged between the devices. Even Uber, a for-hire vehicle (FHV) company that simultaneously challenges government regulation of the FHV market (“Taxi and Limousine Commission,” 2014) while invading the privacy of its users (Smith, 2014), recognizes that its business “depends on the trust of the millions of riders and drivers who use Uber” (Smith, 2014; Romm, 2014). Yet much of the trust-sharing discussion remains at this general level.

Last year, Facebook tried to change that, at least with respect to its users. After a short survey in which a random selection of Facebook users were asked to rate how “happy” they were with their Facebook experiences, users were also asked to respond on a Likert scale to the question: “How trustworthy is Facebook overall?” A spokesperson justified this question as another example of Facebook “constantly working to improve [its] service, and getting regular feedback from people who use [the platform] is an invaluable part of the process” (Fung, 2013). It was a remarkably banal explanation for an unprecedented question, especially since Facebook has declined repeatedly to release the survey results.

This thesis begins where Facebook and our intuition left off: to determine whether trust and online sharing are linked; if they are, to assess the nature and exploitability of that link; and to analyze its consequences for a single, yet foundationally salient question of privacy law: Do we retain privacy rights in information we have previously disclosed?

Most of us think of the private world as a place distinct or separate from other people: that is, private spheres presume the existence of public spheres, but only as things from which to detach. The right to privacy, in this way, is a right to keep others out. Samuel Warren and Louis Brandeis (1890) referred to that as a “right against the world” (p. 195). I disagree. Privacy is about social relationships. The right to privacy is about protecting relationships that help create private contexts. What follows is a reorientation of privacy scholarship around sociological principles of interpersonal trust. I argue that information privacy is not exclusively bound up with concepts of choice, autonomy, or seclusion; rather, private contexts are contexts of trust. In short, we share when we trust; we retain privacy rights and interests when we disclose information in contexts of trust. And trust can be identified by looking for the relevant cues in the entirety of the social context of a given disclosure.

Privacy scholarship is no stranger to social theory. In one of his major works, Erving Goffman (1963a; 1963b) lamented the “process of identification,” or how easy it is to amass personal information about any given individual and make public his social identity. He saw individuals as nodes at the center of several social networks that knew different things about those individuals, thus recognizing that some personal information can be withheld, or kept private, from the general public at the individual’s discretion (1963a; 1963b).1 And Goffman is not alone. The sociologist Georg Simmel (1906) began his seminal article, The Sociology of Secrets and Secret Societies, by stating that “[a]ll relationships of people to each other rest … upon the precondition that they know something about each other,” but recognized that we rarely, if ever, know everything about another person (p. 441-442). Our perceptions of others, based on what we know, what we think we know, and both true and misleading facets of personality, are true for us even if they are manipulated by a delicate balance between secrets and disclosures: “Our fellow man,” Simmel wrote, “either may voluntarily reveal to us the truth about himself or by dissimulation he may deceive us as to the truth” (p. 444-445). He may, in other words, choose to keep certain things private and choose to make certain things public.

1 Goffman (1963b) refers to privacy 27 times, including pages 4, 9, 10, 53, 66, 69, 86, 87, 11, 117, 128, 135, 155, 160, 165, 167, 173, n.7 (Ch. 11), 200, 209. This includes the word “private” and iterations thereof, including “semiprivate” and “privacy.”

Public opinion polls suggest that when most people think of privacy and private things, they think of protection, being hidden, or separation (Fox, 2000).2 The popular view is that private things are walled off from others or limited to the very few. Some consider certain things and places, like a diary or a bathroom, private because of the very fact that they are not open for public consumption and separated from the public’s access. Privacy has come to be defined by walls or property lines (Kerr, 2004) or the “loss of shared experience” (Laufer & Wolfe, 1977).

2 According to Fox, Americans show “great concern” about their privacy, including 84% of respondents stating that they worry that “businesses and people [they] don’t know [are] getting personal information about” them and their families.

The traditional view among privacy scholars is similar, focusing less on spaces than on what it means to define a place or a thing as private. For many, privacy is about choice, autonomy, and individual freedom. It encompasses the individual’s right to determine what he will keep hidden and what, how, and when he will disclose information to the public. Privacy is his respite from the prying, conformist eyes of the rest of the world and his expectation that the things about himself that he wants to keep private will remain so. I will call this, generally, the rights conception of privacy to evoke the centrality of the individual, his inviolability, and the Lockean and Kantian origins of this idea.3

Under this umbrella are two seemingly distinct strands. The first, which I will call negative, sees the private sphere as a place of freedom from something. It includes notions of privacy based on seclusion, separation, and private spaces, as well as conceptions based on the sanctity of private things, like discrediting secrets or intimate information. Common to these ways of thinking about privacy is an element of separation, suggesting that they provide freedom from the public eye. The second liberal conception of privacy is positive. This view retains the assumption of separation, but uses it for a different purpose—namely, for the opportunity to grow, develop, and realize our full potential as free persons. It conceives of privacy as affirmatively for something, as necessary for full realization of the liberal, autonomous self.

But distinguishing between the public and private assumes, without evidence, the normative implications of the distinction: that privacy is always going to be something different, separate, or apart from the public.4 Nor is a public-private distinction either a theory of privacy or particularly helpful in applying that theory to answer questions of law and policy. A theory of privacy must, to use Andrew Abbott’s (1995) topology, explain why certain things fall into the private sphere, why other things do not, and why society is willing to protect the former and not the latter. It must also be prescriptive and help answer questions of privacy law, policy, and justice (p. 862).

3 Julie Cohen (2012) has also connected conventional privacy theories to liberal political philosophy.

4 We need look no further than Catharine MacKinnon’s (1989) argument that privacy law has created an unregulated “other” sphere that endangers women for proof that the traditional public-private distinction carries normative burdens.

The rights conceptions of privacy, as Dan Solove (2002) argued in his important article, Conceptualizing Privacy, suffer from several flaws. They are at times too broad—potentially limitless and unworkable—or too narrow—failing to account for many things we would naturally consider private. I share some of these criticisms. I also argue that the rights conceptions of privacy are both too simple and based on an erroneous understanding of who we are and what we want as social actors. Economists like Alessandro Acquisti and Jens Grossklags (2005), legal scholars like Lior Strahilevitz (2005), and culture and media scholars like Helen Nissenbaum (2004; 2010), as well as surveys done by the Pew Research Center5 (2015), already show that sharing and online social life are far more nuanced. Together with my own fieldwork, their scholarship suggests that free choice is not the shibboleth of privacy. The sociological piece is missing. That missing piece is trust.

It makes sense that we should conceive of privacy in social terms. Privacy law, I will show, is socially constructed and participates in the social construction of new technologies that make surveillance and observation easier. As sociologists of technology remind us, innovations are not just engineering marvels, but rather real devices used by real people in ways that help change, define, and cement the role of those technologies in national culture (Pinch and Bijker, 2012; Pinch and Bijker, 1984; Kline and Pinch, 1996). Privacy law, as one social response to technologies that allow photographers to take precise pictures from hundreds of feet away or permit police to spy through a solid wall from across the street or let websites track every click of our internet behavior, both participates in that process and goes through a social construction process of its own: a technology destabilizes privacy norms and legal interpretations jockey for dominance in the new order until, perhaps, some measure of stability returns on a likely new foundation. This process of social construction is ongoing in some areas, beginning in others, and concluded in yet others. Interpersonal trust as a means of conceptualizing the basis for information privacy has occupied a heretofore underappreciated yet important role in the fight for the meaning of privacy.

5 Pew produces reports exploring the impact of the Internet on families, communities, work and home, daily life, education, health care, and civic and political life.

Trust, which I define as an expectation regarding the future actions and intentions of particular people or groups of people (we trust x to do a, b, and c), is a social fact of cooperative behavior. It is a sociological institution that is manifested by reciprocal exchanges and assumptions about one’s interactional partner and it develops based on a host of social factors—experience, overlapping networks, identity, and other cues—gathered from the entirety of the context of a given relationship. It is, to use a phrase from the sociologists J. David Lewis and Andrew Weigert (1985), a “functional necessity for society” because, among other things, it greases the wheels of effective sharing: you interact when you trust. In this thesis, I argue that in the information sharing context, spheres of privacy mirror spheres of social trust: when we trust others, we share; when we do not trust, we do not share. We know this because our sense of when our privacy is invaded is similar to the sense of our trust being breached. I present empirical research using a case study of sharing on Facebook to begin to lend credibility to this hypothesis. When sharing occurs in contexts of trust, the law of information privacy—whether through tort, constitutional, or statutory law—should protect that incident of sharing against subsequent misuse or wider disclosure.

The implications of privacy-as-trust are profound: it will reorient how legal scholars think and talk about privacy, coherently explain certain aspects of current law, and also suggest reforms that would solve several vexing problems of privacy law left unanswered by the conventional wisdom. In particular, privacy-as-trust would rejuvenate privacy protections in a networked world by invigorating the relatively moribund tort for breach of confidentiality in American law. Privacy-as-trust would justify the protection of data known to or in the hands of third parties in a manner strikingly similar to confidence jurisprudence in British law. Seen in this way, privacy-as-trust could solve several vexing privacy problems. First, if we conceived of privacy as protecting spheres of trust, we would be able to retain a privacy interest in information disclosed to one or several people against wider, public dissemination. This can help us address everything from the scourge of “revenge porn,” or nonconsensual pornography, to other forms of harassment that stem from wide dissemination of photos and images online. The evidence will also show that spheres of trust need not be artificially limited to our intimate friends and families; trust exists among strangers, as well. Under current law, however, we are often left with the absurdity that sharing personal information with one person is functionally equivalent to sharing it on YouTube.

Second, by reflecting the social construction of law and technology, privacy-as-trust may even be able to clarify the seemingly inscrutable jurisprudence surrounding the Fourth Amendment’s responses to new surveillance and tracking technologies. Scholars and judges are locked in a fight over the meaning of the Supreme Court’s famous “reasonable expectation of privacy” test in Katz v. United States (1967); privacy-as-trust is capable of interpreting the Court’s language and may coherently explain much of the post-Katz Fourth Amendment jurisprudence in the federal courts. And third, identifying contexts of trust as coterminous with contexts of privacy can help draw the line between “public” and “private” in other areas of law, particularly in intellectual property regimes that condition rights and obligations on whether innovations were made available to the “public.”


Privacy-as-trust has several advantages over its competitors. It is both pragmatic—it is based on how we actually perceive, understand, and manipulate privacy in everyday life—and clear—it simplifies a complex and amorphous concept and offers an explanation common to privacy interests in all contexts. The theory also reflects real behavior, rather than visceral whims of a public responding to biased survey questions. Trust is also tied to overwhelmingly positive forces in society and, therefore, is a norm that should be protected and fostered by law.

This project has a simple, but ambitious goal: to solve ongoing law and policy problems in information privacy law by focusing the law on protecting relationships of trust rather than longstanding theoretical and rights-based biases. It uses traditional forms of legal scholarship, including case analysis, alongside social theory and sociological frameworks of interpretation, including the Social Construction of Technology (SCOT). This thesis also focuses primarily on sharing on digital platforms and the role of privacy in a networked world for several reasons: first, privacy and technology go hand-in-hand when new technologies alter our abilities to surveil, spy on, and know about others; and, second, the vast majority of modern surveillance and information sharing occurs online, both with and without our knowledge (Pasquale, 2014a). Therefore, empirical evidence is presented based on a case study of internet-based sharing on Facebook.6 While this argument and its attendant quantitative work begin to pave a pathway for a career’s worth of research, this thesis offers a modest proposal in several ways. First, I restrict my analysis to privacy in the context of information sharing, which, although a significant nexus of privacy law problems, is not coextensive with the entire world of privacy issues. Second, the quantitative evidence presented herein marks only the first step toward a rigorous demonstration of the theory.

6 I concede that even a simple random sample of Facebook users may not permit me to make broader conclusions about sharing or privacy offline. Therefore, for now, I limit my discussion to the implications of privacy-as-trust for sharing information online. That limitation is of little moment. Through the use of cookies and web beacons, as well as voluntary and required submission of personal information on commercial, social networking, and other websites, online interaction creates terabytes of personal data. Understanding what encourages us to share that data and determining how the law should respond has its own value even if broader conclusions about all sharing cannot be made.

Chapter 1 contextualizes the broader discussion of privacy as a social concept by

offering a short history on the development of the law of privacy as distinct legal field. The

Social Construction of Technology (SCOT) is used as an interpretive tool. I argue that the

story of privacy law is a social narrative bound up with the emergence of new technologies

that have allowed third parties—individuals and the government—greater access to our

personal information. We will see that what made privacy law develop along liberal, rights-based

lines was a series of historical accidents and intellectual biases. Chapter 2 shows that despite its

social elements, privacy is not traditionally conceptualized sociologically; rather, the current

legal, sociological, and philosophical understanding of privacy is rights-based. Although

some conceptions of privacy embody a negative rights idea of freedom from intrusion while

others reflect the positive rights ideas of autonomy and choice, most assume a public-private

distinction and rest on the primacy of the detached individual over his social self. In

this way, these conceptions reflect the Lockean and Kantian origins of liberal political

theory. I critique these conceptions of privacy as too rigid and inadequate to protect modern

sharing.

In Chapter 3, I suggest that the missing piece in privacy scholarship is full

appreciation for privacy’s social dimension. I begin by showing how even rights-based

privacy scholars concede that there is a strong social aspect to privacy and then go on to

introduce and describe the core of my argument—namely, that privacy is really about trust,

sociologically understood. Here, I go from our intuition about privacy and sharing to social
theory, discuss and critique several attempts to conceive of privacy socially in the nascent

literature, and ultimately argue that spheres of privacy mirror spheres of trust. Chapter 4

describes the survey of Facebook users, which is intended merely as a proof of concept and

an invitation to further research. The chapter reports quantitative data that shows, among

other things, that Facebook users tend to share more personal information in contexts of

trust and that trust can rationally develop within all types of networks when certain social

factors—strong overlapping networks, identity sharing, and transferred experience—are

evident from the entirety of the social context of the disclosure. These data only begin to lend

credibility to the conclusion that when seen as a norm dedicated to protecting and fostering

relationships of trust, privacy would protect personal disclosures in these contexts and more

effectively protect personal privacy in a digital world.

I use Chapters 5 through 7 to begin to show how the concept of trust would affect

several areas of the law. Although it is beyond the scope of this project to cover all the

myriad ways privacy-as-trust would reform current law, I present three case studies as

paradigmatic examples. Chapter 5 puts in stark relief the chief legal weapon that emerges

from privacy-as-trust for protecting personal privacy from invasions by other private

parties—namely, the tort for breach of confidentiality. Through this tool, privacy-as-trust would

protect personal privacy interests even in information known to or in the hands of third parties,

an increasingly pervasive fact of modern technological life. I show how a trust-based tort

for breach of confidentiality would operate similarly to breach of confidence

jurisprudence in Britain. I then apply the tort to protecting personal information from wide

public distribution even if it has been disclosed to one or a few trusted parties. Privacy-as-trust

offers judges a clear and just way forward. Chapter 6 looks at the implications of

privacy-as-trust and the social construction of technology for Fourth Amendment


jurisprudence, concluding that modern Fourth Amendment jurisprudence has been

characterized by a fight for dominance among competing theories—trust among them. To

date, the social aspects of the Fourth Amendment guarantee against unreasonable searches

and seizures have been underappreciated. And Chapter 7 uses privacy-as-trust to draw the

line between the “public” and the “private” in intellectual property law and argues that, as a

socially constructed concept, public and private cannot be based on mere numbers. Rather,

in determining when a proposed invention has been in public use too long to merit a patent,

courts should look for cues of trust gathered from the entirety of the social context of the

disclosure. Doing so would hew closely to our intuition and sense of justice and

simultaneously fulfill the promise and goals of patent law. I conclude with recommendations

for future research.

CHAPTER ONE:
The Social History
The history of privacy law7 is bound up with a narrative about advancing technology,

but much privacy scholarship today offers an incomplete retelling of both. For many, privacy

legal history is dominated by what I will call the “act-react paradigm”: a new technology lets

the government surveil us better or faster, so victims turn to the courts or legislatures to

craft a response that keeps that technology from damaging personal privacy interests. In this

way, the story goes, privacy in the United States developed as an individual “right against the

world” (Warren and Brandeis, 1890, p. 195) because it emerged in response to intrusive

technologies that were trying, with each successive innovation, to encroach on more of the

traditionally private sphere.

That version of history is too simplistic for technology, let alone privacy. By

conceptualizing new technologies as exogenous shocks or independent variables to privacy,

it suggests that pieces of technology are one-dimensional. But technology is as much a social

concept as an engineering one. Artifacts like cameras and telephones, not to mention

technological systems like networked computers and the internet, neither pop up out of

nowhere nor do they exist in a vacuum bracketed away from social life. Rather, they change

and impose limits on the way we interact with each other and are themselves the products of

social movements’ efforts to define technology’s place in the world. As Social Construction

of Technology (SCOT) scholars Trevor Pinch (2008), Wiebe Bijker (1995), and others have

shown, simplistic narratives like the act-react paradigm ignore the multifaceted process of

7Westin (1965) and Lane (2009) offer insightful perspectives on a more general history of the development of
privacy law in the United States. It is beyond the scope of this thesis to repeat their extensive scholarship.

social input, interpretation, and re-interpretation that helps technologies develop meaning

and uses in society.

If technology is socially constructed, the act-react paradigm must collapse;

technologies are no longer merely engineering innovations that happen unexplained. But the

paradigm made little sense from privacy’s perspective, as well. Privacy law, like any legal

regime, is not constructed of simple reactions to individual stimuli. Laws, judicial decisions,

and rules are part of an iterative process involving multiple stakeholders, contingent

historical accidents, different interpretations, various social forces, and an internal semi-autonomous

social culture (Lukes and Scull, 2013; Deflem, 2008; Chambliss, 1979; Bourdieu,

1987). I would like to argue that the SCOT interpretive model not only describes the

development and use of potentially privacy-defeating technologies; it also provides a richer

lens through which we can explain the development of privacy law over time.

That privacy law developed in the United States as a tool to keep others out, or as a

bulwark of the individual against public and government encroachment, appears to be the

product of little more than contingent historical facts and the biases of the major players

who helped construct the laws’ foundations. But it did not have to be this way. In this

Chapter, I will sketch out the SCOT model and then show that privacy laws, rules, statutes,

and judicial interpretations not only help construct technology’s place in the world, but also

emerged as a result of its own complex social process involving social inputs and

construction. This socio-historical narrative will show that privacy’s conventional wisdom,

which, as I discuss in Chapter 2, is biased toward conceptualizing the right as an individual’s

weapon against society, is not inherent to the notion of privacy itself, but merely a product

of contingent accidents and individual prejudices.

Section 1.1: The Social Construction Model

Though I argue that the social construction model can help frame the legal history of

privacy, SCOT is primarily a lens for understanding the role new technologies play in society.

Technology is the stage for social interaction (Pinch, 2010). In interacting with others, we

interact through—that is, we are mediated by—technology, thus making the study of

technology a distinctly social concern (Pinch, 2008). Take, for example, a rather mundane

artifact: a door. Bruno Latour (1992; Johnson, 1989) famously illustrated how automatic

door closers mediate social interaction. As a physical barrier between two spaces, a door

allows us to behave differently on either side and assigns by fiat the default order of

interaction by either remaining open, thus requiring a door closer, or staying closed, thus

requiring effort to keep the door open. Erving Goffman (1959) also expressed this point

when he used the dramaturgical conceit of the front and back stages of a theatre to show

that social interaction in public differs from interaction in private. Facebook is another example of a

technology that mediates our behavior with others. More than that, the “interaction order,”

the unwritten rules of how different groups behave toward each other, is embedded within

certain technologies (Pinch, 2010). Facebook is paradigmatic: it requires real names, offers a

“like” button without a “dislike” option, limits the options in pull-down menus for

“relationship status” and “gender” (Towle, 2011; Mendoza, 2014), and determines for you

what posts appear in your “news feed,” to name just a few ways platform architecture

determines our interactions. Indeed, these websites are making choices for users rather than

just mediating our interactions (Boyd, 2006). Understanding technology’s role in society and

its relationship to privacy is so bound up with social interaction that sociology is the most

natural lens for conducting this study.

The sociology of technology argues that the only way to conceive of technology is as

a social construct: users innovate, shape, and interact with technology on an ongoing basis

(Kline and Pinch, 1996); to ignore users’ role would be akin to ignoring a chef’s role in

cooking dinner. For example, Susan Douglas (1987) has shown that amateur radio operators

helped make the technology a medium for broadcasting rather than just one-to-one

communication. Ronald Kline and Trevor Pinch (1996) demonstrated how rural America

helped change the design and use of the car. And they are not alone (Fisher, 1992; Martin,

1991; Nye, 1990). SCOT is a method of analysis within the sociology of technology. Its

“multidirectional” look at particular technologies—bicycles, cars, cell phones, the internet—

starts by identifying relevant social groups who play a role in the development of an artifact’s

use in society (Pinch and Bijker, 2012; Pinch and Bijker, 1984; Kline and Pinch, 1996). They

are cobbled together as clusters of individuals who share a use for or perspective on the

technology.8 Together, they identify social problems with innovations in accordance with

their cluster’s interests (e.g., Facebook sells user data to third parties) (Moran, 2014) and may

propose solutions through self-help (e.g., account deletion or higher privacy settings)

(Matthews, 2014; Kosoff, 2014; Sengupta, 2013), a social movement (e.g., organizing to

oppose a privacy-related term of service) (Wortham, 2013), or rebellion (e.g., switching to

Ello)9 (Murray, 2014). Groups might be anything from engineers, consumers, advertisers,

and women, to suburban millennials, European technocrats, the media, elites, blue collar

workers, or dog lovers. And they can overlap traditionally defined demographics. To

8 Any one person may hold several perspectives of a piece of technology. Sharing different perspectives is built
into SCOT: individuals can be members of different social groups and, as such, they give clusters a richer
definition and include traditionally marginalized groups (Kline and Pinch, 1996).
9 Ello is a small, new social network billing itself as the anti-Facebook.

illustrate this even further, consider a sometimes controversial piece of technology: the gun.

There are many social groups involved in imbuing guns with meaning. For members of the

National Rifle Association (NRA), a vocal Second Amendment advocacy organization, or

the National Shooting Sports Foundation (NSSF), a euphemistically named trade association

for America’s firearms industry, guns are tools for hunting game or, perhaps, for the

protection of freedom against tyranny (LaPierre, 2013). For the families of victims of gun

violence, guns are weapons of murder (Richinick, 2014). Alongside other groups—women,

who tend to favor more gun restrictions than men; Democrats, who favor national gun

control legislation; and others (CNN/ORC, 2013)—these groups play a role in the push-and-pull

that ultimately turns an innovation into an artifact with meaning for our daily lives

(Pinch and Bijker, 2012).

That push-and-pull, known as “interpretive flexibility,” is the second phase of the

SCOT analysis in which the designated social groups identify problems with the new

technology and fight to have their perspectives occupy space in the artifact’s place in society

(Kline and Pinch, 1996). Those problems can be technical, moral, cultural, social, or legal

(Pinch and Bijker, 2012). For example, to many millennials, Apple’s iPhone is a mobile

computer, game platform, a media player, and only distantly a telephone (Kadri, 2012). They

might find the screen too small and respond to Apple surveys by suggesting a larger screen.

Young computer programmers might share this “meaning” of a smartphone but also see it

as a creative platform for writing and rewriting code. Given this perspective, they may

identify gaps or barriers in Apple’s openness to user innovation. Suburban moms might see

cell phones as offering a lifeline to their children in emergencies; teenagers see them as tools

of independence and romance (Peskin, 2013). Corporations may see the cell phone as a

revolutionary tool for targeted marketing; attorneys at the ACLU might have a different
perspective when they imagine that cellphone in the hands of police. Similarly, an anti-terror

task force could see a smartphone as a terrorist tool, especially if it can be used to detonate a

bomb remotely (Szabo, 2014). And a court could argue that carrying a cellphone is akin to

carrying the transcripts of every phone call or letter you have ever written (Riley v. California,

2014).

The meanings assigned to an invention may end up changing the technology. For

example, frequent use of cellphones for texting and “sexting”10 helped innovate additional

deletion features and the SnapChat app. A legal decision allowing law enforcement to gather

cell site data without a warrant could result in a new platform that gives users more control

over geolocation (In re U.S. for Historical Cell Site Data, 2013). Identifying different problems,

uses, buying habits, market power, opinions, blog posts, and armchair innovations helps

determine what the technology will look like in the future, how we talk about it today, and

the ways it gets insinuated into our lives.

Legal treatises, decisions, and rules are, of course, part of this process (Kline and

Pinch, 1996). They help define how new technologies will be seen and used in popular

culture by limiting (or expanding) their lawful uses. On a practical level, laws tell us the

permissible and impermissible uses of technology: you can use a gun to hunt pheasant, but

you generally cannot use it to kill other people. On a more macro level, various scholars have

shown that law not only coerces action, but also nudges and creates norms of conduct that

help maintain order and establish behavioral expectations in society (Lessig, 1995; Hellman,

2000). Therefore, during a technology’s period of interpretive flexibility, law is both practically

and intellectually influencing its ultimate place in society.

10 Sexting, the portmanteau of “sex” and “texting,” is the act of sending racy messages or photos by cellphone.
It is now in the Merriam-Webster dictionary (Lynch, 2012).

Establishing such a place alludes to the last step of SCOT: “closure,” or a period of

stability (Kline and Pinch, 1996; Pinch, 2008). An artifact may engulf all meanings or

developers may privilege some over others: the iPhone is a computer, a sexting machine,

occasionally a telephone, and a platform for developer ingenuity, but it directs all innovation

through the App Store, imposes several development restrictions, and mines troves of data.

But as with the iPhone, closure does not mean that one product emerges above all others.

Competitors with rival technologies can exist side by side (Mozilla’s Firefox and Google’s

Chrome, or escalators and elevators) and may serve different purposes in society.

Nevertheless, closure exists when a meaning obtains dominant status, or when several

meanings converge to dominate the product in the culture (Kline and Pinch, 1996).

This model—identification of social groups and analyzing their multidirectional

impacts on a technology—is not the only way to understand the role of technology in

society. But it does have several advantages: It solves the act-react paradigm’s simplicity

problem; nothing develops in a vacuum and SCOT looks at technology from different

angles. SCOT is also flexible and latitudinal: it includes different perspectives and ensures

that marginal populations will also be counted. It emphasizes that a technology’s place in

society is not predetermined. And it places the law within a larger social context, a worthy

goal of the legal realists (Horwitz, 1992). And although SCOT, as its name suggests, was

intended as a schema for explaining the emergence of technology in society, I argue that it

can also provide a model for how and why the right to privacy developed as a right against

the world. This narrative illustrates the line of historical accidents that helped create the

current state of privacy scholarship.

Section 1.2: The Social History of Tort Privacy Law

Most histories of privacy begin with, or identify as a turning point, the 1890

publication of Samuel Warren’s and future Supreme Court Justice Louis D. Brandeis’s (1890)

seminal article, The Right to Privacy, in the Harvard Law Review. In that piece, which was the

first scholarly work to conceive of privacy as a distinct legal field, the authors called for a

robust and muscular tort regime to protect personal privacy against an overzealous press. It

was at this time that newspapers had started using “instantaneous photograph[y]” in an era

of yellow journalism. The media, they said, were “invad[ing] the sacred precincts of private

and domestic life” (p. 195). But Warren and Brandeis were not objecting to the technological

innovation of the camera, a technology that had emerged decades earlier (Gernsheim, 1969).

Nor did they care much about it before it became a tool of the sensational press. Rather, The

Right to Privacy and its progeny were legal responses to how a piece of technology was being

used. Their vision of privacy was further socially constructed by judicial responses to

corporations that started using photographs of nonpublic individuals for advertisement and

commercial gain. In this way, the SCOT model not only helps describe how various

interests—elites, liberals, media, and corporate interests—influenced the social construction

of the camera and photographs, but also how they helped construct the right to privacy as a tool for

separating the individual from society.11

Warren’s and Brandeis’s article is traditionally understood as a manifesto for the right

to privacy as a defense or shield against an increasingly intrusive public. The authors argued

for a tort regime that protected an individual’s “right to be let alone” by others (Warren and

11 A Westlaw search for law review articles focused on camera and video surveillance as threats to privacy
revealed 35 articles with the various terms in the article title and more than 1,700 relevant articles discussing the
issue at least twice. Slobogin (2002) and Schwartz (2013) offer particularly insightful commentaries on the
invasiveness of video and camera surveillance, as well.

Brandeis, 1890, p. 195), helping to bias privacy discourse toward pitting the individual

against society. This is what they meant by a “right against the world”: zones of privacy are

shielded, separate, and sacred. Interpreting the historical narrative through the SCOT lens

explains how the social forces involved in the fight over a given intrusive technology—this

time, the camera and the photographs it produces—ultimately informed the cultural

meanings of that technology and contributed to a conception of privacy biased in favor of

individual rights.

Before 1890, there were only 27 federal cases that even mentioned the word

“privacy,” and they concerned libel (2), search and seizure (5), polygamy/sex (1), property

(6), fraud and contracts (4), evidence and testimony (4), bankruptcy (1), intellectual property

(1), drugs (1), and even boats (2).12 Understandably, there were more (271) cases out of the

various state courts before 1890: not only did most American law originate and get resolved

at the state level before 1900, but the common law tort claims associated with privacy had

always been the exclusive purview of the states.13 In almost every case, “privacy” referred to

the privilege one enjoyed to exclude someone from his home or his land or to some

amorphous concept of the inviolability of the person. However, there was no recognized

right to privacy. Nor did courts recognize explicit claims about the invasion of privacy: state

courts recognized and issued 42 opinions in cases that raised claims about trespass and

libel/slander before 1890, but dismissed all seven claims for an explicit “invasion of privacy.”

12 Westlaw search: “privacy & da(bef 1890)” in “allfeds”.

13Westlaw search: “privacy & da(bef 1890)” in “allstates”. There are, of course, more than 10,000 cases before
1890 that include the word “private.” At some point, that research might be helpful as a way of understanding
how the term was used before the Warren and Brandeis article, but it is beyond the scope of this thesis.

All that changed after 1890, when Warren and Brandeis published The Right to Privacy.

Their expansive “right to be let alone” was never fully defined, but they did make clear that

law needed to adapt to the ways new technologies had come to be used in society: “Recent

inventions,” like the camera, they wrote, required the law to take the “next step” to

“protect[] … the person” (p. 195). But they were specifically concerned about how

technology affected elite members of society like themselves. Warren met Brandeis during

their years at Harvard Law School. Brandeis, the son of wealthy Jewish immigrants who had

settled in Louisville, Kentucky, and Warren, the son of an even wealthier paper manufacturer

of Boston, started a law firm together after graduating first and second, respectively, in their

law school class (Mason, 1946; Pember, 1972). As elites, the first social group involved in the

battle over photographing private individuals for public consumption, they had unique

interests and biases. Warren came from an old Boston Brahmin family with a long history

and an even wider elite social circle that included the Welds, Cabots, Lowells, and Thayers

(Mason, 1946, p. 59-70). Not only was he able to secure a job in Oliver Wendell Holmes’s

law firm after graduation, but Warren’s social connections were so strong that he and

Brandeis succeeded as practicing lawyers on their own (p. 54). The two men also socialized

with intellectual, cultural, and business elites for business and pleasure (p. 62), which kept

their social orbit relatively homogeneous and segregated from the lower classes. In 1883,

Warren also married Mabel Bayard, the daughter of a senator from Delaware and another

member of the elite, and the couple hosted a series of parties at their Back Bay home that

were frequented by Warren’s social circle.

By his own reckoning, Brandeis was more academic and less practical than Warren,

so another social force involved in the development of privacy tort law must be the liberal

and progressive tradition, for which Brandeis would ultimately sit as its “acknowledged
philosophical father” (Baker, 1984). Liberalism at the turn of the twentieth century

emphasized the two pillars of muscular government action to solve social problems and

individual liberties. Brandeis stood astride these sometimes conflicting goals. In his dissent in

New State Ice v. Liebmann (1932), for example, Brandeis wrote a manifesto for activist New

Dealers: “There must be power in the states and the nation to remould, through

experimentation, our economic practices and institutions to meet changing social and

economic needs” (p. 311). And countless scholars have shown that he protected individual

civil liberties from overreaches by law enforcement (Currie, 1990; Guthrie, 1998; Walker,

2012; Collins and Skover, 2005; Feldman, 2008; Larson, 2011). These intellectual tendencies

may have also played a role in Brandeis’s decision to find the locus of the privacy right in the

individual.

Elites’ interest in privacy came into conflict with the media’s interest in disclosure and

corporate interests’ use of photographic images for commercial purposes. Both of these

interests viewed the privacy right differently from Warren and Brandeis. The American

media was experiencing exponential growth and change at the time Warren and Brandeis

were making a name for themselves. Between 1850 and 1900, the number of newspapers

exploded from 100 to 950, and readership grew from almost 800,000 to more than 8 million

(Scott, 1995). Although never a tame sector (Mott, 1950), it was becoming more sensational

and aggressive, especially given the freedom offered by Eastman Kodak’s new snap, or

instantaneous, camera (Mensel, 1991). With this tool, the media engaged in a pattern of

intrusive behavior to satiate the appetites of a growing and diverse readership increasingly

interested in and frustrated with the upper class. For example, the Saturday Evening Gazette,

which “specialized in ‘blue blood’ items,” became notorious for reporting on Warren’s

parties in lurid detail (Mason, 1946, p. 46). At the time, publishers felt that any right to
privacy conflicted with their democratic imperative to reveal the truth, whether in the form

of muckraking or detailing the excesses of the rich (Volokh, 2012; Schudson, 1981). Where

Warren and Brandeis were reacting to popular intrusions into elite culture, which may

explain why their conception of the right to privacy pitted the individual against everyone

else, the media saw privacy as a narrow concept whose importance paled in comparison to

their reporting obligations.

But it was corporations’ growing use of images of nonpublic individuals for

advertising and commercial purposes that allowed Warren’s and Brandeis’s elite vision of

privacy to piggyback its way into the courts. Two cases are illustrative. In 1902, New York’s

highest court was without recourse to help Abigail Roberson, a teenager, who sued the

Franklin Mills Flour company for “invasion of privacy” for using her likeness on thousands

of advertising flyers without her consent. Roberson claimed that the use of her image caused

her humiliation and injury, but the Court of Appeals could find no precedent for bringing a

privacy action in Anglo-American common law (Roberson v. Rochester Folding Box Co., 1902).

The decision inspired unprecedented criticism: the New York Times published 5 pieces on the

decision in the subsequent weeks, including several critical letters to the editor, and the

backlash was so sharp that one of the judges in the Roberson majority felt compelled to justify

his decision in the pages of the Columbia Law Review (O’Brien, 1902). One year later, the

New York legislature became the first to create a tort for invasion of privacy when it passed

Section 51 of the New York Civil Rights Act.

The 1905 case of Pavesich v. New England Life Insurance Company (1905) was similar to

Roberson, but in that case, the Georgia Supreme Court decided to act without statutory

instruction from the legislature. In Pavesich, an insurance company’s newspaper

advertisement included a photograph of the plaintiff used without consent. The plaintiff
sued, alleging an invasion of privacy, a claim, the Georgia Supreme Court said, “derived

from natural law” (p. 70). The court held that, subject to certain limitations, “the body of a

person cannot be put on exhibition at any time or at any place without his consent. … It

therefore follows … that a violation of the right of privacy is a direct invasion of a [long-standing]

legal right of the individual” (p. 70-71). In New York, Georgia, and later, in many

other states, the law was beginning to integrate an elite vision of privacy by way of a

response to corporations’ use of photographic images of nonpublic individuals for

commercial purposes. Because all of these developments arose out of intrusive acts that

publicized private individuals to the public at large—e.g., Samuel Warren and his elite

company in the pages of the Saturday Evening Gazette, or Abigail Roberson as the unwitting

spokesperson for a flour company—the right turned out to be an individual’s tool to keep

others out. By 1939, the First Restatement of Torts included a section on an individual right

to privacy: “A person who unreasonably and seriously interferes with another’s interest in

not having his affairs known to others … is liable to the other” (§ 867). By the next decade,

15 states recognized at least one privacy tort with similar language; within another 10 years,

almost every state would follow suit (Richards and Solove, 2007). Legislators were

responding to how the camera was being used generally and to corporate use of photographs

of private individuals for commercial gain.

Years later, William Prosser (1960) would survey the development of privacy tort law

and identify four “privacy torts” that victims of invasions of privacy had been using to

obtain justice: intrusion upon seclusion (for when a photographer pushes his camera in your

face or takes pictures through your window), public disclosure of private facts (for when he

publishes a photo of you undressing), false light (for when the published photograph depicts

you as depraved), and appropriation (for when he uses a photograph of you to advertise his
photography services). As Neil Richards and Dan Solove (2007) have shown, Prosser’s work

put American privacy law on a particular path that suited Prosser’s—and Warren’s and

Brandeis’s (1890)—governing theory that the right to privacy was a tool to keep others out.

Consider the torts themselves: three require particularized harm from the public

dissemination of information; all require the taking of personal, closely held information.

They redressed wrongs based on what was taken from the individual and how it was done

(Gilles, 1995; Winn, 2002). Prosser wanted it this way: he was skeptical of the privacy torts

that had been developing in American common law because he found them too amorphous

and capable of dangerous expansion that could impinge on other individual rights,

particularly free speech. As a result, his article narrowed the torts’ reach by emphasizing how

“extreme and outrageous” invasive conduct needed to be, and he excluded any mention of

the developing law of breach of confidence, a relationship-based tort that held liable those

who disseminated information disclosed to them (Richards and Solove, 2007, p. 151-152).

Prosser’s inclusion of the four privacy torts in his 1960 article and in the Second

Restatement of Torts,14 for which he served as the lead contributor, had the effect of

cementing these torts—and no others—as the framework for privacy law in the United

States (Richards and Solove, 2007, p. 148). This happened for two reasons. First, the social

group of legal academia played a role by elevating Prosser’s work to leading status. Prosser was

taking diverse and seemingly contradictory case law and harmonizing it in an ostensibly

neutral way in line with what G. Edward White (2003) called the “consensus thinking” of the

mid-twentieth century. Prosser’s genius, scholars argue, “was to acknowledge and identify

the various interests to be balanced, while relentlessly asserting … that the results of the

14The Restatements of the Law are sets of treatises on legal subjects that seek to inform judges and lawyers
about general principles of common law.

cases, on proper analysis, were … consistent examples of Prosser’s own general rules”

(Joyce, 1986). Working at a time when such harmonization was highly valued in the legal

academy, Prosser emerged as its paradigmatic and exemplary practitioner. As a result, the

entire legal world paid attention to his four privacy torts. On a more practical level, Prosser’s

version of the privacy law narrative succeeded because it had no rival. As the legal scholar

Sharon Sandeen (2006) has argued, privacy was unlike related areas of law—particularly

trade secrecy, which protected confidential business information—in that it escaped the

drive for comprehensive law reform: because privacy law chiefly concerned personal

information, the haphazard common law development of privacy rules never attracted the attention of business

interests and their attorneys, the principal driving forces behind uniform model codes from

the American Bar Association or the National Conference of Commissioners on Uniform

State Laws. As such, Prosser’s particular narrative emerged as the only narrative shaping

future developments in privacy law. Absent other social groups, whether they be business

interests or any other advocates that had different views about the meaning of privacy in

American democracy, Warren’s and Brandeis’s vision of privacy tort law as a tool for keeping

others out reached closure after a period of interpretive flexibility that was stacked in favor

of individual rights.

Section 1.3: The Social History of Constitutional Privacy Law

Where tort-based privacy law emerged to regulate relationships between private

parties in an era of technological change, constitutional privacy law developed to enforce

boundaries between citizens and government use of technology. The Constitution does not

include an enumerated “right to privacy,” but like Warren and Brandeis, who found a

personal privacy right against the world running throughout Anglo-American common law,

the Supreme Court has held that a right to privacy exists behind what Justice William O.
Douglas called, in Griswold v. Connecticut (1965), the “penumbras and emanations” of several

amendments in the Bill of Rights. For example, the First Amendment protects the right to

speak anonymously (McIntyre v. Ohio Election Commission, 1995); the Third Amendment

protects privacy by preventing soldiers from being housed in private homes (U.S. Const.

amend. III); the Fourth Amendment protects against “unreasonable searches and seizures”

and requires police to obtain warrants before conducting most searches (U.S. Const. amend.

IV); and the Fifth Amendment’s “privilege against self-incrimination” protects individual

privacy by restricting the government’s ability to force individuals to divulge certain

information about themselves (U.S. Const. amend. V). The resulting “zone of privacy” into

which the government cannot intrude came out from the shadows of these clauses when law

enforcement started using new technologies to surveil the population in unprecedented and

intrusive ways. This “right to privacy,” constructed from other rights meant

to protect the individual from government overreach, was a right of the individual to keep

others (in this case, the government) out.

Once again, the SCOT interpretive method may shed some light on why a

constitutional right to privacy developed the way it did. Naturally, law enforcement looms large

in the social construction of sensory enhancing technologies because such tools are primarily

used as investigative aids. Take, for example, the wiretap. As a tool for intercepting

transmissions, the wiretap is a natural, though certainly not exclusive, weapon of the police.15

15 Wiretapping has been around since shortly after the invention of the telegraph in 1837 and the telephone in
1876 and, since then, it has been used for multiple purposes. In their 1959 book, The Eavesdroppers, Samuel
Dash, Richard Schwartz, and Robert Knowlton tell the story of Civil War General Jeb Stuart, who traveled
everywhere with his own personal wiretapper aide so he could intercept Union messages. Wiretapping has
also been used for industrial espionage, as when the San Francisco Examiner tapped the phones of its competitor,
the San Francisco Call, to intercept communications between editors and reporters. Missouri Senator Edward V.
Long (1967), the chair of a Senate subcommittee that, in the 1960s, compiled evidence of electronic
wiretapping and invasions of privacy, concluded that phones are tapped in one third of contested divorces and
that manufacturers of eavesdropping equipment earn hundreds of millions of dollars annually by selling
equipment for voyeurism and industrial espionage (Kilpatrick, 1967).

But although wiretapping was so frequent and alarming that many state legislatures banned

the practice at the turn of the century,16 it took until the 1920s for a federal court to consider

the relationship between wiretapping and the Fourth Amendment. That’s because law

enforcement saw wiretapping as an essential tool for investigating and destroying the

organized crime syndicates that developed in the wake of Prohibition. The wiretap

developed this way and at this time for several reasons. First, Prohibition meant more federal

crimes (Simons, 2000; Friedman, 1993; Henderson, 1985), which meant more federal

investigations and more scrutiny of the conduct of federal agents. Second, as Whitfield

Diffie and Susan Landau (1998) argued, investigating something as secretive as organized

crime required wiretaps. The organizations were “tightly knit” and operated under a code of

silence (p. 167). What’s more, because the core of organized crime was supplying illegal

goods to ordinary citizens who wanted them, there was often no victim willing to report

evidence (p. 161). It is no wonder, then, that the first Supreme Court case on sensory

enhancing technologies revolved around FBI wiretaps of Roy Olmstead, a former policeman

in Seattle turned noted bootlegger.17

In Olmstead (1928), police tapped Roy Olmstead’s phone line by installing a device at

the top of a telephone pole on a public street outside his house. The Court held that because

“[t]here was no entry of the houses or offices of the defendants,” that is, no violation of

Olmstead’s property rights, there was no search under the Fourth Amendment (p. 464).

16 California was the first to ban wiretapping telegraph lines in 1862. New York and Illinois banned telephone
line tapping in 1895 and California followed suit in 1902. By the time the Supreme Court decided Olmstead v.
United States in 1928, twenty-six states had enacted bans on wiretapping (Berger v. New York, 1967, p. 45-46).

17 The Ken Burns documentary Prohibition (2011) offers a fascinating discussion of Roy Olmstead’s
bootlegging ring and his role in American history.
Olmstead reflected a constitutional privacy right against the world in an antiquated, yet

counterintuitively strong form: it quite literally required that the government keep out.

Brandeis, now a Supreme Court Justice, dissented. Bringing to the case the same social

history that influenced his conception of privacy as a right against the world in the tort

context, Brandeis argued that it was outdated and dangerous to see the Fourth Amendment’s

guarantee against unreasonable searches and seizures as nothing more than a guardian

against a physical trespass. Rather, “‘time works changes, brings into existence new

conditions and purposes,’” Brandeis argued (p. 473). Referring specifically to the ways in

which law enforcement had come to use new technologies to enhance its surveillance

capabilities, Brandeis noted that

[d]iscovery and invention have made it possible for the government … to obtain
disclosure in court of what is whispered in the closet. … [And] [t]he progress of
science in furnishing the government means of espionage is not likely to stop with
wiretapping. Ways may someday be developed by which the Government, without
removing papers from secret drawers, can reproduce them in court, and by which it
will be enabled to expose to a jury the most intimate occurrences of the home.
Advances in the psychic and related sciences may bring means of exploring
unexpressed beliefs, thoughts and emotions (pp. 473-474).

The Fourth Amendment, therefore, had to be sufficiently flexible to respond to an ever

changing technological landscape that offered law enforcement better, more efficient, and

more precise means of surveillance. Brandeis’s preferred conceptualization of the Fourth

Amendment right against the government was borrowed from his 1890 article:

The makers of our Constitution undertook to secure conditions favorable to the
pursuit of happiness. They recognized the significance of man’s spiritual nature, of
his feelings and of his intellect. They knew that only a part of the pain, pleasure and
satisfactions of life are to be found in material things. They sought to protect
Americans in their beliefs, their thoughts, their emotions and their sensations. They
conferred, as against the government, the right to be let alone—the most
comprehensive of rights and the right most valued by civilized men. To protect that
right, every unjustifiable intrusion by the government upon the privacy of the
individual, whatever the means employed, must be deemed a violation of the Fourth
Amendment (p. 478).
To Brandeis, then, the Fourth Amendment conferred an individual right to be generally free

of government intrusion. That right was flexible enough to adapt to new technologies, but it

would always attempt to restore the stability of personal privacy in a new world.

The Court would eventually vindicate Brandeis’s concern that new technologies

could eat away at the Fourth Amendment—but never fully adopt his right to be let alone

formulation—when it decided, in another wiretapping case called Katz v. United States (1967),

that physical trespass is not the touchstone of a Fourth Amendment search. Rather, the

clause “protects people, not places” (p. 351). Instead of a bright-line trespass rule that

would remain impotent against new technologies that allowed intrusion without physical

invasion, the Court adopted a more flexible approach that emerged most clearly in Justice

Harlan’s concurrence in Katz: the Fourth Amendment was triggered, thus requiring police to

obtain a warrant, when the target of a search had an expectation of privacy that society was

willing to recognize as reasonable (p. 361). In these and countless other cases,18

constitutional privacy law emerged to regulate the relationship between citizens and the

government when faced with new technologies that allowed the government to surveil

deeper into personal and private lives. Katz and its progeny had both the practical effect of

limiting police use of wiretaps and the expressive effect of establishing the norm that the

Fourth Amendment would be a robust bulwark against government intrusion into the

private sphere.

But the interpretive flexibility of the Fourth Amendment has not reached closure.

Nor should it. New technologies are allowing law enforcement to know more about us, and

18It would be impossible to capture every technology-based constitutional privacy law case. Many of them are
discussed in Chapter 6.

the increased prevalence and public awareness of technologies from the internet to Global

Positioning System (GPS) devices are necessarily changing the way we make decisions about

our privacy. Constitutional privacy law is changing, as I will discuss in Chapter 7. For now,

suffice it to say that the social history of constitutional privacy law has helped privilege the

liberal vision of keeping others out of a sacred, private sphere.

Section 1.4: The Social History of Statutory Privacy Law

Beginning in the 1960s, the problem of electronic wiretapping and bugging inspired

several television documentaries (Nelson, 2002) and a five-fold increase in newspaper articles

written about privacy from 1960 to 1970.19 It should come as no surprise, then, that around

the same time, concern over privacy expanded from state and federal judiciaries to the

United States Congress; the public was becoming more aware of the threats to privacy posed

by new technologies.20 These issues received so much attention that, as Priscilla Regan

(1995) observed, both Houses of Congress were moved to hold almost thirty days of

hearings between 1967 and 1973 on “the invasion of privacy by computers” (p. 82). This

concern emerged from the increasing use of computer technology to collect, aggregate, and

analyze information about individuals. In this way, statutory privacy law, like tort and

constitutional law, developed as a social construct alongside the uses of new technologies,

but always as a tool for the individual to keep others out.

As Dan Solove (2004) has noted, the rise of the administrative and social welfare

states created a thirst for data collection and analysis: Social Security, which was

19 Based on a ProQuest historical newspaper search comparing the number of articles containing the search
term “invasion /3 privacy” published in 1960 and in 1970.

20Scholarship on privacy exploded during this time period, helping to raise awareness and inspire action among
activists and policymakers. Alan Westin (1967), Arthur Miller (1971), and Vance Packard (1964) were among
the scholars of this era.

accompanied by taxes and forms, offered the government a convenient numerical

identification system. New Deal redistribution programs also required individuals to report

information in order to qualify (p. 14). Government bureaucratic interests, therefore, represented a

strong social group in the development of computer technologies that could pose dangers to

personal privacy. The military and law enforcement were also keen on making their interests

prevail. Advances in missile technology, upgrades in air defense, and Cold War fears of a

Soviet “first strike” highlighted the military’s need for a nuclear-proof command-and-control

system and a fast, nation-spanning radar network (Ryan, 2013, pp. 11, 23-24, 45). That is, the

military needed a network that would allow it to talk to its missiles in the event that a Soviet

first strike decimated the Pentagon and, to protect against enemy aircraft, a way to gather,

analyze, and translate real-time information from a multinodal array of radars (pp. 13-14, 46).

Cold War fears—real or imagined—of infiltration and spying contributed to widespread

misuse of wiretapping, which also became FBI Director J. Edgar Hoover’s favorite tool for

blackmail and “egregious” abuses of power (Solove, 2004, pp. 199-200).

These and other social and historical developments pushed Congress to act. When

the public’s concern turned to potentially invasive computer technology, Congress passed

several privacy-related statutes, including, but not limited to, the Fair Credit Reporting Act of

1970, protecting information in the hands of credit reporting agencies; the 1974 Privacy Act,

safeguarding certain information held by the federal government; and the Electronic

Communications Privacy Act of 1986 (ECPA), which updated the rules for getting warrants

or subpoenas to intercept or search stored electronic communications. The 1988 Video

Privacy Protection Act, which protects the privacy of videotape rental information, was

passed after a newspaper published a list of Judge Robert Bork’s movie rentals during his

Supreme Court confirmation hearings. Bork, an arch-conservative jurist and strict


32
constructionist, did not believe that a right to privacy existed in the Constitution, so a

reporter, Michael Dolan, who frequented the same Blockbuster video store as Judge Bork, asked

the assistant manager for Bork’s rental history, obtained the full list from the computer, and

printed it (Pearson, 2013). The privacy protections codified in these laws were individualistic:

like Prosser’s four “privacy torts,” which focused on the nature of the information and any

particularized harm caused by public dissemination, these laws ensured that an individual

would be able to keep his information from others if the information was sufficiently

personal and if disclosure could cause damage.

Privacy concerns have only grown since the 1980s, especially as computer-based

invasions of privacy have evolved into their cloud- or internet-based counterparts. This

happened for three reasons. First, the internet itself is an unprecedented information

gathering tool. Websites can amass user data through cookies, web beacons, and required

disclosures (Solove, 2004, pp. 167-168). Second, whereas government records, for example,

were always available,21 accessing them required considerable work and effort, perhaps even

travel, written permission, time, money, and a persevering will. Finding such information

online requires only an access code. Third, where record retention was once, at least in part,

dictated by storage space, the cloud eliminates that natural limitation and allows government

agencies and private companies to retain our information indefinitely.

Judge-made tort law, constitutional interpretation, and Congressional statutes have

developed alongside the increased use of internet and digital technologies to collect, store,

and aggregate information about individuals. For example, American Express cardholders

21 They are available through a Freedom of Information Act (FOIA) request. FOIA is a sunshine law that
allows individuals to request disclosure of government held information unless the government can articulate a
specific reason for secrecy.

tried to argue that the credit card company intruded upon customer seclusion when it sold

aggregated customer information to third-party marketers and retailers (Dwyer v. American

Express, 1995). Others looked to the Constitution’s due process right to privacy to guard

against the collection of data that, when pieced together, could reveal to employers sensitive

information about their employees (Doe v. SEPTA, 1995). Still others have tried to use

ECPA to challenge internet-based platforms’ use of cookies and web beacons to track the

online behaviors of those who visited certain websites (In re Pharmatrak, Inc. Privacy Litigation,

2002). Like the underlying statutes, some of these lawsuits were more successful than others

at protecting personal privacy. Suffice it to say, however, that the development of the law of

privacy has been bound up with the development and use in society of technologies that

make it easier to pry into our personal lives. That is, law developed around and helped define

technologies as things that destabilize our expectations of privacy. Given the back-and-forth

between groups of varying interests during privacy law’s ongoing period of interpretive

flexibility, it is unsurprising that the right to privacy would develop as an individual right

against an intrusive world. But, as I will argue, it does not have to be that way.

CHAPTER TWO:
The Scholarship
The conventional wisdom in privacy scholarship is that a definition of privacy is

elusive. As Dan Solove (2002) has argued, the widespread agreement about the need for

privacy exists in a world where the word “privacy” seems to mean different things to

different people (p. 1088-1090). I argue that the disagreement is only skin deep. Outside of

the ancient concept of privacy as, literally, privation (Arendt, 1958), there is actually

widespread agreement about the classical rights-based, individualistic assumptions

underpinning the ways we understand privacy. Consider the philosopher Howard B. White’s

(1951) list of what privacy means:

A ‘right to be let alone,’ as Warren and Brandeis called it, means more than to have
one’s papers secure from official scrutiny or one’s photographs reserved for one’s
friends. It means a right to choose a way of life in which sequestration is possible,
and it means that the choice is in some way acceptable to liberal society, a good
choice. It means the association of what may be distinct things: the private sphere as
against publicity, the private life as against the public life, and a private task as against
the public task. (p. 171-172).

All of these ideas—the right to be let alone, a right to secrecy, autonomy, and the separation

of the personal and the public—are rights-based: they reflect the Lockean and Kantian ideal

of the primacy of the individual over society. These conventional philosophical foundations

were expressed in Warren’s and Brandeis’s (1890) article and in William Prosser’s (1960)

interpretive scheme for privacy tort law. They have governed privacy law in the United

States ever since.

Despite the differences between Lockean and Kantian theory, they are united by the

respect they offer the individual and individual rights (Smith, 1990; Milton, 1999; Tully,

1993; Korsgaard, 2004; Sandel, 1998). And given the pervasiveness of both philosophies in

the American legal tradition (Ely, 2008, p. 28-29; Mossoff, 2002, p. 155; Sandel, 1996, 43-

119), it is no surprise that the conventional theories of privacy also reflect these ideals. Much

privacy theory is focused on individual freedom and not only sees the individual as the locus

of privacy rights, but also sees the protection of individual freedom as the ultimate goal of

privacy. This Chapter argues that this rights-based foundation underlies all of the

conventional theories of privacy. These theories can be divided into two categories. Some

theories of privacy concern negative rights, or freedom from something, whether it is

freedom from others, from conformity, or from publicity, for example. Other theories

concern positive rights, or the freedom for something, including full autonomy, the

formation of ideas, or the development of a rich conception of personhood. In all cases,

these theories reflect quite a bit of agreement. But, as I will argue, this general agreement

gets us no closer to a coherent, workable understanding of privacy that reflects behavior in

everyday life and can be used by judges and policymakers to answer information sharing

problems of modern privacy.

Section 2.1: Privacy as Freedom From

A central pillar of liberal theory is negative freedom, the freedom from intrusion,

encroachment, or violation from the state or other people (Dworkin, 1977). I argue that

several conventional ways of thinking about privacy reflect the notion that privacy offers

freedom from others. For example, many scholars think about privacy as offering a retreat,

respite, or separation from the world. They sometimes buttress those theories with spatial

analogies, suggesting that there is something special about private versus public spaces.

Though this idea has deeply penetrated the privacy literature, it has actually served to limit

privacy rights, it fails to adequately account for modern technological developments affecting

privacy, and it reflects a cursory understanding of the literature. In place of a simple theory of

separation, some scholars shift from a focus on the act of sequestration to the underlying
thing being sequestered, understanding privacy as something inherent in the concepts of

secrets and intimacy. However, this subjective idea is too often bound up with a normative

moral judgment that secrets are discrediting or, to use the sociologist’s term, deviant, that it

fails to capture much of the privacy space. In all cases, though, these conceptions of privacy

reflect rights theory’s primacy of the individual because they involve the individual’s power

to separate from the world and decide for himself what is and what is not private.

Section 2.1.1: Separation, Sequestration, and Exclusion

If privacy is conceived as freedom from others or the state, then it makes sense that

much of the literature would focus on seclusion, separation from the public eye, and the

exclusion of others from certain aspects of personal life. These conceptions align closely

with Locke’s theory of property and individual rights and yet do not adequately protect

privacy.

Warren and Brandeis (1890) began by conceiving of privacy as some form of

separation when they argued that modern technology had made “solitude” and “retreat from

the world” more necessary than ever (p. 196). Anita Allen (2001) explained her vision of

privacy by listing examples that involved seclusion: “4,000-square foot homes nestled among

mature trees in bucolic suburbs,” “vacation[ing] at remote resorts,” and “spend[ing] an hour

alone with a book behind closed doors” (p. 301). She was suggesting that any everyday and

theoretical concept of privacy had to include some measure of aloneness or separation

because otherwise, the public had access to us. Public access, then, was the opposite of

privacy. David O’Brien (1979) echoed this seemingly symbiotic relationship when he called

privacy “the existential condition of limited access” brought on by the condition of being

alone (p. 15-16). For Sissela Bok (1983), privacy was “the condition of being protected

from” others (p. 10-11), a point noted decades earlier by Edward Shils (1966): the life we
live in private is “a secluded life, a life separated from” society (p. 283). And Howard White

(1951) stood on Warren’s and Brandeis’s shoulders when he similarly described privacy as a

“right against the world,” or a right that makes sequestration possible and keeps us free from

all manner of intrusions by others (p. 171-172). It seems, then, that the separation idea has

taken hold in the legal, political, and philosophical literatures on privacy.

This understanding is common among social theorists, as well. Donald Ball (1975), a

sociologist, defined privacy as “the ability to engage in activities without being observed” (p.

260). The psychologists Robert Laufer and Maxine Wolfe (1977), who studied notions of

privacy among youth, understood it to be the process of separation of an individual from his

environment (p. 26-27). That separation could be physical—literally hiding away in a

space—or psychological—denying warmth, whispering, showing emotional distance. But in

each case, a personal zone was created. Raymond Williams (1985), a cultural critic and

historian, understood privacy to be “the ultimate generalized privilege … of seclusion and

protection from others (the public)” (p. 243). Notions of seclusion and protection

necessarily take on a “me against the world” bias, privileging the individual as the locus of

privacy rights.

They also have distinct spatial overtones (Nissenbaum, 2004, p. 111-113). Much of

the social science literature conceiving of privacy as sequestration uses the rhetoric of spaces,

territories, walls, and other indicators of literal separation to support theoretical arguments.

For example, Joseph Rykwert (2001), an historian of the ancient world, argued that there was

a direct correspondence between ancient conceptions of privacy and the women’s rooms in

the home, on the one hand, and public behavior and the men’s rooms, on the other (p. 34).

The distinction in the home was literal. In his work on secret societies, Georg Simmel (1906)

not only argued that “detachment” and “exclusion” were necessary for the success of a
secret organization, but analogized the role of the secret to a wall of separation: “Their secret

encircles them like a boundary, beyond which there is nothing” (p. 484). And when the

sociologist Robert Maxwell (1967) wanted to study sexual intimacy in pre-industrial societies,

he chose to study wall construction, material permeability, and hidden spaces to determine if

there was a relationship between intimacy norms in the greater society and private behavior.

For other scholars, the evidence is in the rhetoric they use to explain their views on

privacy. Jeffrey Rosen (2001) talked about Hillary Clinton’s decision to tolerate her

husband’s extramarital affair as a decision “shielded” by privacy (p. 217). Milton Konvitz

(1966), a legal theorist, argued that privacy is a “sphere of space” that the public cannot enter

or control. For yet others, privacy requires “boundaries” and a “territory” all our own that

is “insulated” from the rest of the world (Simmel, A., 1971).

An admittedly cursory reading of the work of Erving Goffman (1963a, 1963b, 1972)

echoes the privacy-as-sequestration idea with similar spatial analogies. Goffman (1963b)

defined private places as “soundproof regions where only members or invitees gather.” They

are regions physically bounded (p. 132-133) by walls or doors (p. 151-152) that offer physical

separation between people and between different kinds of social interaction. Stalls are the

perfect examples (Goffman, 1972, p. 32-33). Clothing, personal possessions, and spaces that

you own also provide individuals with a certain amount of spatial privacy, allowing total

exclusion of others (p. 38).

Goffman’s back stage/front stage distinction is the best analogy for a spatial theory

of privacy. In The Presentation of Self in Everyday Life, Goffman (1959) analyzes social

interaction through an extended theatrical conceit, comparing individuals to actors on a

stage. He separates the front stage, where the performance of social interaction occurs, from

the back stage, where individuals can drop the façade of performance. And he describes
them as places, or “setting[s]” (p. 107). The back stage is a place of hiding, so that devices

like telephones, closets, and bathrooms “could be used ‘privately’” (p. 112-113). It is also cut

off from the front stage by a partition, passageway, or curtain. The back stage, then, is

defined by providing the performer with a private space—like a home, a green room, or a

bathroom—to do certain necessary things away from an audience.

The legal implication of this theory is to conceive of a right to privacy as a right to

exclude, which reflects the Lockean origins of the argument. Ruth Gavison (1980) defined

privacy as a “limited right of access” by others to our private spaces (p. 421). She was not

alone in making that argument (Bok, 1982; Jourard, 1966; Van Den Haag, 1971; O’Brien,

1979; Gross, 1967). While calling for greater social research into the area, Alan P. Bates

(1964), a sociologist, considered the minimal social science literature on privacy and defined

the concept as “a person’s feeling that others should be excluded from something which is

of concern to him” (p. 429). That is, much like the law of trespass, a tort for unauthorized

encroachments onto another’s land, a theory of privacy based on space and separation

necessarily includes the attendant right to exclude others and to determine who should gain

entry. And this right to exclude reflects the Lockean liberal tradition. Locke (1689/1980)

believed that we own ourselves and, therefore, own the fruits of our labor (§§ 25-27). We

can exclude others from our property (§ 123), and so can a theory of privacy based on

sequestration and analogized to spaces and territories allow us to exclude others from our

private sphere.

Warren and Brandeis (1890) understood this when they used Lockean ideas of

personal ownership to argue that our “inviolate personality” mandated legal protection from

intrusion by government and private actors (p. 205). Common law intellectual property laws

allowed individuals to control the publication of their cultural creations. They offered
protection of profits and the ability to prevent publication at all (p. 200). But the authors felt

that this basic concept of personal property could not solely be based on the creative or

innovative aspects of the underlying artifact (p. 202-203). After all, one could have a

collection of coins that he would like to keep private and it would be unjust to allow another

to publish a catalogue of those coins even though the coins could not be considered

intellectual property in any sense (p. 203). Rather, the “protection afforded to thoughts,

sentiments, and emotions expressed through the medium of writing or of the arts, … is

merely an instance of the enforcement of the more general right of the individual to be let

alone” based on the Lockean principle that we own ourselves (p. 205). The same principle

animated Jeffrey Reiman’s (1984) view that privacy “confer[s] title to one’s existence” and

allows us to claim ownership over our thoughts and actions because the private world is

entirely our own (p. 310). Similarly, Larry Lessig’s (2002) conception of privacy-as-property

is based on the same notions of separation and Lockean personal ownership.

But although they retain fidelity to individual rights, principles of separation and

exclusion do more harm than good.22 The attendant spatial analogy has become so pervasive

in law that, at times, it has limited personal privacy. It used to be the case that violations of

the Fourth Amendment, which guarantees freedom from unreasonable searches and seizures

at the hands of the government (U.S. Const. amend. IV), depended upon a physical invasion

of a private place, like a home. In Olmstead v. United States (1928), for example, the Supreme

Court rejected a Fourth Amendment challenge to a warrantless wiretap because the tap, by

22Daniel Solove (2004) offers a powerful critique of privacy-as-property: “When personal information is
understood as a property right, the value of privacy is often translated into the combined monetary value of
particular pieces of information. Privacy becomes the right to profit from one’s personal data, and the harm to
privacy becomes understood as not being adequately paid for the use of this ‘property’” (p. 88-89). Professor
Solove goes on at some length to discuss the difficulties with this theory.

virtue of the fact that it was installed on the outdoor phone line and did not require entry

into the suspect’s home, could not constitute a search: “There was no searching. There was

no seizure. The evidence was secured by the use of the sense of hearing and that only. There

was no entry of the houses or offices of the defendants” (p. 464). Where there was no entry,

or no intrusion into the private space, there was no search.23 Although Olmstead has been

overturned, the idea of private spaces that animated Olmstead still threatens to limit privacy

protections. In California v. Greenwood (1988), for example, the Court found no privacy

interest in garbage when placed at the curb of a home: after all, if we “deposit[] … garbage in

an area particularly suited for public inspection and … public consumption, for the express

purpose of having strangers take it,” we cannot reasonably expect to maintain privacy in that

discarded trash (p. 37). As Katrin Byford (1998) has noted, a spatial theory of privacy will

undermine privacy online, where physical spaces, as such, do not exist: “A territorial view of

privacy, which associates the concept of privacy with the sanctity of certain physical spaces,

has no application in a realm in which there is no space” (p. 40). This not only has the effect

of erasing privacy from the virtual world, but also, as Mary Anne Franks (2011) has argued, it

implies that internet life, and any injuries that occur in it, are less real and less worthy of

protection or redress. She calls this phenomenon “cyberspace idealism,” and it is a

dangerous, perhaps unintended effect of conceptualizing privacy around sequestered spaces

(p. 226).

23 Justice Brandeis, of course, famously dissented, arguing that the right to be let alone that he and Warren
articulated decades earlier meant that a physical invasion was not required for an act of intrusion to constitute a
privacy violation (Olmstead, 1928, p. 470-485). Brandeis wrote that “[t]he protection guaranteed by the
amendments is much broader in scope. The makers of our Constitution undertook to secure conditions
favorable to the pursuit of happiness. They recognized the significance of man’s spiritual nature, of his feelings
and of his intellect. They knew that only a part of the pain, pleasure and satisfactions of life are to be found in
material things. They sought to protect Americans in their beliefs, their thoughts, their emotions and their
sensations. They conferred, as against the government, the right to be let alone-the most comprehensive of
rights and the right most valued by civilized men” (p. 478).

More broadly, conceiving of privacy as detachment or separation and using a spatial

analogy to make sense of it has logical limitations. It ignores the fact that people can find

privacy in public places. It also tells us little more than the mere fact that there are private

places and public places and, therefore, cannot describe the contours of either. We are left

with either no clear path to understand privacy or one so absolute yet narrow that we start

treating invasions of privacy like trespasses onto land.

Section 2.1.2: Intimacy, Secrecy, and Deviance

If a theory of privacy based on separation, exclusion, and self-ownership seems too

rigid and unrealistic, some privacy scholars avoid the spatial analogy and its attendant

difficulties by looking to what things are private, not where they are kept. Private things, like

secrets, can go anywhere and retain their private nature. Conceiving of privacy this way also

comports with the common understanding that intimate information—sexuality, medical

diagnoses, personal histories—is central to what we consider private. But while these

theories retain the Lockean and Kantian presumption of individual inviolability and are

reflected in Supreme Court jurisprudence, they too narrowly circumscribe privacy and are

often burdened by normative judgments about concealed information.

Much of the literature on privacy centers on intimacy even when it overlaps with

theories of separation and exclusion. For example, to explain his theory of public versus

private, Howard White (1951) offered examples of privacy intrusions to which he expected

we could all relate: a question about a military cadet’s sexual orientation (p. 180),24 an inquiry

into why parents only had one child, and questions from interviewers for the Kinsey Reports (1948, 1953).

24Before its final repeal in 2011, the armed services’ so-called “Don’t Ask, Don’t Tell” policy banned gay
service members from serving openly and admitting their sexual orientation. Professor White (1951) is arguing
that the question would still be considered an intrusion into the private sphere regardless of the law.

Robert Gerstein (1984) and Jeffrey Rosen (2000) both argued that intimate relationships

need privacy to function and flourish. And despite the fact that they both concluded that

individual privacy includes some measure of control over information dissemination, Jean

Cohen’s (2001) and Julie Inness’s (1992) conceptions of privacy are bound up with intimacy.

To Professor Cohen (2001), privacy is about choice, but the choice is about “whether, when,

and with whom one will discuss intimate matters” (p. 318-319). For Professor Inness (1992),

privacy is the “state of the agent having control over the realm of intimacy, which contains

her decisions about intimate access to herself … and her decisions about her own intimate

actions” (p. 56-57). In other words, what links all areas of privacy is the common

denominator of intimacy, which draws its value from an individual’s sense of love, caring,

and liking (p. 78). To these scholars, intimacy is the “chief restricting concept” in the

definition of privacy (Gerety, 1977, p. 263).

It also reflects the same Lockean and Kantian concepts of personal inviolability as

other theories of privacy. If, according to Locke, we own ourselves, then the pieces of

ourselves we keep closest to our hearts—namely, intimate details—are at the core of what

society is meant to protect. Similarly, we could analogize intimate information to that which

defines us in Kant’s purely rational and autonomous realm. If our inclinations, wants, and

desires make us all fungible subjects in the physical world, it is the world of pure autonomy

that defines who we are as individuals. The same could be said of intimate information, thus

elevating intimacy to the center of a right to privacy protected by society.

One advantage of privacy-as-intimacy is that the concept is already reflected in

various federal statutes and in Supreme Court decisions on due process, ranging as far back

as 1923. The Family Educational Rights and Privacy Act (1974) protects information about

students, the Right to Financial Privacy Act (1978) guarantees secrecy over certain financial
holdings, and the Health Insurance Portability and Accountability Act (1996) provides some

security for our health data. All of the information covered by these statutes—about our

children, our money, and our health—has traditionally been considered among the most

private because of its intimate nature. Control over intimate parts of our lives has also been a

long-running theme in the Supreme Court’s due process jurisprudence. Though the Court

never mentioned the word “privacy,” its decisions in Meyer v. Nebraska (1923), which struck

down a law prohibiting the teaching of foreign languages in elementary schools, and Pierce v.

Society of Sisters (1925), which struck down a law requiring that all children attend public

schools, suggest that there was something special, or intimate, about the parent-child

relationship and the family unit. Both laws at issue in Meyer and Pierce intruded into the

parents’ process of raising their children as they saw fit. Furthermore, cases like Griswold v.

Connecticut (1965), Roe v. Wade (1973), and Lawrence v. Texas (2003) reflect the Court’s concern

for the protection of intimacy, whether through a constitutional right to privacy or a more

general principle of liberty. Griswold used the penumbras of several guarantees in the

Constitution to argue that a right to privacy protected a married woman’s access to

contraception. Justice Douglas concluded his opinion by connecting the intimacy of the

marital union with the right to privacy:

We deal with a right of privacy [in marriage]… . Marriage is a coming together for
better or for worse, hopefully enduring, and intimate to the degree of being sacred. It
is an association that promotes a way of life, not causes; a harmony in living, not
political faiths; a bilateral loyalty, not commercial or social projects. Yet it is an
association for as noble a purpose as any involved in our prior decisions (p. 486).

In Roe (1973, p. 169-170), the Court enshrined a woman’s right to decide to terminate a

pregnancy on similar privacy grounds. And in Lawrence (2003), the Court struck down a state

anti-sodomy law on the ground that gay persons, like all others, enjoy a liberty interest in

intimate association: “When sexuality finds overt expression in intimate conduct with
another person, the conduct can be but one element in a personal bond that is more

enduring. The liberty protected by the Constitution allows homosexual persons the right to

make this choice” (p. 567). In all three cases, the intimate and personal nature of the act in

question—contraception and family planning, birth and pregnancy, and sodomy and sex—

was at the center of the Court’s rhetoric and substance.

But it is not clear what limits intimacy. For the Court, intimate conduct was

something personal, perhaps sexual or familial, but it offered no clear limiting principle.

Professor Inness felt that intimacy includes a heart-felt emotional component; to Tom

Gerety (1977), intimacy was a state of “consciousness” where you have access to your own

and others’ bodies and minds (p. 268). Charles Fried (1968) defined intimacy as sharing

personal information with a select few close associates, which is a narrower conception of

intimacy than those of Professors Inness and Gerety. Therefore, limiting privacy to intimacy,

variously defined, is unhelpful.

The sociologist’s conception of intimacy does not suffer an indeterminacy problem;

it is almost universally bound up with individual or group secrecy. In his seminal article, The

Sociology of Secrecy and of Secret Societies, Georg Simmel (1906) concluded that privacy is a

“universal sociological form” defined by hiding something (p. 463). It is universal in that we

do it all the time: if all relationships between people are based on knowing something about

each other, keeping certain facets of ourselves hidden can define those relationships. This

does not necessarily mean that the person who knows more about us is more correct in his

assessment of who we are; rather, different pictures of us are true for different people (p.

443-445). Secrecy, therefore, allows us to do things and maintain relationships we would not

otherwise be able to in a world of complete knowledge.

Simmel’s theory has one distinct advantage over any conception of privacy based on

separation and exclusion: his discourse on secret societies can help us understand when a

secret has ceased to be private. Privacy-as-separation fails in part because it is too

strict—privacy can be eroded when one other person gains access. For Simmel, a secret can

maintain its private nature, its inherent secrecy, throughout a group of people when keeping

the secret is part of the identity of that group. Members of secret societies “constitute a

community for the purpose of mutual guarantee of secrecy” (p. 447). They define

themselves by engaging in rituals and through separation from the rest of society (p. 484, p.

485). This does not just happen in cults; social cliques turn their backs on others or deny

conversation to outsiders and groups of friends maintain each other’s secrets all the time. In

all cases, the group is defined by what it knows and it expresses its privileged status by

closure.

The sociologist Diane Vaughan (1990) connected this conception of secrecy with

intimacy in her study of how couples separate. “We are all secret-keepers in our intimate

relationships,” Professor Vaughan argues (p. 11). Secrets can both enhance relationships, by

smoothing over differences or by creating the intimacy of co-conspirators, and contribute to

their collapse, by allowing plans to be developed without open inspection, intrusion,

consent, or participation from others (p. 13). And Erving Goffman (1959) would agree that

this type of secrecy is an important element of privacy. “If an individual is to give expression

to ideal standards during his performance,” Goffman writes, “then he will have to forgo or

conceal action which is inconsistent with these standards” (p. 41). In this view, privacy is the

concealment of things that contradict an individual’s public façade: the “private sacrifice” of

some behavior will permit the performance to continue (p. 44). This is what the back stage is

really for. It is not, as a spatial theory of privacy would suggest, a room, stall, or secluded
place; rather, it is the locus of private behavior, of secrets. For example, servants use first

names (p. 116), workers laugh and take breaks (p. 114), and management and employees may

eat together and converse informally (p. 116). In some cases, this culture is associated with a

space;25 but it is what we do in the back stage, the secrets we hide there, that defines it.

idea of privacy as based on secrecy was echoed by Judge Posner (1981): “[T]he word

‘privacy’ seems to embrace at least two distinct interests[,] … [including] concealment of

information, [which] is invaded whenever private information is obtained against the wishes

of the person to whom the information pertains” (p. 273).

But there are two central failures of understanding privacy as a means of keeping

secrets. First, as Dan Solove (2004) has argued, American privacy law has adopted a rigid

and uncompromising form of privacy-as-secrecy: once the secret is out, even to one other

person, both the secret and its attendant privacy interest are extinguished. Professor Solove

called this the “secrecy paradigm” and lamented its domination of our approach to privacy,

especially given the pervasiveness of modern technologies that require us to reveal

information to third parties. Second, we have a tendency to conceive of secrets as

discrediting, embarrassing, or, to use the sociologist’s term, deviant. Deviance refers to

behavior that violates the norms of some group (Vaughan, 1996, p. 58).26 A tilt toward

25Consider, for example, the British television series, Upstairs, Downstairs, and the PBS Masterpiece Classic,
Downton Abbey. Both of these series depict the behaviors of servants, who live “downstairs,” and their
aristocratic masters, who live “upstairs.”

26Ball (1975) has a similar definition: “deviance occurs when one engages in activities which are recognized as
infractions of collectively held rules or norms to which are attached varied punitive sanctions as social control
mechanisms” (p. 260).

deviance, in turn, places a severe limitation on using secrecy to justify a legal right to privacy:

if our secrets are so discrediting, society would rarely, if ever, see a need to protect them.27

Much of the sociological discourse on secrecy and intimacy as it relates to privacy

devolves into a normative moral judgment about those secrets. Despite the fact that he

professes to make no such judgments, Goffman (1963b) spends ample time listing secret,

hidden behaviors that make us vulnerable vis-à-vis others.28 The back stage is littered with

“dirty work” (Goffman, 1959, p. 44) and “inappropriate” conduct done in “secret” if it was

fun or satisfying in some way (p. 41). From this introduction of the back stage, Goffman

only further burdens it with a normative twist. People “lapse” in the back stage (p. 132),

drifting toward indecorous behavior (p. 108). They laugh at their audience, engage in mock

role-playing, and poke fun through “uncomplimentary terms of reference” (p. 174). They

derogate others and brazenly lie (p. 175) and keep “dark” secrets (p. 141). Behind

involvement shields, individuals do “sanctionable” or “unprofessional” things, like nurses

smoking in a tunnel or adolescent horseplay outside of the view of others (Goffman, 1963b,

p. 39). Goffman (1963b) also points to the little misbehaviors—activities he calls “fugitive

involvements,” no less (p. 66)—that you can engage in when outside the public view:

While doing housework: You can keep your face creamed, your hair in pin curls; …
when you’re sitting at the kitchen counter peeling potatoes you can do your ankle
exercises and foot strengtheners, and also practice good sitting posture. … While

27Based on Justice Harlan’s concurring opinion in Katz v. United States (1967), privacy rights in the United States
have been based on a subjective expectation of privacy that society is willing to recognize as reasonable.
28In fact, he echoes Durkheim when he uses the word “profane” 7 times to describe activities in the private
sphere in The Presentation of Self in Everyday Life and in Behavior in Public Places. Surprisingly, the word was never
used in Stigma. For Durkheim (1912/2001), the profane was the opposite of the sacred; it was the everyday, the
dirty and mundane activities of life that would destroy the sanctity of sacred things if they ever touched: “the
only way to define the relation between the sacred and profane is their heterogeneity … [which] is absolute” (p.
36-38). The same could be said for private activities in the back stage because if any member of the audience
saw what went on beyond the performance (the profane), the façade of the performance (the sacred) would be
destroyed.

reading or watching TV: You can brush your hair; massage your gums; do your ankle
and hand exercises and foot strengtheners; do some bust and back exercises; massage
your scalp; use the abrasive treatment for removing superfluous hair (p. 65).

While I do not argue that Goffman was attaching moral opprobrium to the back stage, a

conceptualization of privacy based on hiding stigmatizing secrets is in danger of

becoming about concealing bad things, not just concealment in general. The anonymity

provided by privacy would not merely allow someone to do something different; rather, it

would allow him to “misbehave,” to “falsely present[] himself” (p. 130), or do the

“unattractive” (p. 66) things inappropriate in the public sphere.

One of Goffman’s major works, Stigma, is entirely concerned with negative or

inappropriate behavior. That may sound like an uninspired conclusion given the title, but

what is most telling is not the mere recitation of stigmatizing activities and things, but rather

the implication that the private sphere is defined by stigma. Stigmas are “discrediting”

(Goffman, 1963a, p. 41), “debasing” (p. 43), and “undesirable” (p. 64). They are “secret

failings” (p. 65) that make us “blameworthy” (p. 78) and “shameful” (p. 140). This moral

judgment pervades the legal, philosophical, and social science literature, as well. For Alan

Bates (1964), privacy does not simply protect against disclosures, but rather against

“humiliating and damaging” ones about which others would “disapprove[]” (p. 433). The

sociologist David Diekema (1992) follows in a similar vein: privacy shields “improper”

behaviors, “transgressions or nasty habits” (p. 487). And Richard Posner (1976) argues that

privacy protections grant people a right to conceal “legitimately discrediting or deceiving

facts” (p. 25). It should come as no surprise, then, that several sociologists define private

spaces as an outlet for deviant, discrediting behavior (Ball, 1975, p. 270; Lofland, 1969, p.

68).

It is hard to deny the moral dimension to this discussion of private behaviors,

activities, and symbols. They are stigmatizing, at worst, or dissonant with normal social

interaction, at best. In either case, there is a moral dimension that burdens privacy with an

attendant profanity; if the private sphere is characterized by dark secrets, or behaviors and

activities that society refuses to tolerate, it is unclear how a right to privacy could ever exist.

Section 2.2: Privacy as Freedom For

The previous theories of privacy reflected the individual’s right to seclude himself

and exclude others from certain aspects of his life, whether intimate, deviant, or not. They

appreciated privacy as guaranteeing freedom from something: private places and private

things were so called because they belonged to the individual, who had the power to control

dissemination. But as we have discussed, these theories are too rigid or too burdened by

moral judgment to adequately capture what we mean by privacy and justify state protection

for a right to privacy.

Several other theories take up the same mantle of individual freedom and look forward,

viewing privacy as a necessary condition for generating the ideals of independence and

autonomy. The argument that privacy protects personhood, or that which constitutes our

essence, emerges directly from Locke’s notion of self-ownership and Warren and

Brandeis’s (1890) derivative theory of “inviolate personality” (p. 205). Conceiving of privacy

as essential to the concepts of autonomy and free choice also stems from liberal theory. But

although self-realization and autonomy are important values and reflected in some Supreme

Court jurisprudence, they offer no pathway toward a workable theory of privacy. Like other

rights-based conceptions of privacy, they could be limitless and harmful.

Section 2.2.1: Individuality, Independence, and Personhood

Like Kant, whose metaphysics demanded that individuals be treated with dignity

rather than as subjects of others, some scholars argue that respecting privacy is a necessary

element of valuing individuals as ends in themselves. Alan Bates (1964) channeled Kant

when he argued that privacy only has meaning in terms of a rational, autonomous self that is

capable of self-consciousness: “privacy refers to an important dimension of [an individual’s]

distinction between … that which is crucial to self and that which has negligible importance”

(p. 432). He could have been talking about intimate information, but he takes as given the

fact that we do not accord privacy rights to children. This suggests that the crux of privacy is

the reasoning and self-awareness that comes with maturation and not necessarily the subject

matter of any secret. Stanley Benn (1971) and Edward Bloustein (1964) express a similar

idea. For Benn, individuals resent being watched because it makes them feel like tools in

someone else’s hands and not as free individuals “with sensibilities, ends, and aspirations of

their own, morally responsible for their own decisions, and capable, as mere specimens are

not, of reciprocal relations” with others (p. 7). Bloustein (1964) adds that privacy invasions

have effects far beyond any physical encroachment or injury: one who is subject to

intrusions is “less of a man, has less human dignity” precisely because his privacy, a

manifestation of his free self, is at risk (p. 974). This view evokes both Kant’s mandate to

treat everyone as ends in themselves and Locke’s notions of self-ownership and his

explanation for creating government out of the state of nature. In both cases, the lack of

individual rights and protection for the person’s life, liberty, and property does violence to

his sense of self and his entitlements as a free, autonomous person.

One of those entitlements is the protection of individuality and free thought and

many scholars argue that privacy plays an essential role in making such independence

possible. In The Spirit of the Laws, Montesquieu (1900) admired British liberty for its
protection of free and independent thought: Britain was likely to create the best scholarship

because the rule of law allowed British thinkers to think alone, beyond the conforming and

biased eyes of the state and others (p. 27). Modern privacy scholars have built on

Montesquieu’s insight. Alan Bates (1964), for example, believed that privacy allowed

individuals to process information before speaking (p. 432) and the philosophers Mark

Alfino and Randolph Mayes (2003) argue that a person requires privacy in order to reason

about his choices (p. 1). That intellectual space both defines the individual and would be

damaged by any interference from the state or society.

A close corollary to this conception of privacy is the notion that privacy provides us

the space necessary to craft and edit ideas before public consumption. This idea, what Julie

Cohen (2003) refers to as “intellectual privacy,” combines interests in personal autonomy

and private spaces (p. 576-577). It offers us the freedom to “explore areas of intellectual

interest” that we might not feel comfortable discussing around other people (p. 579),

including unpopular ideas, deviant ones, or, more importantly, incomplete ones. As Ruth

Gavison (1980) noted, privacy gives us the opportunity to express unpopular ideas first to

sympathetic audience and then, “after a period of germination, [we] may be more willing to

declare [our] unpopular views in public” (p. 450). It is, therefore, an essential part of our

rights of self-determination (Cohen, 2003, p. 577).

The primary advantages of this theory of privacy and personhood are its rhetorical

strength and its ability to move beyond the limited vision of privacy inspired by detachment

and intimacy. If privacy is essential to who we are as free selves, then a right to privacy need

not wait for a physical intrusion into a private space or a revelation of a stigmatizing private

fact. Surveillance, for example, can cause two additional types of injuries. First, as the

philosopher George Kateb (2001) has argued, simply being watched could constitute an
injury because it demeans you as a person (p. 272). As a subject of surveillance, you are

stripped of your entitlement to freedom as a self-aware individual in a free society; you are

“oppress[ed],” “degrad[ed]” (p. 275) and made the subject of others. Channeling Locke and

Kant, he argues that privacy allows us to truly own ourselves and treat ourselves as

autonomous and “inviolable” (p. 277-278). Stanley Benn (1971) explained that you begin to

see yourself in a new light, “as something seen through another’s eyes,” which “disrupt[s],

distort[s], or frustrate[s]” your ability to think and act on your own (p. 7). Second, Jeffrey

Rosen (2000) implied in The Unwanted Gaze that being watched, surveilled, and studied can

lead to discrimination. For Rosen, privacy protects us from “being misdefined and judged

out of context in a world … in which information can easily be confused with knowledge”

(p. 8). Data aggregators used by private companies and government agencies can take

incomplete or inaccurate information about us and categorize us in ways that limit our

opportunities (Pasquale, 2014a; 2014b). Sometimes this is relatively innocuous, like when

Google uses the information in an Orthodox Jew’s emails to suggest a banner advertisement

for ChristianMingle.com. In other cases, it can be devastating: a health care company, for

example, denied coverage to an individual applicant when it found antidepressants in her

prescription history and assumed (incorrectly) that she had a severe neurological disorder

(Terhune, 2008).

Dan Solove (2002) has pointed out that this rich concept of personhood is already

reflected in long-standing Supreme Court jurisprudence on privacy and liberty (p. 1117). In

the 1891 case Union Pacific Railway v. Botsford, the Court held that a party in a civil case could

not be compelled to submit to a medical examination because man has the right “to the

possession and control of his own person, free from all restraint or interferences” (p. 251).

Later, when the Court had occasion to rule on a woman’s right to choose, it explained the
importance of decisions like contraception, family planning, sex, and terminating a

pregnancy: “At the heart of liberty is the right to define one’s own concept of existence, of

meaning, of the universe, and of the mystery of human life. Beliefs about these matters could

not define the attributes of personhood were they formed under compulsion of the State”

(Planned Parenthood v. Casey, 1992, p. 851). Granted, activities we could consider “intimate”

were at the center of these cases; but the freedom to make those decisions is about more

than their sexual nature. Rather, the Court seemed to suggest, these decisions defined what it

means to be treated with dignity as an autonomous individual in a democratic society.

This theory of privacy seems to inspire the most lyricism and poetry from scholars

and the courts, but it also appears completely boundless. Professors Benn, Bloustein, Kateb,

and others never explain what they mean by “personhood” other than by reference to

amorphous philosophical concepts. Nor do they move beyond explaining why we should value privacy to showing how courts might apply those values. Therefore, the theory

cannot help judges articulate a workable solution to practical questions of privacy law.

Section 2.2.2: Autonomy, Choice, and Control

Existing alongside all of these theories of privacy are the concepts of autonomy and

choice: the choice to disseminate information or the choice to marry a same-sex partner, for

example, and the correlative right to control what others know about us. Seen in this way,

privacy is about the freely choosing self, exercising his liberty in a democratic society. But

like other theories of privacy, privacy-as-choice or control either threatens too broad a reach,

providing judges with no adjudicative path and pushing scholars toward intellectual

confusion, or actually injures personal privacy.

Autonomy and choice are central to both Locke and Kant, as both agree that the

freedom to choose defines man. Locke (1689/1980) sees the state as a servant of individual
rights because man, while in a state of pure equality in the state of nature, chooses to join

together in government (§ 123). For Kant (1785/2005), autonomy and choice are part of

man’s transcendental rational nature: true freedom is only possible in an intelligible realm

detached from the things that hold us back as humans (p. 71-72). Neo-Kantian liberalism

takes the freedom embodied by pure rationality in the intelligible realm and argues that

freedom is the right to choose one’s own ends free of state interference (Rawls, 1971;

Nozick, 1974). As John Rawls (1971) stated in A Theory of Justice, “a moral person is a subject

with ends he has chosen, and his fundamental preference is for conditions that enable him to

frame a mode of life that expresses his nature as a free and equal rational being as fully as

circumstances permit” (p. 561). Choice, therefore, is at the core of the liberal ideal.

This choosing self is evident in the conventional understanding of privacy as the

individual’s right to choose what the public will know about him. Jean Cohen (2001) argues

that privacy is the “right to choose whether, when, and with whom” to share intimate

information (p. 319). Charles Fried (1968) suggests that different groups of friends exist

because we actively choose to share more with intimate friends and less with acquaintances

(p. 484). This free choice gives us the right to control public knowledge of our personal

selves. Privacy, then, “is the claim of individuals, groups, or institutions to determine for

themselves when, how, and to what extent information about them is communicated to

others” (Westin, 1967, p. 7). It is, to Julie Inness (1992), the idea that an individual has

“control over a realm of intimacy” (p. 56) and, to Jonathan Zittrain (2000), control over our

information, in general (p. 1201). For the philosopher Steve Matthews (2010), exercising

privacy is making the “choice” to “control and manage” the boundary between ourselves

and others (p. 351). The common denominator is free choice and control, both of which are

central to the rights ideal.


In his compelling text, The Digital Person, Dan Solove (2004) argued that the salient

problem with private intermediaries and governments amassing digital dossiers about

citizens is the loss of individual control over personal information (p. 90). Collecting data

that are already available or required for doing business, Solove argues, does not injure

personal privacy in the conventional sense; that is, there is no “discrete wrong” that occurs

through the behavior of some “particular wrongdoer[]” who, say, discloses personal

information to the media. Rather, the problem is structural. Data are collected without

sufficient controls, so Solove recommends a new architecture of data collection that “affords

people greater participation in the uses of their information” (p. 102). He recommends

starting with the Fair Information Practices, a series of recommendations from the Department of Health, Education, and Welfare in 1973 that are predominantly focused on ensuring

individuals have control over their personal data. The guidelines include recommendations

for no secret record-keeping, a pathway for individuals to read their records, a way for

individuals to prevent information collected for one purpose from being used for others, and a method of

correction and amendment (p. 104, p. 152). At their core, these recommendations aim at

shifting control over data from the collector (an intermediary or a government agency) back

to the source of that information (the individual). Professor Solove’s innovative proposals

have revolutionized our discussion of digital dossiers. For present purposes, what matters is that his theory is

based on a conception of privacy that, at least in part, assumes that individual control over

personal information is part of a just privacy regime.

This, however, is a problematic way of understanding privacy for four reasons. First,

it can be too broad. If privacy is all about choice, its exercise becomes entirely subjective,

limited only by an individual’s personal choice of what to reveal and when (Tverdek, 2008, p.

64). A rule based on this theory would leave everything up to the individual and offer society
no opportunity to value other concerns over personal privacy. This is not only unworkable,

but also dangerous: online harassers who target their victims behind a veil of pseudonymity

are choosing not to disseminate their identities; it is difficult to see how a theory of privacy

based on choice and control alone could honestly argue against an absolute right for them to

remain pseudonymous.

Second, this conception of privacy may undermine itself. Privacy as choice, control,

or management over what others know damages privacy rights because it turns all revelation

into a conscious volitional act. Courts have run with that presumption and have concluded

that individuals assume the risk that any disclosures to third parties could result in wider

disclosure to others or the government, thus extinguishing privacy interests in all previously

revealed information. A telephone user, for example, “voluntarily convey[s] numerical

information to the telephone company … [and] assume[s] the risk” that the telephone

company would subsequently reveal that information (Smith v. Maryland, 1979, p. 744). A

bank depositor has no legitimate expectation of privacy in the financial information freely

given to banks because the depositor “takes the risk, in revealing his affairs to another, that

the information will be conveyed by that person to the Government” (United States v. Miller,

1976, p. 443). And this doctrine has been extended to the Internet. Several federal courts

have held that since any information conveyed to an online service provider in order to

access the Internet is “knowingly revealed,” there could be no invasion of privacy when an

Internet service provider (“ISP”) gives that information to someone else (United States v.

Hambrick, 1999; United States v. Kennedy, 2000). Therefore, although the ideals of autonomy

and free choice appear to empower the individual with all powers of disclosure, the theory logically leads to an evisceration of personal privacy rights.

Third, it is not at all clear that greater individual control over personal information

would create a better, more just regime. Even Professor Solove (2004), whose

recommendations for creating a more just privacy regime are, in part, dedicated to giving

greater control to individuals, admits that control will not always do much good: “people

routinely give out their personal information for shopping discount cards, for access to

websites, and even for free,” he concedes (p. 87). Citing Julie Cohen (2000), Solove notes

that individuals are incapable of exercising adequate control over each individual piece of

information because they cannot comprehend the enormity of the value of the sum of those

pieces. And, as Alessandro Acquisti (2005) has shown, individuals are willing to give up their

information for exceedingly meager rewards (p. 24). Therefore, a privacy regime based on

control over our data runs up against a brick wall.

Fourth, and finally, all of these conceptions of privacy are based on a flawed liberal

or neo-Kantian assumption of the ideal self as a fully autonomous agent of choice (Sandel,

1996; Sandel, 1998). This fails to capture the social reality of our online experience where we

are mediated by intermediaries and tethered to myriad ties and communities that form a sui

generis society. We are, then, not liberal agents, but social, or Durkheimian, ones. A Kantian

conception of the self, upon which rights-based theories of privacy are based, implies that

ideal online society would be one of pure autonomy and freedom. That ideal is impossible

online. Like the Durkheimian man born into and coerced by social norms, the virtual self is a

mediated self, never truly autonomous. He has only second-hand control over the content

he sees, as all content and all online interactions occur over platforms run, organized and

censored by private companies like Facebook, Google and Yahoo. Like a man situated

within society, where social norms govern and mediate his experiences, the virtual self’s

online experience depends upon his relationship with Internet intermediaries and the

bilateral obligations between them.

The virtual self is not a free and autonomous agent of choice, either; rather, he is

mediated in two related ways: First, every online interaction is governed by an intermediary

that helps determine what content is available. Second, by identifying preferences and

interests, the virtual self allows intermediaries to “push” tailored content toward him, further

limiting the orbit of speech at his disposal to that in which he has previously expressed an interest. Both of these facts suggest that the virtual self is bound up with the social space intermediaries create for him.

An online intermediary “facilitates” interactions among third parties on the Internet:

they include Internet service providers (ISPs), like Comcast, Earthlink or Netzero; web

hosting providers, like Go Daddy; search engines, like Google or the erstwhile AltaVista; e-

commerce platforms, like eBay; Internet payment systems, like PayPal; and participative

networking platforms, like blogs and wikis (Organisation for Economic Co-operation and

Development, 2010). Every online interaction is filtered through some intermediary. David

Ardia (2010) explains the pervasiveness and essential role of online intermediaries through a

seemingly simple example: uploading a video onto YouTube. First, the user goes to

www.youtube.com using, say, Google Chrome. That process already involved numerous

intermediaries:

All Internet communication is accomplished by splitting the communication into data packets that are directed by specialized hardware known as routers, which are operated by intermediaries throughout the network. These routers identify computers on the Internet by their Internet Protocol (IP) addresses, … [T]he domain name system (DNS) allows mnemonic names to be associated with IP addresses. When an Internet user enters one of these domain names into her web browser, for example YouTube.com, her computer sends a request to a DNS server, typically operated by her Internet Service Provider (ISP) or another intermediary that maintains a lookup table associating the name with a specific IP address (p. 385-386).

Once at the YouTube website, the user signs on and uploads the video. But the video does

not go directly to YouTube; rather, the video goes from the user’s computer onto a network

run by an ISP, which in turn sends the data via “multiple intermediaries that provide ‘peering

connections,’ to the network owned by the ISP that services YouTube” (p. 386). In other

words, the user’s ISP sends data through fellow, or “peer,” ISPs to the provider that runs

YouTube. From there, the data go to YouTube’s servers, which will host the video. And

when someone else wants to view this video, the sequence is reversed: data go from

YouTube’s servers through to YouTube’s ISP and through peers until it reaches the viewer’s

ISP and, ultimately, the viewer’s desktop, laptop or mobile device.
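Ardia's chain of intermediaries can be sketched schematically in a few lines of Python. This is an illustration only, not a real network stack: the lookup table, the example IP address (drawn from the reserved documentation range), and the hop names below are all hypothetical stand-ins for the entities Ardia describes.

```python
# Schematic sketch of Ardia's point: even a "simple" upload is handled by a
# chain of intermediaries, each of which can observe or filter the traffic.
# All names and addresses here are hypothetical placeholders.

DNS_TABLE = {"youtube.com": "203.0.113.7"}  # lookup table kept by the user's ISP's DNS server

# Each hop is an intermediary between the uploader and the host's servers.
ROUTE = ["user's ISP", "peer ISP 1", "peer ISP 2", "host's ISP", "host's servers"]

def trace_upload(domain: str) -> list:
    """Resolve the domain name, then record every intermediary that
    handles the data on its way to the host's servers."""
    ip = DNS_TABLE[domain]  # the DNS server resolves the mnemonic name to an IP address
    handled_by = [f"DNS server ({domain} -> {ip})"]  # the DNS operator is itself an intermediary
    for hop in ROUTE:
        handled_by.append(hop)  # each hop forwards (and could inspect) the data packets
    return handled_by

# Even this one upload passes through half a dozen intermediaries:
print(trace_upload("youtube.com"))
```

The point of the sketch is structural, not technical: no step in the sequence is unmediated, which is why the virtual self's experience is bound up with the intermediaries that carry it.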

These intermediaries provide two important functions that mediate the virtual self’s

online experiences. First, intermediaries control unwanted content, such as spam and

malware, and unwanted attacks, such as viruses and Trojan horses (Yoo, 2010). Second,

intermediaries not only block bad content, but they help users identify the content they

want. It would be impossible for the average user to sift through an unorganized multitude

of data to find the particular information he needs, so he depends on a variety of “content

aggregators,” such as blogs, search engines, and bulletins to identify and retrieve content (p.

707). The most effective aggregators are adaptive, or those that learn from their users’ habits,

preferences, and previous searches to help them find future content they would likely want

(Lastowka, 2008). This is why Google has generally supplanted every search engine

competitor: its search algorithms are the best at identifying content users prefer.

Section 2.3: Moving Away From Rights

I have argued that the conventional theories of privacy are based on notions of

personal inviolability and individual rights, which means that the goal of state or judicial

intervention has been, traditionally, to protect those rights. I have also critiqued these

theories as either limitless or inelastic or, counterintuitively, damaging to personal privacy

interests. What’s more, even though there is a significant difference between seeing privacy

as a negative or positive right, both views fall back on the same assumptions: the private

world as separate, apart, and in opposition to the public world, and the individual as the

locus of the purposes of privacy. This feedback loop might explain why scholars are so inconsistent when discussing privacy. They use the rhetoric of autonomy when arguing for

privacy-as-separation; they see privacy as controlling dissemination of information, but seem

to think of the information being disclosed as necessarily intimate; and they talk of

personhood and choice when considering deviance and secrecy.29 The end result is the same:

privacy law has been predominantly focused on protecting an individual right to control

dissemination and separate from the public.

Perhaps these rights-based concepts of privacy are simply incomplete; their

problematic implications may just be missing pieces in a larger puzzle. If so, we have two

options. We could give up on privacy or we could recognize that the rhetoric and substance

of rights only gets us so far. Several leading thinkers have taken the first route. According to

Howard White (1951), Rousseau had little positive to say about privacy, finding it

anathematic to the social contract and to a well-functioning state. And Edmund Burke

29 Another explanation is that there is quite a bit of overlap among the various conceptions of privacy. This, of
course, is true. But I argue that the overlap exists because all of the conventional conceptions of privacy are
based on the same liberal ideal and liberal assumptions.

thought that privacy could lead to a breakdown of society because it tended to make men

restless, selfish, and too inwardly focused to care about the common good (White, 1951, p.

190-191). Modern critics see a similar antagonism between protecting individual privacy and

a functioning society. Richard Posner (1978) finds it incongruous for the state to rightly pass

laws preventing sellers from making false or incomplete representations about their goods

but to allow an individual to lie or conceal facts to give himself a personal advantage. Amitai

Etzioni (1999) thinks that our obsession with privacy is endangering public health and safety,

preventing us from protecting sexual abuse victims and from keeping children healthy, and

privileging criminality over the common good. And Catherine MacKinnon (1989) argues

that privacy laws codify the liberal principle of non-interference, which has the attendant

effect of enforcing the hierarchical sexual status quo in the unregulated sphere. Privacy, then,

allows men to get away with abuse.

Stringent rights-based privacy protections can indeed lead to worrisome, perhaps

unintended negative effects, as Professors Etzioni, MacKinnon, and many others have

suggested. But we need not give up on privacy. A second approach would recognize that

privacy has value and try to rescue the concept from indeterminacy, inelasticity, and whim.

This was Helen Nissenbaum's (2004) goal in her groundbreaking work on privacy as

“contextual integrity” (p. 102). Like me, Professor Nissenbaum finds conventional

understandings of privacy incapable of explaining why certain modern technological

developments strike us as invasive (p. 102-104). She identified three principles that have

governed much of privacy law to date—protecting individuals from intrusive government

agents, restricting access to intimate information, and curtailing intrusions into private spaces

(p. 107-112)—but found them unhelpful when it came to the “grey areas” posed by vexing

legal questions. Professor Nissenbaum took a ground-up approach and identified the social
science concepts of appropriateness and information flow as the factors that, when

breached, determine our unease about privacy invasions (p. 118-125).

This was also Dan Solove’s (2002; 2006) project in Conceptualizing Privacy. In short,

Professor Solove wanted the legal academy to take a pragmatic approach and remain open to

the revolutionary concept that privacy may not be reducible to one common denominator.

In this context, pragmatism alludes to a bottom-up, context-based study of privacy that

learns from specific examples of intrusions into privacy rather than a top-down, universalist

approach. He asks us to act like “cartographers, mapping out the terrain of privacy by

examining specific problematic situations” (2002, p. 1127) and takes the pragmatist John

Dewey’s advice to begin philosophical inquiry with experience, not abstract principles (p.

1091). In other words, we should become experts in the problems of everyday life and adapt

theory to social change. Professor Solove would like us to be sociologists, and I would like

to take his advice.

CHAPTER THREE:
The Theory
I have so far argued that privacy scholarship has, for the most part, been founded on

rights-based principles. I have also shown the limits of a rights-based approach: if they ever

fully captured what we mean by privacy, rights-based theories are in any event incapable of

comprehending the role privacy plays in a modern world in which terabytes of data, gleaned

from cookies and web beacons, can predict our behavior, categorize our interests, and help

web-based platforms tailor our internet experiences (Pasquale, 2014a). I would now like to

take up Professor Solove’s sociological challenge and show that privacy, at least in the

information sharing context, is really a social construct based on trust and that the salience

of trust in our sharing behavior is empirically observable.

Trust is broader than just our intuitive conception of confidentiality: it is a social

norm of interactional propriety based on our expectations of others’ behavior. It is this

conception of trust that animates our current and future sharing and disclosing behavior.

Therefore, privacy law—the collective judicial decisions, legislative enactments, and

supporting policy arguments regulating disclosures, searches and seizures, data aggregation,

and other aspects of informational knowledge about us—should be focused on protecting

relationships of trust. This Chapter begins by assessing four nascent attempts to develop a

truly social theory of privacy, all of which represent significant steps forward from the

conventional rights-based theories discussed in Chapter 2. I then make a theoretical and

intuitive argument connecting privacy and trust and discuss in detail what I mean by the

term. I then provide a review of the current social science literature and glean a set of factors

for evaluating whether trust exists in a given situation of disclosure, thereby providing judges

and policymakers with a guide for answering ongoing privacy law problems. In the next

Chapter, I put these theories to the test through a survey of Facebook users.

Section 3.1: Social Theories of Privacy

To speak of a sociological theory of privacy seems counterintuitive. Social things are,

according to Durkheim (1912/2001), “collective representations that express collective

realities” (p. 11). Whereas social life involves assembled groups and is a manifestation of

collective thought, privacy law’s traditional focus has been the individual. But privacy

involves our relationship to society, not our departure from it. That it has been interpreted,

predominantly, in one way reflects convention, not insight. This is evident in two ways. First,

there are those that admit that privacy is socially constructed but immediately assume that it

is for an individualistic purpose. Jeffrey Reiman (1976) admits that privacy “is an essential

part of the complex social practice by means of which the social group recognizes—and

communicates to the individual—that his existence is his own.” Alan Bates (1964) suggests

that privacy is indeed “a set of norms, sometimes embodied in roles attached to population

categories,” but never teases out what that means and, instead, switches among nearly every

liberal conception of privacy defined in Chapter 2 (p. 432). Social scientists fall into this trap,

as well. Laufer and Wolfe (1977) admit that privacy is “an interpersonal concept” and even

push back on the notion that privacy can be bound up with a place that provides detachment

and sequestration. Privacy, they argued, is neither about hiding nor separation, but

“manag[ing] interaction” with different categories of persons—namely, parents or siblings

(p. 33-34). Their data also suggested that adolescents in different environmental

surroundings (urban versus suburban, for example) interact with others in different ways,

with some choosing varied privacy techniques around separate categories of people. This

implies, at a minimum, that privacy has an interactional or social element. But Laufer and
Wolfe (1977) end up denying this, preferring the traditional vision of privacy as a “form of

noninteraction with specified other(s)” (p. 34). Dan Solove (2002) admits to the social

origins of privacy, but never presses the point. Personal information, he concedes, “is

formed in relationships with others” and may only have value as part of the sharing,

consolidation, and categorization of that information (p. 1113).

Other scholars take the next step and note that privacy serves social purposes.

Howard White (1951) argued that Plato considered privacy valuable when it was used in the

name of the polis: “The statesman in power follows the man of science. This man of science

is a private man in the sense that he fulfills a private task, the task of contemplation of the

political life” (p. 197-198). Donald Ball (1975), when discussing the work of other

sociologists on utopian and other close-knit communities, found that having a private area

for deviant behavior was essential for the success and continued existence of the community.

And Jeffrey Rosen (2000) argued that privacy allows individuals to share personal

information with intimate friends. In all these cases, scholars argued that privacy served a

greater purpose beyond the limits of the individual. Arguably, then, if privacy is socially

constructed and fulfills social and community-based purposes, it makes sense to conceive of

privacy as an essential element of social interaction rather than as an individual’s weapon

against society. Most of these scholars declined to take that next step.

Once we become amenable to the social origins and goals of privacy, our next

project is to determine the nature of those goals. They are what society, the state, and the

judiciary must be marshaled to protect when under attack. As discussed in Chapter 2, a right

to privacy has traditionally been based on protecting personal autonomy and the choice to

separate from society, and to maintain secrecy, independence, and freedom of thought.

Those are important goals in any progressive society. But alone, they shrink privacy and
leave the rump open to erosion. Reorienting the right to privacy to protecting relationships

ameliorates these problems by expanding privacy to include precisely how sharing personal

information operates in real life, the social good of such sharing, and its clear and articulable

boundaries and goals.

There has been a smattering of attempts to craft social theories of privacy, but

although they all share the goal of filling the gaps left by the rights-based understandings of

privacy discussed in Chapter 2, they remain either incomplete or subject to fatal criticism. I

will discuss four here. The first model is what I will call a pure relationship model, where

privacy is determined based solely on the relationship, or lack thereof, between an individual

and someone with access to his or her personal information: something is public when it is

known by those, like strangers, presumably, with whom we have no special relationship, but

still private when it is only known to intimates. The relationship model explains privacy

within defined special relationships like fiduciary and trustee, attorney and client, or doctor

and her patient. The philosopher James Rachels (1984), who defined privacy as a right of

control and access, nevertheless saw relationships as essential; in fact, our ability to

“maintain different sorts of social relationships with different people” was the central goal of

privacy (p. 292). For Rachels, public and private exist on a scale in parallel with a continuum

of relationship closeness: intimates are such because they know personal information about

us, whereas strangers do not. The private world, then, is an intimate world of friends, lovers,

spouses, and close colleagues.

This relationship model is distinctly social: its interpretive tool—the relationship

between us and others—lies beyond the individual and ignores the substance of the

information. In this way, it does not face the absolutist and normative critiques plaguing

conceptualizations of privacy based on autonomy and choice, respectively. It also may rescue
us from the erosion of privacy wrought by Dan Solove’s “secrecy paradigm” because its

relationship-oriented approach presupposes that information can be shared with others—

family, friends, and intimates—and still be considered private. But it nevertheless fails as a

governing understanding of information privacy for several reasons. First, by focusing

exclusively on relationships, the model makes information irrelevant. But that cannot be the

case. Individuals may not be inclined to share embarrassing or stigmatizing information with

intimates and feel perfectly comfortable sharing it with strangers and yet still feel that

this information is private in some sense. Second, the model seems to imply a proportional

and linear relationship between closeness and information shared. But maintaining different

relationships with different types of people, as Rachels suggests is embodied in his model,

does not necessarily require that those closest to us know the most about us. Third, the

model falls back on the assumption, held by many of the rights-based theories discussed in

Chapter 2, that information shared with strangers cannot ever be private. In this way, we

have still not escaped the “secrecy paradigm” trap because anything shared with even one

stranger is considered public under the pure relationship model.

Edward Tverdek’s (2008) modified relationship model consciously picks up where

Rachels left off. Tverdek acknowledges that the public-private divide varies based on an

individual’s relationships with certain others, but tries to take into account the failings of

rights-based and pure relationship models by including variations in information into the

mix. For Tverdek, there are two types of personal information: that which creates “esteem-

based interests” in how we are regarded by others and that which creates “an interest in

preventing practical harms” that could occur if others knew it (p. 71). Those interests only

arise when certain types of interaction partners are involved. Tverdek argues that we may

prefer to hide a stigmatized sexual fantasy from those closest to us, but have few qualms
talking about it to a stranger online (Collier, 2013).30 Further, he suggests that we may barely

safeguard our Social Security Numbers around our spouses, but worry what would happen if

strangers got their hands on them. Tverdek’s is an improved taxonomy, if only because it

recognizes that not all information is fungible and responds to Rachels’s problematic

proportional correlation between closeness and information. But it cannot be an accurate

conceptualization of privacy for several reasons. First, his esteem versus practical distinction

does not fit Rachels’s closeness continuum as neatly as he suggests. Many people might not

be so cavalier about their Social Security Numbers, and most would arguably guard them around

their friends and acquaintances, if not their spouses. And esteem-based interests do not

disappear as intimacy declines, as the son or daughter of a clergyman or local politician

would understand. Second, there is no place for strangers in Tverdek’s taxonomy, leaving us

once again victimized by the “secrecy paradigm.” Third, both Tverdek’s and Rachels’s

models are focused on individual pieces of information, e.g., an identification number, a

stigmatizing illness, a salary. As Frank Pasquale (2014a) and Dan Solove (2004) have noted,

privacy problems in a networked world extend far beyond our concern for the disclosure of

discrete bits of data; rather, it is the aggregation, analysis, and categorization of terabytes of

data about individuals that any theory of privacy must also address. The more analog

relationship models, then, leave us ill-equipped to handle some of the most vexing questions

of modern privacy law.

Although the relationship models take a step toward a sociological theory of privacy,

they do so rather tentatively. They focus on relationships and social interaction, but neglect

30 As Anne Collier (2013) has reported, researchers have found that “taking and sharing nude images is an
established courtship practice within many parts of the gay community and that apps such as Grindr have
popularized the practice considerably.”

the fact that privacy is a social phenomenon not merely because other people exist, but

because privacy is about the social circumstances in which information flows from one party

to another. There are two information flow models in the privacy literature, both of which bring

us closer to filling the gaps left by rights-based theories and addressing modern problems of

privacy law. They remain, however, incomplete.

In an article in the University of Chicago Law Review, Lior Strahilevitz (2005)

suggested that privacy hinges on how information flows among our interaction partners.

Based on ongoing research in social network theory, Strahilevitz eschewed the linear and

proportional correlations in the relationship models and suggested that the nature of the

information and with whom it is shared can determine when a piece of shared information is

so likely to get out of its original circle of recipients that it cannot be the basis of an invasion

of privacy claim when it does. More specifically, the more “interesting” or unusual,

surprising, revealing, or novel a piece of information is, the more likely it will be

disseminated through a network (p. 972). Complex or aggregate information, the sum total

of pieces of data about a person, is not likely to be known outside of close-knit groups and,

therefore, highly likely to stay confidential. But when information is disclosed to a group that

includes highly connected, socially active individuals who are situated in multiple social

networks, the information is likely to be disseminated further beyond the initial group.

Therefore, Strahilevitz argues that if everyone I know, plus several I do not, know something

about me, that information is likely to move through the network and into other networks.

That piece of information is public. But if just my friends know a fact, “but not any

strangers,” then I can expect it to remain with its intended recipients (p. 974). Combining

these factors together, Strahilevitz concludes, allows a judge to see whether the information

originally disclosed was likely to have become “public” regardless of any subsequent

disclosure. If it was, it cannot be the basis for an invasion of privacy claim.

This is a dynamic and powerful idea. Privacy scholarship is richer for Professor

Strahilevitz’s sociological contribution. However, the role of strangers in the calculus is

problematic. Strahilevitz appears to have replaced a draconian bright line rule that

extinguishes privacy rights upon any disclosure with an apparently softer, contextual

sociology that nevertheless retains a draconian bright line rule that extinguishes privacy

rights upon certain disclosures regardless of context, intent, or the presence of trust. He has,

in other words, simply moved the line of Dan Solove’s (2004) “secrecy paradigm” a little

further down the road. Under Strahilevitz’s social network theory, the mere fact that a

recipient of information is a stranger—namely, someone with whom you do not have

personal, face-to-face, offline experience—excludes the possibility that you can retain a

privacy interest in that datum. What’s more, applying the theory requires making several

arbitrary choices that may not reflect the reality of a particular social network. What may be

an unusual or rich secret to Professor Strahilevitz or a judge may be rather mundane among

a different group of people. The social network theory of privacy would invite a judge to

impose his or her normative interpretations on someone else’s potentially different social

network. This has the unique potential to damage marginalized groups with stigmatized

identities whose network peculiarities might be wildly foreign to a mainstream judiciary, a

problem Professor Strahilevitz did not discuss.

Helen Nissenbaum’s (2010) theory of privacy as contextual integrity also focuses on

the flow of information among social actors. Under this theory, privacy is about “context-

relative informational norms” (p. 129) that “govern the flow of personal information in

distinct social contexts (e.g., education, health care, and politics)” (p. 3). In other words,
privacy is about what is appropriate for different groups to know about us given the nature

of the information and the context in which it is shared. An invasion of privacy, then, “is a

function of several variables, including the nature of the situation, or context; the nature of

the information in relation to that context; the roles of agents receiving information; their

relationships to information subjects; on what terms the information is shared by the

subject; and the terms of further dissemination” (Nissenbaum, 2004, p. 155). As a governing

theory of privacy, contextual integrity is far superior to rights-based theories discussed in

Chapter 2 and the nascent social theories described above. Professor Nissenbaum’s work

retains the core presumption of a social theory—that privacy must account for information

exchange among social actors—and eschews problematic reliance on relationship categories

that could arbitrarily limit our privacy interests. Although Nissenbaum’s work is the latest

and most profound attempt to bring social theory to our understanding of privacy, there

remain gaps in the theory. Nissenbaum’s reliance on the terms of a social interaction

threatens a formalistic misapplication of the theory: not all social interactions have terms, and

including them in a list of contextual factors could elevate formal written agreements over

other, equally important elements. What’s more, asking us to analyze the social context of a

given incident of disclosure neglects to tell us what kind of context we should be looking for.

Nissenbaum’s work, then, raises the question: if privacy is determined in context, what is a

“private context”?

Section 3.2: Breaches of Privacy as Breaches of Trust

I argue that a private context is a trusting context. But the sociologist’s vision of trust

is far broader than the everyday trust we have in our families, loved ones, and friends. It is

an aggregation of particularized faith in others and the predictability of future actions. It

exists among friends as well as among strangers. Trust reflects a behavioral exchange
between two people or among several people or groups. As an exchange—an implied social

deal—trust is expressed whenever there is social interaction. And for any interaction that

involves sharing some piece of information about ourselves, trust and privacy go hand in

hand. That is, we retain privacy rights in contexts of trust.

To see how this is the case, consider the following examples gleaned from scholars’

assessments of what constitutes an invasion of privacy. Let us analyze each example in turn.

Barging into a bathroom and reading a diary are popular examples (Goffman, 1959). Yet that

generally accepted view cannot be based on something inherent to a bathroom—the stall’s

walls or the bathroom door—or a diary—its lock or its owner’s name embossed on the front

cover; otherwise, privacy would be limited to when we are enclosed by walls or within our

property boundaries.31 It would also ignore the invasion, manifested by a sense of being

startled, by someone’s mere presence where we do not expect it, inside a bathroom or out.

Howard White (1951) was correct when he suggested that a simple question can be

an invasion. Before the repeal of the military’s “Don’t Ask, Don’t Tell” policy, military

recruiters were ostensibly prohibited from asking about an applicant’s sexual orientation. We find

such questions invasive even though one person may be proud of his sexual orientation and

have no qualms about revealing a detail that might seem intimate to others. But the crux of

the invasion cannot be inherent in the question itself because intimacy and privacy are

different things for different people.

Revealing someone else’s secrets is another prototypical invasion of privacy (Bates,

1964), but the invasion must be based on more than the mere fact of revelation. Spouses can

reveal their friends’ secrets to each other; indeed, there is anecdotal evidence that many

31 To be sure, privacy based on property has a long history in American law, as evidenced by cases like Olmstead v. United States (1928) and the scholarship of Orin Kerr (2004).

people expect that to happen.32 What’s more, if the existence of privacy rights hinged on the

on-off switch of revelation versus secrecy, there would be precious few private things left in

a world of rampant voluntary and involuntary disclosure.

Some feel that private companies and government agencies that aggregate all

available information about groups of individuals and categorize us based on that analysis

invade personal privacy even if that data never leak and are never used to any effect

(Nissenbaum, 2004; Solove, 2004). But, again, there is nothing inherently private, in the

conventional sense, about that data, which could range from the last book you purchased on

Amazon to your prescription drug history: all of it was given to a third-party intermediary at

some point. As Professor Solove (2004) has argued, analyzing and aggregating information

you already disclosed to a third party could not be considered an invasion of privacy if our

conception of privacy is based on strict secrecy.

Listening to a conversation between two other people at a party, in a restaurant, or

an otherwise public place could amount to a privacy invasion (Diekema, 1992; Nissenbaum,

1998; Allen, 1998). But if the invasion hinges on the mere fact of overhearing, that

conception of privacy transforms everything that passes through one’s audiovisual attention

field into eavesdropping.

Finally, the mere fact that certain governments are moving to put their public

records—real estate matters, deeds, and licensing submissions, for example—online has

struck some stakeholders as particularly invasive. However, as Helen Nissenbaum (2004)

32 During a 1995 episode of “Seinfeld” entitled “The Sponge,” George and his girlfriend Susan get into a fight
about sharing secrets, with Susan arguing that it is assumed secrets will be shared between boyfriends and
girlfriends. George eventually reveals to Susan that Jerry took a woman’s number off an AIDS Walk list. Jerry
later resists sharing another secret with George because he assumes George will share again. Seinfeld, Season 7,
Episode 9, “The Sponge,” http://www.imdb.com/title/tt0697783/.

noted, those records are already public; putting them online is a mere administrative

convenience, not an invasion of privacy on any traditional understanding of the term.

Dan Solove (2002) has argued that privacy problems are not unified by any of the traditional

rights-based theories discussed in Chapter 2; there is no single common denominator

unifying all privacy problems under one schema. Rather, invasions of privacy

have certain “family resemblances” with some overlap in some contexts. I decline to give

up the ghost of coherence so easily. Indeed, each of these examples has a behavioral,

interactional element that affects both parties, and each behavior becomes an invasion of

privacy because it violates the trust expected to exist in the given relationship. The privacy

invasion, that is, stems not from anything special about the information or the space, but

from the erosion of the behaviors expected in the particular interaction. Unannounced entry,

whether into a bathroom or onto another’s blanket on an empty beach, breaches the trust

and discretion implied by occupying a space. Reading a diary or thumbing through a

personal library of John F. Kennedy biographies or taking a cookie from the cooling tray

without asking violates the discretion owed to others and their things and vitiates the trust

that allows us to display our favorite books and step away to multitask while our cookies are

cooling. A person with whom you discuss work, sports, love, mortgage payments, and your

daughters’ dirty diapers33 may be able to ask about a recent sexual dalliance, but the same

question from a casual friend at work34 may not only strike you as invasive, but could be

considered harassment (Weimann, 1983; Scott, 2000). Given the context, it breaches our

expectations of how others in particular social networks will behave. Revealing secrets to a

33 Social network theorists would call this a “strong tie,” or people with whom we share a lot of different
information about a variety of topics. It is also called a “high intensity” relationship.
34 Single-issue friendships at work may be called “weak ties” or “low intensity” relationships.

spouse is not a breach of another’s privacy because we expect strong trust and discretion in

marital relationships. Data gathering, aggregation, categorization, and subsequent disclosure

to third parties could not constitute invasions of privacy under a “secrecy paradigm”

(Solove, 2004, p. 8): they do not take away our control over our information because we

already gave up control when we disclosed details about ourselves to banks, consumer

websites, and governmental agencies. Rather, the process may be perceived as an invasion of

our privacy because the subsequent actions taken with our data violate the expectations we

had of the behavior of third parties in whom we entrusted our data.

The notion that invasions of privacy are based on erosions of expectations of trust

becomes even clearer when we consider acquaintanceship and staring. You expect, or trust,

that an acquaintance will continue to behave like one; you expect that no one—not even

strangers—will stare at you (Goffman, 1972). But staring can happen in public and requires

no personal disclosure other than presence, so it is not clear what traditional understanding

of privacy, other than the amorphous personhood concept, is implicated. Rather, when your

expectation of trust is violated, your privacy is also invaded.

Georg Simmel (1906) argues that acquaintanceship can only work because of the

discretion that we expect from acquaintances; a casual acquaintance shows discretion not

merely by keeping a secret he accidentally overhears, but by restraining himself from ever

getting into a position where he oversteps the boundaries of the acquaintanceship in the first

place. He adds that certain relationships demand that both parties reciprocally refrain from

intruding in the range of things not included in the underlying relationship. Your friends

from church or the gym or an extracurricular affinity group may invade your privacy by

asking any question, regardless of its ranking on your own personal intimacy scale, about

your life outside the church, gym, or that group. This explains why a question can seem
inappropriate in one context and engender no objections in another. Trust and discretion,

Simmel says, circumscribe all types of relationships and allow them to be born, survive, and

endure. This also explains how privacy-as-trust works among different social networks: a

stranger’s mere presence, whether in your home or office, too close to you on the subway, or

anywhere he is not meant to be, may strike us as “creepy”; the imposition of an acquaintance

into a social situation more appropriate for a friend or intimate may also be considered an

invasion of privacy. But that same kind of presence by a friend or intimate would seem

wholly appropriate. The difference is trust: we expect strangers to continue to act like

strangers and not prematurely jump the continuum to intimacy.

This is also evident in Erving Goffman’s (1963; 1972) explanation for why staring

and “intrusive looks” are invasions of privacy (1972, p. 45). Staring, Goffman writes, is not

an ordinary or appropriate social interaction: it discriminates against the target and puts him

“in a class apart” (1963, p. 86). You stare at zoo monkeys, not people, so the invasion of

privacy must either be a threat to the victim’s dignity as an end in himself, per Kant, or a

breach of some implied duty that individuals owe one another. Goffman argues the latter,

calling it a duty of “civil inattention” (p. 85). This concept is just one formulation of social

trust. Civil inattention is a form of polite recognition of strangers, manifesting itself in nods

of acknowledgment alongside a respectful modesty not to intrude where you do not belong.

Staring at a physically injured or deformed bystander is the antithesis of civil inattention. In

this example, the target might consider his injury “a personal matter which [he] would like to

keep private” but the fact that it is visible makes it publicly obvious. This obvious injury

“differs from most other personal matters”—namely, those personal or private things that

go on in the private sphere—because everyone has access to the injury regardless of how

much the target would like to keep it secret (p. 86). We are told not to stare precisely because
the behavior’s abnormality disrupts the normal course of social interaction. It has been

known to cause fear and flight and runs counter to our expectations of how strangers are

supposed to behave (Ellsworth, 1972).

It should be evident, then, that we relate to bystanders and strangers with the same

tools with which we relate to friends and intimates: we develop expectations of their

behavior and expect them to continue to behave according to those expectations. We treat

strangers with discretion, and we expect and trust that others will do the same

for us. Every interaction includes bystanders’ social obligation to protect social actors so that

their interactions can continue. We have a “tactful tendency … to act in a protective way in

order to help the performers save their own show,” Goffman (1959, p. 229) writes, using his

theatrical conceit to analogize to everyday social interaction. This tact is simply another word

for discretion and respect, and we trust that it will be there. We also owe a measure of

“tactful inattention” to neighboring conversations and nearby individuals to guarantee the

“effective privacy” of others, a principle colloquially encapsulated by the phrase “keep one’s

nose out of other people’s business” (p. 230). Privacy invasions, therefore, are not simple

intrusions into personal territory or the disclosure of negative behaviors; rather, they are

socially inappropriate behaviors that violate the trust and discretion we owe others.

Privacy-as-trust is captured in Goffman’s early essay, The Nature of Deference and

Demeanor (1967). Deference conveys respect “to a recipient of this recipient, or of

something of which this recipient is taken as a symbol, extension, or agent” (p. 56). In doing

so, deference certainly imbues others with value and dignity; but that is merely a byproduct

of the overarching purpose of creating a path for interaction. Rules of deference and respect

constitute “rules of conduct which bind the actor and the recipient together” and “are the

bindings of society” (p. 90). In other words, they cue others as to our potential as
interaction partners. This is the role of privacy-as-trust: by locating the basis of privacy in the

interactional context of disclosure, it creates a sense of confidence that allows people to

share.

Section 3.3: The Sociology of Trust

Privacy-as-trust focuses not on the what, when, or where of disclosure. It is based

on how we interact with each other. And it is my theory that we disclose information based on

trust that we have in others. I will discuss the nature and determinants of that trust in more

detail in this section. For all the research and analysis on trust done by social psychologists,

sociologists, economists, and others, there is still some disagreement on how to

conceptualize trust (Nannestad, 2008). The disagreement is regrettable and not entirely

unexpected, but not fatal to my argument. Social scientists may disagree on the margins, but

an extensive review of the literature on trust evidences broad agreement at its core. Most

agree that trust is an expectation regarding the actions and intentions of particular people or

groups of people, whether known or unknown, whether in-group or out-group (Newton and

Zmerli, 2011; Möllering, 2001). This kind of trust is what sociologists call particularized

social trust: it is interpersonal, directed at specific other people or groups, and forms the

basis of person-to-person interaction. It allows us to take risks, cooperate with others, make

decisions despite complexity, and create order in chaos, among so many other everyday

functions (Coleman, 1990; Luhmann, 1979; Misztal, 1996). Trust not only has positive

effects on society,35 it is also essential to all social interaction, is at the heart of how we

35 Trust has been shown to contribute to educational achievement, economic success, and health (Kim et al.,
2006; Subramanian, 2002; Veenstra, 2000). Organizations with a high level of trust are also more efficient and
tend to out-perform competitors. According to several scholars, trust reduces transaction costs in this context
(Bradach and Eccles, 1989; Sako, 1992). Countries with a high level of trust among their citizens also benefit
from efficient local governments, economic growth, and health (Subramanian et al., 2001; Tolbert et al., 1998).

decide to share information about ourselves, and helps explain when we feel our privacy

invaded.

Section 3.3.1: What is Trust?

Particularized social trust is one of three types of sociological trust,36 all of which are

related and interconnected. Particularized trust is the kind of trust implicated when we share

things with or interact with others: A trusts B to do x, where x can be keeping a secret,

doing a job well, or not listening in on a two-way conversation. Sometimes, scholars assume

that this form of trust is entirely based on past knowledge. Russell Hardin (2000) thinks so,

arguing that “for me to trust you, I have to know a fair amount about you” (p. 34). Others

appear to agree (Luhmann, 1979; Offe, 1999; Yamagishi and Yamagishi, 1994). But, as Eric

Uslaner (2000-2001) has noted, past experience is only one basis for trusting particular other

people. Another is a common identity, or faith “in your own kind” (p. 573). This is akin to

Max Weber’s famous analysis of capitalism in America. He thought that common

membership in the Protestant sect in early America allowed people who did not really know

each other to trust that they would be competent contractual partners. Talcott Parsons

(1978) agreed, arguing that trust between persons required common values and common

goals: “People defined as sharing one’s values or concrete goals and in whose competence

and integrity one has confidence come to be thought of as trustworthy individuals or types”

(p. 47). Trust, in this sense, is akin to familiarity, but familiarity can be derived through

previous experience or a common identity.

36 Indeed, trust has to be a sociological concept. “[T]rust must be conceived as a property of collective units
(ongoing dyads, groups, and collectives), not of isolated individuals. … [T]rust is applicable to the relations
among people rather than to their psychological states taken individually” (Lewis and Weigert, 1985, p. 968).

This form of trust is derived from Georg Simmel’s “specific, dynamic, and

situational” experiential trust, the trust we have in each other. It is about creating and

reacting to expectations of others’ behavior and it is at the foundation of almost every daily

social interaction, including our sharing of personal information. Simmel knew this. He said

that society would “disintegrate” without the trust that people have in each other (Frisby ed.,

2011, p. 178). As the sociologist Niklas Luhmann (1979) noted, trust in others is so essential

that an “absence of trust would prevent [man] even from getting up in the morning” (p. 4).

What sociologists mean by this type of experiential trust is a set of favorable

expectations about the behavior of others. Every time we cross an intersection, we do so

with a baseline of trust that the oncoming car is not going to run the red light; every time we

enter a crowded subway car, we trust that the passenger sitting in the corner is not going to

pull out a gun; when we extend a hand for a handshake, we trust it will be met with someone

else’s (clean) hand. Particularized trust makes it possible to deal with uncertainty and

complexity: confronted with myriad stimuli and problems in modern life, we trust experts to

help us navigate them (Parsons, 1978). We put our faith in everything from brand names

to doctors to friends because it is impossible for any one person to have sufficient knowledge

about everything to make entirely rational, fully informed decisions. Knowledge is

costly and hard to come by and, often, decisions and actions have to come before knowledge

even exists (Lewis and Weigert, 1985). As Simmel (1906) implied and Luhmann (1979) stated

more explicitly, trust begins where knowledge ends.

Particularized trust is at least as necessary as generalized, or

“metaphysical,” trust. Where the former is directed at specific other persons or groups of

persons, general social trust is more diffuse, referring to the belief that most people can be

trusted, even if you do not know them and even if they are not like you (Newton and
Zmerli, 2011; Uslaner, 2000-2001). Those who exude general social trust are trusting people.

And, finally, there is political or institutional trust, which focuses our trust onto institutions

or agents and agencies of government (Newton and Zmerli, 2011). These forms of trust are

all related. Some scholars see them on a continuum of personal to abstract, referring to the

focus of trust, or from “thick” to “thin” or “high density” to “low density,” referring to the

ties that sustain trust. Suffice it to say that although the concepts are intimately related, I

focus my analysis on particularized trust for several reasons.

First, particularized trust is everywhere. It is reasonable to assume that everyone

trusts at least someone (Uslaner, 1999). Second, particularized trust needs further study.

Social scientists have famously and extensively studied general and political trust, as well as

their determinants and the effects of their decline in society (Putnam, 2000). Third, I hypothesize

that this form of trust is in play when someone shares personal information: A shares x with

B because A trusts B with x. Therefore, it is essential to speak about privacy alongside

particularized trust. Fourth, and perhaps most importantly, particularized trust is a necessary

condition for the development of social and political trust,37 both of which are

overwhelmingly positive forces in society. Trust is essential in a modern, heterogeneous

society where social, economic, and political actors do not know each other (Newton and

37 This conclusion has been the source of considerable debate in the social science literature and although
universal consensus is elusive, many scholars agree that there is a positive, yet conditional relationship between
particularized trust on the one hand, and general and political trust on the other. For some time, many
sociologists argued that the forms of trust were incompatible: if you only trust people you know or only trust
those who look like you, you will not trust strangers or anyone with whom you do not share experience or
identity (Yamagishi and Yamagishi, 1994; Yamagishi et al., 1998; Newton, 1999; Uslaner, 2002). More
recently, several scholars have shown that particularized trust can promote general trust, or that trusting
individuals you know makes you more trusting and can help you trust strangers and society, in general
(Whiteley, 1999; Glanville and Paxton, 2007). Newton and Zmerli (2011) have argued that particularized trust
and general and political trust are positively correlated in some cases. They argue, and show through empirical
study, that those who exhibit general trust also exhibit particularized trust, but those who trust in particular
other people will not necessarily and always become trusting people, in general. This suggests that particularized
trust is a necessary component of general trust, but not necessarily sufficient.

Zmerli, 2011; Nannestad, 2008). It is the “bedrock of cooperation” and fosters economic

prosperity. It makes democratic institutions run better, more efficiently, and less corruptly

(Nannestad, 2008). It helps connect us to people different from us and encourages sharing

and greater, more meaningful interaction (Uslaner and Conley, 2003). And centering the law

of privacy on protecting and fostering relationships of trust is a significant step forward.

Section 3.3.2: How Does Trust Develop?

So far, I have defined trust—favorable expectations as to the behavior of others—

and connected trust to privacy by showing that invasions of privacy are felt as such because

they breach our expectations of trust. It remains for us to prove how trust develops between

persons and use this evidence to develop clear guidelines for judges and policymakers when

assessing whether disclosures occurred in contexts of trust. Because of the salience of trust

in our decisions to share, trust is what makes an expectation of privacy reasonable: it is

reasonable to share in a given context if that context is trustworthy. And given that a

reasonable expectation of privacy is usually a necessary precondition to winning privacy

protection, the genesis of trust is an essential step to be understood. To do this, I have made

extensive study of the current social science research on the development of trust and

concluded that trust can reasonably develop among intimates and friends as well as among

strangers given the presence of certain social forces, including strong overlapping networks

and a strong, stigmatizing identity. I tease out the evidence for this argument in the

remainder of this chapter and offer an empirical case study in Chapter 4.

Among intimates, trust may emerge over time as the product of an iterative exchange

(Blau, 1964; Rempel, 1985); this type of trust is relatively simple to understand and generally

considered reasonable. Therefore, I will spend little time proving the reasonableness of trust

based on experience. But social scientists have found that trust among strangers can be just
as strong and lasting as trust among intimates, even without the option of a repeated game.

Trust among strangers emerges from three social bases—sharing a stigmatizing identity,

sharing trustworthy friends, and indicia of expertise, all gleaned from the totality of the

circumstances. When these social elements are part of the context of a sharing incident

among relative strangers, that context should be considered trustworthy and, thus, a

reasonable place for sharing.

Traditionally, social scientists argued that trust developed rationally over time as part

of an ongoing process of engagement with another: if A interacts with B over time and B

usually does x during those interactions, A is in a better position to predict that B will act

similarly the next time they interact. The more previous interactions, the more data points A

has on which to base his trust. This prediction process is based on past behavior and

assumes the trustor’s rationality as a predictor (Doney, 1998; Good, 1988). Given those

assumptions, it seems relatively easy to trust people with whom we interact often (Macy and

Skvoretz, 1998).

But trust also develops among strangers, none of whom have the benefit of repeated

interaction to make fully informed and completely rational decisions about others. In fact, a

decision to trust is never wholly rational; it is a probability determination: “trust begins

where knowledge ends.” What’s more, trust not only develops earlier than the probability

model would suggest; in certain circumstances, trust is also strong early on, something that

would seem impossible under a probability approach to trust (McKnight, 1998). Sometimes,

that early trust among strangers is the result of a cue of expertise, a medical or law degree,

for example (Doney, 1998). But trust among lay strangers cannot be based on expertise or

repeated interaction, and yet, such trust is quite common.

I argue that reasonable trust among strangers emerges when one of two things

happens: (1) when strangers share a stigmatizing social identity, or (2) when they share strong

ties in an overlapping network. In a sense, we transfer to a stranger the trust we have in

others who are very similar to us or known to be trustworthy, or we use the stranger’s friends as a cue to his

trustworthiness. Sociologists call this a transference process, whereby we take information

about a known entity and extend it to an unknown entity (Doney, 1998; Milliman and

Fugate, 1988; Strub and Priest, 1976). This explains why trust via accreditation works: we

transfer the trust we have in a degree from a good law school, which we know, to one of its

graduates, whom we do not. We are willing to trust doctors we have never met even before

they give us attentive care, exhibit a friendly bedside manner, and show deep knowledge of

what ails us because we trust their expertise, as embodied by the degrees they hang on their

walls. Transference can also work among persons. The sociologist Mark Granovetter (1985)

has shown that economic actors transfer trust to an unknown party based on how embedded

the new person is in a familiar and trusted social network. Hence, networking is important to

getting ahead in any industry and recommendation letters from senior, well-regarded, or

renowned colleagues are often most effective. This is the theory of strong overlapping

networks: someone will do business with you, hire you as an employee, trade with you, or

enter into a contract with you not only if you know a lot of the same people, but if you know

a lot of the right people, the trustworthy people, the parties with whom others have a long,

positive history; it is not just how many people you know, it’s whom you know.

The same is true outside the economic context. The Pew Internet and American Life

Project found that of those teenagers who use online social networks and have online

“friends” that they have never met off-line, about 70% of those “friends” had more than

one mutual friend in common (Lenhart and Madden, 2007). Although Pew did not
distinguish between types of mutual friends, the survey found that this was among the

strongest factors associated with “friending” strangers online. More research is needed.

The other social factor that creates trust among strangers is sharing a salient in-group

identity. But such trust transference is not simply a case of privileging familiarity, at best, or

discrimination, at worst. Rather, sharing an identity with a group that may face

discrimination or has a long history of fighting for equal rights is a proxy for one of the

greatest sources of trust among persons: sharing values. At the outset, sharing an in-group

identity is an easy shorthand for common values and, therefore, is a reasonable basis for

trust among strangers. Social scientists call transferring known in-group trust to an unknown

member of that group category-driven processing or category-based trust (Williams, 2001;

Anheier and Kendall, 2002). But I argue that it cannot just be any group and any identity;

trust is transferred when a stranger is a member of an in-group, the identity of which is

defining or important for the trustor. For example, we do not see greater trust among men,

perhaps because the identity of manhood is not a salient in-group identity

(Doney, 1998). More likely, the status of being a man is not an adequate cue that a male

stranger shares your values. Trust forms and is maintained with persons with similar goals

and values and a perceived interest in maintaining the trusting relationship (Six, 2010; Welch,

2007). But it is sharing values you find most important that breeds trust (Jones and George,

1998). For example, members of the LGBT community are, naturally, more likely to support

the freedom to marry for gays and lesbians than any other group. Therefore, sharing an in-

group identity that constitutes an important part of a trustor’s persona operates as a cue that

the trustee shares values important to that group and will continue to behave in accordance

with those values.

The social science literature, then, suggests that reasonable trust derives from several

different sources: experience, strong overlapping networks, expertise, and identity, gleaned

from the entirety of the social context, are the strongest determinants of social trust.

Consider the following illustrative examples of social trust. Regular commuters from New

Jersey to New York expect the 6:34 AM train to Penn Station to arrive at 6:34 AM because it

has arrived at 6:34 AM on 15 of the last 17 days. That is an example of trust based on

experience. A new commuter might not be entirely sure that the 6:34 AM train actually goes

to Penn Station, but she sees her friends and colleagues get on the train, so she gets on, as

well. That is an example of trust based on strong overlapping networks: you transfer the

trust you have in those in your social network to others, using it to help solve problems. A new

commuter might not know from experience how much the ticket costs, so she asks the train

conductor. That is an example of trust based on expertise: a job or degree or other indicia of

expertise automatically elevates a stranger to a position worthy of trust. A new commuter

might also be wary about taking a train so early because she may not know if it’s safe. But

she sees ten other women getting on the train and decides it must be safe enough. That is an

example of trust based on identity: you transfer to others and to situations the trust you have

in people who are like you. New and regular commuters may use all of these cues at once or

any combination of them alongside other indicia of the trustworthiness of the 6:34. For

example, if it is indeed going from New Jersey to New York, it would be facing east. If there

are a lot of other people waiting dressed in business casual attire, whereas no one is waiting

on the other side of the tracks, you can deduce you are on the correct side to head into the

office in New York. Suffice it to say, trust is about context, and its determinants range from

personal experience to transference from trusted sources to unspoken social cues of

trustworthiness. It is trust that defines our expectations of others’ behavior.


What turns these contexts of trust into contexts of privacy is revelation of personal

information in circumstances that give rise to social obligations. Obviously, not all trusting

situations give rise to privacy interests: that I notice cues of trustworthiness among those

who take the 6:34 AM train to New York does not necessarily mean that all subsequent

interactions with them are protected by privacy tort law. Rather, privacy interests arise in

context from the totality of the circumstances. To date, however, American law has ignored

the importance of trusting relationships and privileged individual volitional acts in what

should be a highly social and contextual analysis.

There are several reasons why the aforementioned factors—salient in-group identity,

strong overlapping networks, and indicia of expertise—are the proper bases for establishing

when trust among strangers is reasonable and, therefore, when the privacy of those contexts

should be protected by society. First, it represents the best social science research into

human behavior. It reflects how we actually behave and helps determine when we share our

personal information with others. Legal rules that reflect and foster positive social behavior

have the best chance at success and making society better. Second, these are reasonable

bases for trust. It is hard to argue that trusting based on identity, strong overlapping

networks, and expertise is reckless. Third, and most importantly for the law of privacy, that

most people arguably trust based on these factors suggests that society should be willing to

recognize expectations of trust and privacy based on these factors.

Section 3.4: Benefits of Privacy-As-Trust

The theory of privacy-as-trust has several advantages over conventional theories of

privacy. First, privacy-as-trust is a pragmatic, bottom-up approach that reflects how social

actors behave in everyday situations and how we understand what it means to have our

privacy invaded. Analyses of human behavior are also better bases for policy: public opinion
polls can reflect mere whims, whereas the point of the law is to protect and encourage

socially beneficial behavior (Cotterrell, 2005). A coherent doctrine based on human behavior

is, therefore, a likely more effective foundation for legal policy.

It is also more determinate and functional for judges and policymakers: it asks

judges to determine where trust exists and then, where found, to protect it via an operative

tort or constitutional tool. Too often, judges have been forced to approach privacy litigation

without a clear understanding of the values at stake and the purposes and goals of privacy.

Alan Westin (1967) recognized this problem long ago: “Few values so fundamental to society as

privacy have been left so undefined in social theory.” Robert Post (2001) called privacy “a

value so complex, so entangled in competing and contradictory dimensions, so engorged

with various and distinct meanings,” that it is no surprise when scholars and judges give up

on bringing coherence to it (p. 958). Myriad other scholars have voiced the same lament

(Gerety, 1977; McCarthy, 1999; Gross, 1967; Thomson, 1984). This is especially problematic

when a privacy right comes in conflict with another right—the right to speak, for example—

whose contours and goals are clearer: privacy will usually lose. In this thesis, I have provided

clear guidelines for judges and policymakers to use when assessing disclosures and contexts

of trust. And although some may argue that my proposal would turn judges into armchair

sociologists, such criticism misses the fact that we already do this. Judges are already tasked

with determining when expectations of privacy are socially “reasonable.” Where we have

failed is in providing any helpful and practical guidance to help them consistently, honestly,

and justly along that journey.

Second, privacy-as-trust protects and encourages social interaction with intimates

and strangers alike. And this is a good thing. It protects sharing and intimacy in close-knit

relationships by respecting confidentiality. It also appreciates the value in protecting


interaction among strangers, which makes up the bulk of our everyday interactions. Many of

our most enduring relationships begin as encounters among strangers and most of our daily

interactions—market transactions, for example—are ad hoc and rarely move beyond the

initial stage of stranger trust. And yet, though at bottom they are still examples

of two strangers interacting, they are essential to a functioning society.

On the micro level, social interaction with strangers can help the unemployed find

jobs and expand opportunities for love and successful and enduring affiliation (Adams, 2011;

Hogan, 2011; Ho and Weigelt, 2005). On a macro level, it encourages tolerance; it socializes

young people to the wider world and educates in areas that classroom study cannot (Chang

et al. eds., 2003; Milem and Hakuta, 2000; Hurtado, 1999; Edley, 1998). And it imbues the

concept of a marketplace of ideas with real meaning. Cass Sunstein (2001) has argued that

Internet and digital technologies, in general, and aggregators and news feeds, in particular,

may undermine democracy because they isolate citizens, allowing them to exist in an echo

chamber with those who agree with them and apart from those with different ideas. He

referred to this condition as “cyberbalkanization.” Failure to protect our interactions with

strangers has the same effect. If we are truly interested in creating a diverse pool of content

from which to learn and grow, the law should encourage us to expose our thoughts and

opinions to people who may have radically different ideas than our own. Any theory of

privacy that disincentivizes some measure of sharing and interaction with strangers, then,

cripples the very core of a democratic society.

Third, privacy-as-trust will be more effective at encouraging and protecting online

social interaction. As Frank Pasquale (2014a) and Dan Solove (2004) have shown, the

structural and technical demands of internet technologies have transferred significant

amounts of our personal data into the hands of third parties. And sociological and economic
studies suggest that online social networking and other digital platforms are unique places

for sharing intimate details among strangers (Madden, 2013). Traditional theories of privacy

offer no adequate protection for that behavior because they extinguish privacy rights upon

publicity. But online platforms encourage the same perceptions of discretion and trust

among strangers as they do among intimates.

Fourth, privacy-as-trust is more familiar than it sounds. Robert Post’s (1989) analysis

of the purposes and effects of the tort of intrusion upon seclusion supports my argument.

The tort, which protects against any form of invasion of “solitude or seclusion,” would

seem, on its face, to reflect the common understanding of privacy as separation and

exclusion. Post argues, however, that the tort is meant to “safeguard[] rules of civility that …

rest[] not upon a perceived opposition between persons and social life, but rather upon their

interdependence” (p. 959). Although he never articulated a social theory per se, his analysis

accepts the role trust plays in social life and reflects how, as a governing theory, privacy-as-

trust would implement the tort.

Post uses the narrative of the New Hampshire case, Hamberger v. Eastman (1964), to

make his argument. In that case, a landlord installed an eavesdropping device in a couple’s

bedroom, the revelation of which greatly distressed, humiliated, and offended the victims (p.

240). The plaintiffs won not because they proved that they felt severely injured. Rather, the

installation of the device was itself “offensive to any person of ordinary sensibilities” (p.

242). This makes the tort of intrusion nearly unique among torts. Successfully litigating most

tort claims usually requires the plaintiff to prove that the defendant’s underlying action

actually caused some particularized harm or damage (Post, 1989). Claims of negligence, for

example, have to show that the defendant’s negligence in driving a car or operating a crane

caused some demonstrable injury. But, as Post notes, the tort of intrusion is different: the
offense is the action per se; the action does not need attendant negative effects to become

offensive. This turns the plaintiff from the recipient of personal injury, in the case of most

torts, into the victim of a breach of a social norm that we impliedly owe one another. Post

would say that norm is “civility”; I would say trust. The tort of intrusion “focuses the law

not on actual injury …, but rather on the protection of [the individual as] constituted by full

observance of the relevant rules of deference and demeanor,” Post writes, channeling

Goffman (p. 963).

An articulable theory of privacy was not Post’s concern. But his central insight is a

natural byproduct of privacy-as-trust. He argues that the tort of intrusion is meant to

safeguard the social norms that permit social interaction, what he calls “civility norms.” The

traditional theories of privacy discussed in Chapter 2 would have the tort focus on the

protection of an individual’s right to be in control of information dissemination and his right

to be let alone or take a respite from society. Post reorients the tort around its social

purposes. Post’s argument runs along the same path as privacy-as-trust because it reflects the

role of trust and discretion in sharing and disclosure. And both offer workable solutions for

judges to address privacy law problems.

Finally, trust is already a central part of other areas of law, like the contract law

covenant of good faith and fair dealing; its integration into the law of privacy is, therefore,

not only reasonable but also rather unremarkable. The covenant, which exists behind every

contract, codifies the trust we have that our contracting partners will fulfill their obligations

and not prevent us from receiving the benefits due us under the contract (Restatement (Second)

of Contracts, § 205; Beatson & Friedman, 1997). No contracting party can predict every

eventuality, so, as ethnomethodologists would argue, every contract has an “et cetera

assumption”: unspoken yet generally understood assumptions about interaction and future
contingent actions (Garfinkel, 1964). Requirements of good faith and fair dealing reflect this

“et cetera assumption.” The sociologist Randall Collins (1982) argued that this and other

noncontractual elements of contracts and interaction are largely based on trust. After all, if

two parties never trusted each other, it is hard to imagine a contract ever being completed

between them; they would either give up or attempt to reduce to writing every conceivable

contingency, making the project unworkable. Trust is “social life[’s] … fundamental basis”

for this precise reason (Durkheim, 1893/1997, p. 162). Alongside the practical, empirical,

and theoretical benefits of the privacy-as-trust approach, the relative familiarity the law has

with trust suggests that my argument for refocusing privacy law will be functional, as well.

CHAPTER FOUR:
The Data
One of the most important strengths of privacy-as-trust is that it not only represents

a coming together of legal and social theory, but it also reflects real, observable behavior. As

discussed above, a review of the social science literature on trust suggests that individuals

reasonably trust others based on cues of experience, strong overlapping networks, identity,

and expertise gleaned from the entirety of the social context. But the current literature is as

yet silent on any study linking general or specific incidents of sharing personal information

with the presence of these social forces. This thesis aims to begin to fill that wide gap with a

limited initial case study of online sharing, the goal of which is to determine if there is any

correlation between contexts of particularized social trust and sharing personal information

with expectations of privacy. I also aim to provide modest predictions of social determinants

of sharing in certain contexts. To that end, I researched, aggregated, and analyzed raw data

made available by the Pew Research Center and surveyed a random sample of Facebook

users. I conclude that among participants in online social networks like Facebook, sharing

increases when trust increases, that individuals use social cues of overlapping social networks to

identify trustworthy strangers, and that at least some populations with minority identities

important to their personae seek out that identity as a trustworthiness cue. Although these

conclusions are necessarily limited to the admittedly unique population of online social

network users, the pervasiveness of platforms like Facebook makes this population an

important and growing demographic. In this Chapter, I will describe the survey, outline the

methods and data, report the results, and discuss several preliminary conclusions and steps

for future research. I conclude this Chapter by noting several limitations to the data.

Section 4.1: Questions and Hypotheses

The primary empirical question posed in this thesis is whether there is a relationship

between trust and sharing: Do we share personal information in contexts of trust? And,

more specifically, is the trustworthiness of an individual’s interaction partner a more

significant motivating factor in sharing personal information than other factors? These are

difficult questions to answer, but we can begin to build on current research to get closer to

conclusions from which we can extract tentative policy implications.

To answer these questions, we need additional information. We need to know what

types of information we share and we need a platform on which to observe sharing

behavior. We need to develop a proxy for the impact of particularized social trust on our

sharing behavior. And we need to be able to situate new research in the still-developing

literature. The most efficient way of answering these questions is to limit the world of

observable sharing behavior to one case study—namely, Facebook.

As is evident from Chapter 3, I hypothesize that there is a positive correlation

between trust and sharing. I suggest that individuals not only feel more willing to share but, in

fact, do share more in contexts of trust. I also hypothesize that individuals are more willing to share

information with those they have never met as long as there are cues of trustworthiness,

which may include factors like strong overlapping networks and sharing a stigmatizing or

important identity. These are modest hypotheses that only begin to scratch the surface of

work that must be done to understand our behavior online. However, even modest

conclusions may be able to help policymakers and the legal academy develop policies that

better reflect, and help mold, real social behavior.

Section 4.2: What We Know

To identify what we know about Facebook and its users, I studied social sharing

research and, in particular, reviewed the work done by the Pew Research Center (Pew). Pew

is a nonpartisan think tank that conducts public opinion polling, demographic research,

media content analysis, and other empirical social science research for the purposes of

informing the public about general trends in American social life. Researchers at Pew have

done considerable work over the last five years about a wide range of online behavior, from

political engagement to use of privacy settings on online social networks. This research

serves as an effective backdrop and context to the additional work presented in this thesis.

Facebook is a social networking website that began at Harvard in 2004. It was

originally available only to students at Harvard, then was expanded to Ivy League schools,

and, eventually, to everyone thirteen years old and older with a valid e-mail address. It now

has more than 1.23 billion active users, 945 million mobile users, and 757 million daily users

worldwide (Protalinski, 2015). It is by far the largest social networking platform on the

internet. It allows members to share thoughts, notes, pictures, employment information,

likes and dislikes, place of birth, age, eye color, and whatever else you can imagine (and want)

to share. According to Pew, 71% of online adults use Facebook, up from 67% in 2012,

compared to 28% who use LinkedIn, the second most popular online social network

(Duggan et al., 2014, p. 2). Among all American adults ages 18 and over, 58% use Facebook

(p. 4). It is, therefore, a growing microcosm of interpersonal behavior.

Facebook’s home screen is a “timeline” or “feed” of posts from your “friends,” or

those within your network. It allows you to see the posts that others have made visible to

you and that you would like to see; it also includes those posts that your friends have “liked,”

a feature that allows Facebook users to click an icon and express favor or agreement. In this
way, your feed lets you see posts from those outside your network. You can “follow” certain

friends and “unfollow” others, or keep others in your wider list of Facebook “friends” off

your timeline. Therefore, both sides to sharing—the sharer and the recipient—help

determine who sees what, but you do not have full control over what you see. Facebook’s

algorithm determines the content and order of the posts. Facebook also allows you to see

the ongoing interactions about those posts in real time. You can post photographs and

“tag,” or identify, yourself and your friends in those pictures. Facebook users can restrict

who sees their photos and posts by distinguishing between different network groups and

gathering persons into those different groups.

Facebook does not provide its user data to researchers, but as the largest social

network and sharing platform, it nevertheless offers researchers a chance to study user

behavior by asking Facebook users about what they share and observing trends in their

behavior. All that is required is access to willing users.

Although neither Pew nor any other researcher or organization has published

research on what kinds of information various different types of groups tend to share on

Facebook, Pew has done considerable work to highlight what teenagers aged 12-17 share on

the platform. And although many demographic clusters use Facebook regularly and may do

so with different goals and intentions, the teenage-focused Pew survey data provide a

baseline of material on sharing. According to Pew reports, 92% of teens share their real

names, 91% share photos of themselves in various contexts, 83% share their birthdates, 62%

share their relationship status, 53% share their email addresses, and 20% post their cell

phone numbers (Madden, et al., 2013, p. 3). Pew’s research also shows that 16% of teens

automatically include locations in their posts (p. 8).

Furthermore, more than 33% of teenagers on Facebook are “friends” with people

they have never met offline. That means that even though 60% of teenagers on Facebook

restrict access to their profiles to only “friends” or “followers,” over 1/3 of those users are

sharing information with persons who could be called strangers in the traditional sense. Pew

has also found that 14% of users share their profiles and feeds publicly and another 25%

share with “friends of friends,” i.e., the social networks of their social network (p. 6).

Therefore, approximately 60% of teenage Facebook users share information with people

they have never met offline.
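The approximately 60% figure can be reconstructed from the component percentages Pew reports, though Pew does not publish the overlap between its categories. The arithmetic below is therefore an illustrative back-of-the-envelope reconstruction, not Pew's own calculation; it assumes the roughly one-third rate of "friending" strangers applies to restricted-profile users.

```python
# Back-of-the-envelope reconstruction of the ~60% figure from the Pew
# components cited above. The application of the 33% "friends with
# strangers" rate to restricted-profile users is my assumption for
# illustration; Pew does not publish the overlap.

public = 0.14               # share profiles/feeds publicly
friends_of_friends = 0.25   # share with "friends of friends"
restricted = 1 - public - friends_of_friends  # ~61% friends-only
friend_strangers = 0.33     # "friends" with people never met offline

share_beyond_offline = public + friends_of_friends + restricted * friend_strangers
# ~0.59, i.e., roughly 60% of teenage Facebook users
```

Under these assumptions the components sum to about 0.59, consistent with the approximately 60% reported in the text.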

Male representation on Facebook is generally lower than female representation.

According to Pew, men tend to make up approximately 40% to 42% of the samples of

Facebook users its researchers have interviewed or observed, which appears to be in line

with Facebook’s own published statistics of its worldwide user base (Hampton et al., 2012;

Fitzgerald, 2012). As of January 2014, approximately 5% of Facebook users were between

the ages of 13 and 17, 23% were 18-24 years old, 25% were between 25 and 34, 31% were

between 35 and 54, and 16% were ages 55 and older (Neal, 2014).

We also know that Facebook users tend to be more trusting of others as compared

to other internet users and non-internet users. Specifically, Pew has found that, controlling

for other demographic factors, the typical internet user is more than twice as likely as others

to feel that people can be trusted (Hampton et al., 2011, p. 4). Facebook users are even more

likely to be trusting. According to Pew, a Facebook user who uses the site multiple times

during a single day is 43% more likely than other internet users and more than three times as

likely as non-internet users to feel that most people can be trusted (p. 4, 32-33). However,

there has never been a study of perceived levels of trust and the extent or degree of sharing.

Section 4.3: Research Design

What we do not know is why Facebook users share and when they feel they can

share personal or intimate information. There is, to my knowledge, no empirical study of the

social determinants of sharing on Facebook. This could be for several reasons. First,

Facebook has the best access to this kind of information and, unlike Twitter, it has made a

strategic decision not to provide its data to researchers. Although this necessarily limits an

outsider’s ability to verify survey data or gather sufficiently large data sets, Facebook is still

an observable platform of social interaction. Researchers can survey Facebook users and, as

Pew has done, negotiate to work with Facebook to observe user behavior on a consent basis.

Given time, resource, and manpower limitations, I was not able to engage in the latter level

of observation for this thesis.

Second, any study of motivations is necessarily burdened by respondents’ reporting

biases. In other words, what users state as their rationales for behavior, even on anonymous

surveys, may not accurately reflect their true motivations. The only way to avoid this bias

and map motivations is to provide exogenous stimuli to an ongoing interaction and observe

how participants respond. Indeed, Facebook admitted to doing something much like this last

year (Booth, 2014). Facebook can do this because it has access to all its users and can

manipulate the platform in real time. The only way for outside researchers to accomplish

similar research goals is to construct an experimental setting to approximate actual users on

the real Facebook. That was neither feasible nor guaranteed to elicit the same quality of

responses as in the Facebook setting: users may behave differently when they know they are

part of an experiment.

To fill a gap in the understanding of trust and sharing on Facebook, I constructed a

survey of Facebook users and distributed it online. The survey, which is reproduced in
Appendix A, took 7-10 minutes to complete and was completely anonymous. Part I asked

for basic demographic data: respondents selected age categories, gender, education level, and

sexual orientation. I asked these questions to situate my work in a larger context, provide

data for potential future research, and to verify the randomness and accuracy of the sample

against known statistics of Facebook users. Respondents were then asked to select from a

list all the social networking websites on which they maintain active profiles, where “active”

referred to any website that respondents viewed or updated regularly. Ten of the most

popular social networks were listed; the eleventh option was an “other” category.

Part II asked the standard trust question: Generally speaking, would you say that most people

can be trusted or that you can’t be too careful in dealing with strangers? This question was asked to

obtain baseline information on respondents’ general feelings about trust and trust in others.

Furthermore, since this question has generally been asked over time and in other contexts,

respondents’ answers can be compared to previous data to suggest changes to trust over

time. Although such an analysis was never the focus of this thesis, asking these questions

may prove useful for future research.

Part III and Part IV asked users a series of questions about what type of information

they share on Facebook. Twenty-five different items were selected based on Pew’s research

and my own observation of sharing on Facebook, and respondents were asked Yes/No

questions about whether they shared the given information. The questions, all of which are

available in Appendix A, ranged from “Do you share jokes or funny videos?” to “Do you share your

personal email address?” When coding the responses for analysis, I created a “Total Sharing”

column that aggregated all “Yes” answers and a separate “Total Intimate Sharing” column

that aggregated all “Yes” answers for items that could be placed higher on an Intimacy

Scale. Relative position on the Scale itself is irrelevant; for the purposes of this thesis, it does
not matter whether “personal telephone number” is more or less intimate than “information

about illnesses or medication.” Both qualified as “intimate.” I am only concerned with

intimate information relative to non-intimate: telephone number versus news articles or

“BuzzFeed listicles.” The questions constituting the “Total Intimate Sharing” data set were

selected based on the social science literature discussed in Chapters 2 and 3 and a separate

survey in which 66 New York Law School students were asked to select “intimate or

personal” or “neither intimate nor personal” for each item on the Survey. The modes were

used to select the top 12 items; I used personal judgment to break ties at the margin. Part IV

allowed respondents to specify if they share particular types of information with some

subnetworks and not others. These data will be used in future research but not in this thesis.
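The coding of the two dependent variables can be sketched as follows. This is an illustration only: the item names and the example respondent are hypothetical stand-ins for the survey's twenty-five items (Appendix A), and the intimate list is abbreviated to three entries rather than the twelve actually used.

```python
# Sketch of the dependent-variable coding described above. Item names and the
# example respondent are hypothetical; the survey's actual items appear in
# Appendix A, and the real intimate list contained 12 items, not 3.

INTIMATE_ITEMS = {"phone_number", "email_address", "illness_or_medication"}

def code_respondent(answers):
    """Aggregate one respondent's Yes/No answers into
    (Total Sharing, Total Intimate Sharing)."""
    total = sum(1 for a in answers.values() if a == "Yes")
    intimate = sum(1 for item, a in answers.items()
                   if a == "Yes" and item in INTIMATE_ITEMS)
    return total, intimate

respondent = {
    "jokes_or_videos": "Yes",         # non-intimate item, shared
    "phone_number": "Yes",            # intimate item, shared
    "email_address": "No",            # intimate item, withheld
    "illness_or_medication": "No",
}
print(code_respondent(respondent))    # (2, 1)
```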

Finally, Part V attempted to approximate the impact of trust on Facebook sharing by

asking several questions about whether certain contextual factors would make respondents

more or less likely to accept a “friend request” from a stranger. As noted above,

approximately 60% of Facebook users share some personal information with individuals

they have never met offline. That willingness to share information with strangers has to be

based on some determinants that are observable in context, determinants that allow users to

distinguish between strangers whose “friend requests” they will accept and those whose they

will not. Although it would be impossible to draw final conclusions on causality at this point, a

correlation between acceptance decisions and factors that model particularized social trust

would lend some credibility to my hypothesis.

The survey was distributed to a wide network of Facebook users through the cloud-

based Google Forms. Google Forms allows individuals to enter information online. After

clicking “submit,” all responses are automatically entered into a corresponding Google Sheet

(a spreadsheet). No human data entry is required, which protects against entry errors and
fabricated results. What’s more, the tool allows for surveys to be conducted cheaply and

efficiently. The survey was shared throughout my social network on Facebook over the

course of several months and emailed to 28 friends and colleagues. Colleagues at Columbia

and New York Law School distributed the survey link to their own networks; the survey was

also sent to a total of 119 of my former students at New York Law School. Members of my

family shared and emailed the survey to their networks, and the link was distributed to other

diverse networks: an email list of Equinox group fitness instructors nationwide, a contact list

for several nonprofit organizations, two bar association listservs, and various other networks

of which I am not aware.

Section 4.4: Describing the Data

A total of 386 survey responses were recorded after approximately 12 responses

were excluded for incomplete answers. Demographic data suggest that the pool of survey

respondents is similar to the broader Facebook community, with a few notable exceptions.

• In my sample, Facebook users in their late 20s and early 30s are overrepresented. Generally,

the third quintile of users by age accounts for approximately 25% of users;

respondents between the ages of 26 and 35 made up 36% of the sample.38 Other

categories, however, are roughly similar, as Table 4.4.1 illustrates, with somewhat

lower representation among 18-24 year olds and those 55 and older. The

discrepancy may be due to a bias toward the demographics of my personal

network and the one-year category skew, but this overrepresentation is not fatal.

38It is also worth noting that due to an error on my part in crafting the survey, several age categories are shifted
by one year relative to Facebook’s reported data. For example, Facebook reports demographic data for a
quintile ages 18-24 (Neal, 2014). My survey separated out that group by ages 18-25. This difference is not fatal
to any analysis because the roughly similar categories may still allow for adequate comparisons.

No one category is so underrepresented as to prevent overall aggregate analysis

of Facebook users.

Table 4.4.1:
Comparison of Sample to Facebook Population, Generally

Sample Age Quintile   % of Sample   Facebook Age Quintile   % of Facebook Population
< 18                  4             < 18                    5
18-25                 19            18-24                   23
26-35                 36            25-34                   25
36-55                 30            35-54                   31
> 55                  12            ≥ 55                    16

• The population is relatively, though not overly, networked. All members of the sample are

connected to the internet, a precondition of participating in an online survey.

Approximately 71% of the sample maintains active profiles on 1, 2, or 3 social

networking sites. All respondents, by definition, have a Facebook profile; the

next most popular platforms were LinkedIn and Twitter, which comports with

Pew findings (Duggan et al., 2014). This bias toward networked individuals may

skew results toward a population not representative of the general population of

individuals over the age of 13, but that bias is of no moment for conclusions

limited to the online community.

• There are more women than men in the sample. Women make up approximately 60% of

the sample, which is on par with the wider Facebook community given that

American women are far more likely to use Facebook than men (Duggan, 2013;

Hampton et al., 2012; Fitzgerald, 2012). The overrepresentation of women

relative to the overall population may give us pause, but I restrict any conclusions

from this data to the online social networking community. My sample is a good

fit for that community.

• Members of the lesbian, gay, and bisexual (LGB) community seem overrepresented, but the

numbers are roughly in line with independent research. Members of the LGB community

make up approximately 19% of the sample. That may seem disproportionately

high, especially since the best estimates suggest that LGB individuals constitute

roughly 3.5-4% of the population (Gates and Newport, 2013), but the number

makes more sense given the significant presence of LGB individuals on online

social networks. According to Pew, 80% of LGB survey respondents have used

Facebook or Twitter, compared with 58% of the general public (Taylor, 2013).

Pew attributes this imbalance to the “fact that as a group LGB adults are younger

than the general public, and young adults are much more likely than older adults

to use social networking sites” (p. 14). That is indeed plausible, but LGB youth

are also more likely to use social networking sites because the physical

communities in their immediate geographic areas are more likely to be

unsupportive, distant, or even hostile than for all other populations (Waldman,

2012). When young LGBT adults are compared with all young adults, the share

using Facebook and similar websites is almost identical (89% of LGBT adults

ages 18 to 29 vs. 90% of all adults ages 18 to 29) (Taylor, 2013, p. 14). Therefore,

although the LGB are overrepresented in the sample relative to the general

population, their numbers in the sample are not significantly dissimilar with the

general online population of social network users.

• The sample is highly educated. More than 75% of the sample has attended at least

some college, which exceeds the general population rate, but is not far above the

general Facebook population (Hampton et al., 2011).

• The sample is more trusting than the general population and somewhat more trusting than the

general Facebook population. According to Pew, approximately 45% of social

network users state that “most people can be trusted” (Hampton et al., 2011, p.

33). In my sample, 51% of respondents answered the standard trust question in

the affirmative. There may be several explanations for this discrepancy, the most

likely of which is that because the sample is highly educated, which is correlated

with both wealth and potential future earning, the sample tends to be more

optimistic about the future than the general population. This could help increase

the trusting segment of the population because, as Eric Uslaner (2002; 2014) has

shown, economic equality and expectation of future economic success are closely

tied to perceptions of general trust, which is what the standard trust question

tests. The additional trust levels in the sample may give some pause given this

thesis’s focus on trust. However, even if we considered the difference significant,

I am limiting my concern to levels of perceived trust in Facebook and in others

and making conclusions relative to the particular online community of social

network users. Further research and, perhaps, a larger sample may permit

broader conclusions about levels of trust in future work.

Section 4.5: Results and Discussion

This section will present an analysis of data collected from the survey. Once the

collection period ended, responses, which were automatically recorded in a Google Sheet,

were downloaded into an Excel spreadsheet. Certain baseline analyses were conducted
directly in Excel. Responses were coded and imported into SPSS, a statistical software

program, and crosstabs were run to examine relationships between data. Frequencies were

run to establish data details discussed above; then, correlations were run to establish some

relationships between the data. It is beyond the scope of this thesis to analyze and present all

lessons from the data; future research will permit more extensive work. For now, I restrict

my analysis to a limited number of empirical questions.
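The frequencies step described above amounts to tabulating coded responses; a minimal sketch, where the category labels are hypothetical stand-ins for a coded survey column:

```python
from collections import Counter

# Sketch of the frequencies step: tabulating how often each coded response
# value occurs, and converting counts to percentages. The labels below are
# hypothetical stand-ins for a coded survey column.
age_categories = ["18-25", "26-35", "26-35", "36-55", "26-35", "> 55"]
freq = Counter(age_categories)
n = len(age_categories)
pct = {cat: 100 * count / n for cat, count in freq.items()}
print(freq["26-35"], round(pct["26-35"], 1))  # 3 50.0
```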

1. Question: What factors, if any, are correlated with extent of sharing information and a

willingness to share sometimes highly personal information on Facebook?

Hypothesis: Trust is the strongest factor influencing a Facebook user’s willingness to share

personal information on the platform.

As I noted in the Introduction, there is an intuitive connection between trust and

sharing personal information. Technology companies like Apple, Uber, and Facebook are

beginning to understand this connection; Facebook, at least, has recently begun asking its

members survey questions to tease out more depth to the connection. Beyond our intuition,

and because Facebook does not make its data available outside the company, there is little

hard social science evidence connecting trust and sharing. To that end, I would like to begin

to fill that gap with analysis from my survey of Facebook users. If it is true that trust helps

determine an individual’s willingness to share sometimes personal information, then privacy-

as-trust may, at least with respect to some participants in online social networks, have

the added heft of reflecting our behavior.

I first ran simple correlations to establish directions for further analysis. The central

empirical question of this thesis aims to establish determinants of a willingness to share

information on Facebook. Therefore, I used Excel and SPSS’s Correlations tool to establish

a baseline of relationships between continuous variables and “Total Sharing” and “Total
Intimate Sharing.” I first used Excel to determine linearity by creating scatterplots of the

results. Figures 4.5.1 and 4.5.2, for example, show a linear relationship between the level of

trust respondents had in Facebook and both general and intimate sharing.

Figure 4.5.1
Relationship Between Trust in Facebook and Sharing, Generally
[Scatterplot: x-axis = Level of Trust in Facebook (0-12); y-axis = Number of Items Shared on Facebook (0-25)]

Figure 4.5.2
Relationship Between Trust and Sharing Intimate Information
[Scatterplot: x-axis = Level of Trust in Facebook (0-12); y-axis = Number of Intimate Items Shared (0-10)]

As Figures 4.5.1 and 4.5.2 also show, there are no significant outliers. Variables are

continuous with evidence of a linear relationship. Using the Shapiro-Wilk test of normality,

the variables meet the normality assumption (all Sig. values > 0.05). With all Pearson

product-moment correlation assumptions met—the data showed no violation of normality,

linearity or homoscedasticity—I ran the correlation in SPSS. Table 4.5.3 displays the results.

Table 4.5.3:
Demographic Correlations with Sharing on Facebook

                                              Total Sharing   Total Intimate Sharing
Networked Level     Pearson Correlation       .228**          .068
                    Significance (2-tailed)   .000            .181
                    n                         386             386
Trust in Facebook   Pearson Correlation       .722**          .577**
                    Significance (2-tailed)   .000            .000
                    n                         386             386

** Correlation is significant at the 0.01 level (2-tailed)

Table 4.5.3 shows that a Pearson product-moment correlation was run to determine the

relationship, if any, between the extent of an individual’s use of and presence on online

social networks and the level of trust in Facebook, on the one hand, and the extent of

sharing on the platform, on the other. There was a strong, positive correlation between level of trust in

Facebook and amount of information shared on the platform, which was statistically

significant (r = .722, n = 386, p < .0005). There was a slightly weaker, but still strong,

positive correlation between level of trust in Facebook and amount of intimate information

shared on the platform, which was statistically significant (r = .577, n = 386, p < .0005). This

suggests that trust in the platform translates not only to general use—sharing news articles

and non-personal information—but is also related to individuals’ sharing of personal or

intimate information like telephone numbers, sexual orientation, and feelings of depression.

The difference between sharing general information and more personal information is seen

in the Pearson correlations in Table 4.5.3 relating “Networked Level” and sharing behavior.

The variable “Networked Level” refers to the extent to which an individual respondent is

networked, or maintains and actively uses an online social network. Although there was a

relatively weak, but still statistically significant positive correlation between networked level

and amount of information shared on the platform (r = .228, n = 386, p < .0005), the

relationship and its significance disappear when correlated only with intimate information

shared (r = .068, n = 386, p = .181). This suggests that more networked individuals are

simply more active online, not more revealing. Trusters, however, are sharers.
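The Pearson coefficient reported in Table 4.5.3 is a standard computation; a minimal pure-Python sketch follows. The analysis itself was run in SPSS, and the toy vectors below are hypothetical, not survey data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient (the statistic that
    SPSS's Correlations tool reports; significance testing omitted here)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical toy vectors, not survey data: trust-in-Facebook scores and
# number of items shared.
trust = [2, 4, 5, 7, 9, 10]
shared = [3, 6, 8, 10, 15, 18]
print(round(pearson_r(trust, shared), 3))  # 0.988
```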

Multiple regression analysis was then conducted to determine if the extent of sharing

on Facebook and the extent of sharing intimate information on Facebook can be predicted

by level of trust in the platform. All assumptions of multiple regression were met. The

dependent variables—“Total Sharing” and “Total Intimate Sharing”—are continuous. Six

independent variables were used: Gender, Age, Sexual Orientation, Education Level,

Networked Level, and Trust Level. As noted above, the data showed no violation of

normality, linearity or homoscedasticity. I hypothesized that trust level would be the only

statistically significant predictor of sharing behavior and that networked level would not

adequately predict a willingness to share intimate information. Tables 4.5.4 and 4.5.5 display

the results.

Table 4.5.4
Multiple Regression: Total Sharing on Facebook

Model Summary

Model   R      R Square   Adj. R Square   Std. Error of Estimate
1       .738   .545       .538            2.7234

ANOVA

Model          Sum of Squares   df    Mean Square   F        Sig.
1  Regression  3366.423         6     561.071       75.649   .000
   Residual    2810.947         379   7.417
   Total       6177.370         385

Coefficients

                        Unstandardized Coefficients   Standardized Coefficients
Model                   B        Std. Error           Beta     t        Sig.
1  (Constant)           2.123    .789                          2.693    .007
   Gender               .270     .309                 .033     .874     .383
   Age                  .005     .111                 .002     .043     .966
   Sexual Orientation   -.253    .333                 -.028    -.758    .449
   Education Level      .073     .255                 .011     .284     .777
   Networked Level      .425     .102                 .149     4.184    .000
   Trust in Facebook    1.228    .061                 .708     20.126   .000

Table 4.5.5
Multiple Regression: Total Intimate Sharing on Facebook

Model Summary

Model   R      R Square   Adj. R Square   Std. Error of Estimate
1       .588   .346       .336            1.5155

ANOVA

Model          Sum of Squares   df    Mean Square   F        Sig.
1  Regression  460.813          6     76.802        33.438   .000
   Residual    870.513          379   2.297
   Total       1331.326         385

Coefficients

                        Unstandardized Coefficients   Standardized Coefficients
Model                   B        Std. Error           Beta     t        Sig.
1  (Constant)           .056     .439                          .129     .898
   Gender               .088     .172                 .023     .510     .611
   Age                  .162     .062                 .117     2.614    .009
   Sexual Orientation   .069     .186                 .017     .374     .709
   Education Level      -.228    .142                 -.072    -1.604   .110
   Networked Level      .018     .057                 .014     .327     .744
   Trust in Facebook    .467     .034                 .580     13.748   .000

With respect to Table 4.5.4, R = .738 suggests that the independent variables are good

predictors of the extent of sharing behavior on Facebook, explaining 54.5% (R Square) of

the variability in extent of total sharing. Adjusting for the number of predictors, the model

still explains 53.8% (Adjusted R Square) of the variability in total sharing. The

ANOVA table shows that the independent variables statistically significantly predict the
dependent variable, F(6, 379) = 75.649, p < .0005 (i.e., the regression model is a good fit of

the data). Looking at the Coefficients table, we see the power of each independent variable

as a predictor when the other variables are held constant. There are two statistically

significant independent variables: Networked Level and Trust Level. The unstandardized

coefficient for Networked Level is equal to .425, suggesting that for every additional social

network in which a respondent participates, there is an increase in sharing on Facebook of .425 items. The

unstandardized coefficient for Level of Trust is equal to 1.228, suggesting that for every

additional level of trust in Facebook, there is an increase in sharing of 1.228

items on the platform. This outcome makes sense given the correlations above, which found

statistically significant correlations with both variables vis-à-vis sharing on Facebook, but a

stronger relationship with level of trust in the platform.

Table 4.5.5 further verifies my initial conclusion above that more networked

individuals are simply more active online, but that trusters share personal information. With

respect to Table 4.5.5, R = .588 suggests that the independent variables are fair predictors of

the extent of sharing behavior on Facebook, explaining 34.6% (R Square) of the variability in

extent of total sharing. Adjusting for the number of predictors, the model still explains

33.6% (Adjusted R Square) of the variability in total sharing. This is, admittedly, not as strong

a prediction as in Table 4.5.4. Conceding that point, we still see that the

independent variables statistically significantly predict the dependent variable, F(6, 379) =

33.438, p < .0005 (i.e., the regression model is a good fit of the data). As for the Coefficients

table, we see that Trust Level is the only statistically significant predictor among the

independent variables. That Networked Level is not supports the conclusion that being

more networked is correlated with more online activity rather than with any increased

willingness to share personal information. The unstandardized coefficient for Level of Trust
is equal to .467, suggesting that for every additional level of trust in Facebook, there is an

increase in sharing of .467 intimate items on the platform. It also bears

mentioning that age, with an unstandardized coefficient of .162, bears a statistically

significant relationship (p = .009) to a willingness to share intimate information. The data

show that for every jump in age category—namely, from under 18 to 18-25, or from 18-25

to 26-35—there is an increase in sharing on Facebook of .162 intimate items. If true, this

challenges the common assumption that young persons are willing to share more intimate

information online than their elders. However, an overrepresentation of respondents in their

late 20s and early 30s may be skewing these results. Suffice it to say, trust remains an

important contributing factor to a willingness to share personal information online.
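The regression step above can be sketched as ordinary least squares via the normal equations, (X′X)b = X′y. The sketch below is illustrative only: it uses two hypothetical predictors rather than the six-variable SPSS model reported in Tables 4.5.4 and 4.5.5, and the outcome is constructed to have an exact linear fit so the coefficients can be recovered exactly.

```python
# Illustrative ordinary least squares via the normal equations (X'X)b = X'y,
# solved with Gaussian elimination. Two hypothetical predictors stand in for
# the six-variable SPSS model; the outcome is constructed to fit exactly.

def ols(rows, y):
    """rows: list of predictor tuples; y: outcomes.
    Returns [intercept, b1, b2, ...]."""
    X = [[1.0] + list(r) for r in rows]   # prepend an intercept column
    k, n = len(X[0]), len(X)
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):                  # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k                      # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, k))) / xtx[r][r]
    return beta

# Hypothetical data: sharing = 2 + 1.2 * trust + 0.4 * networked_level.
predictors = [(1, 1), (2, 3), (4, 2), (6, 1), (8, 4), (10, 2)]
sharing = [2 + 1.2 * t + 0.4 * m for t, m in predictors]
b0, b_trust, b_net = ols(predictors, sharing)
print(round(b0, 3), round(b_trust, 3), round(b_net, 3))
```

Because the outcome is exactly linear in the predictors, the recovered coefficients match the constructed ones; with real survey data, of course, the fit is inexact and the standard errors and significance tests reported in the tables become the interesting quantities.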

2. Question: If trust in Facebook is correlated with willingness to share sometimes intimate

information on the platform, what, if anything, helps determine when individuals are willing to

share that information with strangers?

Hypothesis: Strong overlapping networks and a shared stigmatizing social identity are

the strongest indicators of trustworthy strangers and, therefore, are likely to be the strongest

determinants of a decision to accept a “friend request” from a stranger.

As discussed above, we know that many online social network users share

information with “friends” that they have never met offline. We also know from Chapter 2

that current law is taking this tendency to share with strangers and doing violence to our

broader privacy interests. But the social science literature, much of which was discussed in

Chapter 3, suggests that trust can develop among strangers to the point where individuals

who do not know each other still feel comfortable sharing and do not expect to have their

privacy destroyed. There are also powerful reasons why society as a whole would be better

off if the law encouraged rather than discouraged this kind of sharing with strangers. If this
is true, then privacy-as-trust has the potential to protect personal privacy in a networked

world replete with social interactions among offline strangers.

The survey asked respondents several questions about whether a given piece of

information about a stranger, defined as an individual they had never met offline in person

before, would make it more or less likely that they would accept the stranger’s “friend

request.” The questions, to which individuals responded on a Likert scale ranging from

“much less likely” to “much more likely,” are available in Appendix A. They cover a wide

range of possible reasons for accepting a “friend request” from a stranger, from “large

number of mutual friends” and “the stranger is friends with your close friends” to “physical

attractiveness” and “you will never see the stranger in real life.” Answers to the first and

second questions would speak to the strength of overlapping networks. Respondents were

also asked if they are more likely to accept a “friend request” from a stranger who shares

their minority status. This last question was used as a proxy for determining the role of a

stigmatizing identity in developing a connection with a stranger.

The first striking feature of the data is that certain factors received an

overwhelming concentration of “more likely” and “much more likely” answers, while the

modes of other factors centered on “neither more nor less likely.” For example, 83.6% of

respondents (n = 341) stated that they were at least “more likely” to accept a friend request

from a stranger if they shared many mutual friends. A similarly high percentage (80.6%) of

respondents (n = 340)39 said that they are more likely to welcome a stranger into their online

39 There were different n’s for each question in this section. Respondents were not required to answer all questions, and many respondents who had never accepted a friend request from a stranger elected to answer none of them. But respondents who had no history of bringing strangers into their online networks were allowed to select answers based on how they would respond if they accepted friend requests from strangers in the future. That many respondents elected to answer certain questions and not others is itself notable. The factors with the highest n (i.e., highest participation) were “you assume you will never meet the stranger” (n = 367), “same sexual orientation” (n = 363), “same or similar political views” (n = 354), “many mutual friends” (n = 341), and “friends with your close friends” (n = 340). Notably, most respondents elected to respond to the first three by emphasizing how the factor would not make them more willing to accept friend requests from strangers. To the latter two, our proxies for strong overlapping networks, respondents noted that these factors were much more powerful considerations.

social network if they shared similar close friends. The next most common positive influencing factors were that the stranger would be a good professional contact (70.0%, n = 283) and attending the same college or university (57.3%, n = 281). No other factor breached the 50% mark. This suggests that in raw numbers, strong overlapping networks, as represented by mutual friends generally and friends of close friends, may be a powerful force in making individuals feel comfortable sharing with offline strangers.

The one factor that did not immediately jump out as a powerful motivating force for accepting friend requests from strangers was “same sexual orientation”: 54.7% of respondents (n = 386) felt that sharing the same sexual orientation would have no effect on their decision to accept a stranger’s friend request. As I argued in Chapter 3, however, sharing a strong stigmatizing identity should make an individual more willing to share information with strangers. Evaluating the raw data in Excel shows that of the 84 individuals who said that sharing the same sexual orientation would make them “more likely” or “much more likely” to accept a stranger’s friend request, 56 of them (exactly 2/3 or 67%) identified as gay, lesbian, or bisexual.

I ran binary logistic regression in SPSS to predict what demographic factors, if any, would make a Facebook member more willing to share personal information with a stranger of the same sexual orientation. I collapsed the Likert scale responses on the impact of “same sexual orientation” into a nominal scale: 1 for those who responded “more likely” and “much more likely,” and 0 for all other responses. I did this for three reasons: simplicity, significance, and relevance. There were only a few respondents who said that sharing the same sexual orientation would make it “less likely” or “much less likely” that they would accept a

stranger’s friend request. Furthermore, for the purposes of determining whether a given

factor is a predictor of a willingness to accept strangers’ friend requests, there is no relevant

difference between being indifferent to sharing the same sexual orientation and “less likely”

responses. I also collapsed the LGB demographic into those that identified as heterosexual

(0) and those that identified as either lesbian, gay, or bisexual (1). The survey reflected a

conscious choice to exclude “transgender” as a category because of the variance of the

definition and perception among various demographic groups. All assumptions of binary

logistic regression were met. The dependent variable (importance of same sexual orientation

to accepting a stranger’s friend request) is dichotomous, coded “more likely” versus “not more likely.”

likely.” Independent variables are ordinal or nominal, and we have proportional odds. The

results are displayed in Table 4.5.6.
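The recoding described above amounts to two small mapping functions; a sketch, where the Likert labels follow the survey's response scale and the example responses are hypothetical:

```python
# Sketch of the recoding described above: five-point Likert responses on the
# "same sexual orientation" question collapsed to 0/1, and sexual orientation
# collapsed to heterosexual (0) vs. LGB (1). Example responses are hypothetical.

LIKERT_TO_BINARY = {
    "much less likely": 0,
    "less likely": 0,
    "neither more nor less likely": 0,
    "more likely": 1,
    "much more likely": 1,
}

def collapse_orientation(label):
    """Heterosexual (0) vs. lesbian, gay, or bisexual (1)."""
    return 0 if label == "heterosexual" else 1

responses = ["more likely", "neither more nor less likely", "much more likely"]
coded = [LIKERT_TO_BINARY[r] for r in responses]
print(coded)  # [1, 0, 1]
print(collapse_orientation("heterosexual"), collapse_orientation("gay"))  # 0 1
```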

Table 4.5.6:
Predicting Importance of Sharing Same Sexual Orientation for Willingness to Accept
Friend Requests from Strangers

Model Fitting Information

Model            -2 Log Likelihood   Chi-Square   df   Sig.
Intercept Only   208.029
Final            101.657             106.372      9    .000

Goodness-of-Fit

           Chi-Square   df   Sig.
Pearson    48.602       40   .165
Deviance   40.782       40   .436

Pseudo R-Square

Cox and Snell   .254
Nagelkerke      .384
McFadden        .271

Parameter Estimates

                                                     Estimate   Std. Error   Wald     Sig.
Threshold   Same sexual orientation as stranger      .133       .593         .050     .823
Location    Age = 1.0                                1.643      1.123        2.143    .143
            Age = 2.0                                .867       .605         2.053    .152
            Age = 3.0                                .138       .532         .067     .796
            Age = 4.0                                .061       .603         .010     .919
            Age = 5.0                                -.738      .642         1.323    .250
            Age = 6.0                                0
            Gender = 0                               1.079      .322         11.206   .011
            Gender = 1.0                             0
            Education Level = 1.0                    -1.461     .924         2.496    .114
            Education Level = 2.0                    -.621      .331         3.517    .061
            Education Level = 3.0                    0
            Sexual Orientation Demographic = 0       -2.356     .317         55.133   .000
            Sexual Orientation Demographic = 1.0     0

The results are clear. As Table 4.5.6 shows, the odds of those who identify as either lesbian,

gay, or bisexual being more willing to accept a Facebook friend request from a stranger if the
stranger was also LGB was 2.356 (95% CI, 1.734 to 2.978) times that of heterosexuals. This

was a statistically significant effect: Wald χ2(1) = 55.133, Sig = .000. No other independent

variables showed a similarly strong and significant relationship. Gender came the closest to

showing a statistically significant impact, but that may be due to an artificially high number

of respondents who identify as both “male” and “lesbian, gay, or bisexual.” When excluding

gender from the analysis, the significance of sexual orientation on the odds of bringing a

stranger into an online social network is even more pronounced. In this case, the odds of

those who identify as LGB being more willing to accept a stranger’s friend request on

Facebook if the stranger was also LGB was 2.648 (95% CI, 2.049 to 3.247) times that of

heterosexuals (Wald χ2(1) = 75.004, Sig = .000). This, of course, makes sense. For those for

whom a particular identity is important to their worldview, values, perspective, or behavior,

that similar identity in others is a sign of sharing important values, which, as discussed in

Chapter 3, is a signal of trustworthiness (Parsons, 1978, p. 47).
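As a rough check, the Wald statistic and the 95% interval reported above can be recovered from the printed coefficient and standard error in Table 4.5.6 (Estimate = -2.356, Std. Error = .317): the Wald chi-square is (B / SE)², and the interval is B ± 1.96 × SE. Small discrepancies from the table's 55.133 and (1.734, 2.978) reflect rounding of B and SE in the printed output.

```python
# Recovering the reported Wald chi-square and 95% interval from the rounded
# coefficient (B = -2.356) and standard error (SE = .317) printed in Table
# 4.5.6. Small differences from the table reflect rounding.

def wald_chi_square(b, se):
    return (b / se) ** 2

def ci_95(b, se):
    return (b - 1.96 * se, b + 1.96 * se)

b, se = -2.356, 0.317
print(round(wald_chi_square(b, se), 3))   # 55.237 (table: 55.133)
low, high = ci_95(abs(b), se)
print(round(low, 3), round(high, 3))      # 1.735 2.977 (table: 1.734, 2.978)
```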

Section 4.6: Limitations

Although the data have allowed us to identify certain statistically significant variables

as good predictors of sharing behavior of this sample and to identify that strong overlapping

networks and sharing a stigmatizing social identity may play a role in encouraging this set of

Facebook users to share personal information with strangers, I recognize certain potential

limitations to the data that counsel caution before drawing grander conclusions: sample

bias embedded in the survey architecture, variations in the definition of “intimate”

information, inadequate or incomplete proxies in the data, and limited variables tested.

In order to make broad conclusions about general behavior, a large simple random

sample is ideal. The survey analyzed in this Chapter was restricted to Facebook users, which

may represent a population already biased in favor of sharing personal information. After all,
participation in Facebook is voluntary and necessarily involves sharing with others. Although

the Total Intimate Sharing dependent variable was an attempt to distinguish between mere

online activity and sharing personal information, the population surveyed is observably

different from the broader American, let alone international, population of persons over the

age of 13. What’s more, the survey was disseminated through personal networks, which may

further bias the sample toward individuals who resemble each other. I attempted to resolve

that problem ahead of time by distributing the survey to a wide population of diverse

individuals from different backgrounds. But I must concede that a simple random sample of

Facebook users may not have been achieved. The analysis conducted herein is, therefore,

necessarily limited to the unique population of Facebook users, and represents the beginning

of a broader study of sharing behavior.

Although the difference between “Total Sharing” and “Total Intimate Sharing” is

important to the conclusion that the level of trust individuals have in Facebook influences

their willingness to share personal information, there is the potential that the distinction

between the two variables inadequately captures the intended result. As I noted above, the

questions constituting the “Total Intimate Sharing” data set were selected based on the

social science literature discussed in Chapters 2 and 3 and a separate survey in which 66 New

York Law School students were asked to select “intimate or personal” or “neither intimate

nor personal” for each item on the Survey. The modes were used to select the top 12 items; I

used personal judgment to break ties at the margin. There are two potential limitations to

this design that should be noted: what is personal for law students in New York may not

adequately capture what is personal for others, and the use of personal judgment on

marginal matters injects a dose of arbitrariness to the data. The impact of these problems is

itself limited because a diverse population was used and personal judgment was used on only

two items.
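The mode-based selection procedure described above is simple enough to sketch in code. The following is a hypothetical illustration — the item names and votes are invented for demonstration, not drawn from the actual survey — of tallying binary ratings and keeping the top-rated items:

```python
# Hypothetical sketch of the item-selection procedure: each survey item
# receives a binary "intimate" (1) / "not intimate" (0) vote from each rater;
# items are ranked by how often they were rated intimate, and the top k are
# kept. Ties at the cutoff were broken by judgment, which this sketch cannot
# reproduce. Item names and votes below are invented for illustration.

def top_intimate_items(ratings, k=12):
    """ratings: dict mapping item name -> list of 0/1 rater votes."""
    counts = {item: sum(votes) for item, votes in ratings.items()}
    return sorted(counts, key=counts.get, reverse=True)[:k]

# toy data: three items, five raters each
ratings = {
    "relationship status": [1, 1, 1, 0, 1],
    "favorite movie":      [0, 0, 1, 0, 0],
    "health history":      [1, 1, 1, 1, 1],
}
print(top_intimate_items(ratings, k=2))  # → ['health history', 'relationship status']
```

With the actual 66-rater data the same ranking step would produce the 12-item "Total Intimate Sharing" set, subject to the judgment calls at the margin noted above.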

Several questions were used as proxies for a cluster of behaviors. For example, an

individual’s willingness to accept a Facebook “friend request” from a stranger who shares

the same sexual orientation was, along with self-identification of sexual orientation, used as a

proxy for the impact of sharing an identity important to one’s persona. The impact of sexual

orientation on lesbian, gay, and bisexual individuals may be different from

the impact of race on persons of color, or religion on members of orthodox religious

communities. I elected to use sexual orientation because I was certain of my ability to

include a sufficiently large sample of members of the LGB community in my sample

population, but was less certain about inclusion of other minority communities. Plus, salient

identities are not restricted to minority status. Conclusions on the role of sharing a

stigmatizing or important social identity should thus be cabined by this limitation.

Finally, the survey asked for only a limited amount of demographic information,

including age, gender, sexual orientation, education level, and networked level. I elected to

limit the demographic questions as part of an overall attempt to keep the survey short

enough to not chill complete responsiveness. Still, the desire to limit the survey to a

maximum of 7-10 minutes may have contributed to a less rich demographic portrait and,

therefore, too few independent variables in the analysis.

There are other limitations to the data; although they counsel pause when

drawing conclusions, they are likely not fatal to the project. Most importantly, the

quantitative work done in this thesis is intended as an initial salvo in a career-long project to

detail the nature, contours, and details of trust and their impacts on law and policy. This

study serves as a proof of concept at this stage rather than a rigorous test of the hypotheses.
Future researchers can take several concrete steps to create a more comprehensive, rigorous

study. First, population biases can be mitigated by selecting a larger, more diverse population

not tied to any social network. For example, if researchers want to determine the impact of

racial identity on racial minorities’ willingness to trust, more racial minorities must be

included in the sample set. A larger, more diverse random sample can be achieved by

capturing an already diverse sample set, like the entire population of Columbia

undergraduates. Sufficient participation can be incentivized with a raffle or other possible

rewards for completing the survey. We can also attempt to partner with Facebook or other

online platforms to distribute the survey. Second, accepting a “friend request” may not be

the best proxy for a willingness to share with strangers. Instead, future researchers could ask

a specific question or series of questions that involve the revelation of personal information,

and ask if various factors would make sharing more likely. Third, the survey can and should

be extended beyond the Facebook platform to other social networks and to offline

populations. This is harder to test without a simulated sharing experiment. An ideal sharing

experiment would invite individuals to simulate interactions with different persons, including

offline situations, followed by a series of questions to determine whether a given exogenous

change—a similar race or sexual orientation, an indication of experience, a mutual friend—

would make it “more likely” or “not more likely” that the interaction would proceed. These

and other additions can bring future researchers closer to more rigorous study of sharing

behavior. For now, although the limitations are clear, there is still some indication that

experience, strong overlapping networks, and sharing an important identity encourage

sharing of personal information.
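The vignette-style experiment proposed above could be organized along the following lines. This is a minimal sketch under stated assumptions — the condition names, function names, and data shapes are my own illustrations, not an actual protocol:

```python
import random

# Minimal sketch of the proposed vignette experiment: each participant is
# randomly assigned one exogenous condition to vary in a simulated
# interaction, then reports whether sharing becomes "more likely".
# Condition names and data shapes here are illustrative assumptions.

CONDITIONS = ["shared identity", "indication of experience", "mutual friend", "control"]

def assign_conditions(participant_ids, seed=0):
    """Randomly assign each participant to one vignette condition."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

def estimate_effect(responses, condition):
    """Share of participants in a condition answering 'more likely'."""
    votes = [r["more_likely"] for r in responses if r["condition"] == condition]
    return sum(votes) / len(votes) if votes else None

# toy responses for illustration
responses = [
    {"condition": "mutual friend", "more_likely": True},
    {"condition": "mutual friend", "more_likely": False},
    {"condition": "control", "more_likely": False},
]
print(estimate_effect(responses, "mutual friend"))  # → 0.5
```

Comparing each condition's rate against the control condition would indicate whether a given exogenous change — shared identity, experience, a mutual friend — encourages sharing.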

CHAPTER FIVE:
The Effects: The Tort of Breach of Confidentiality
Privacy-as-trust is a pragmatic, sociological approach to understanding privacy

behavior and crafting a legal response. It captures our intuitive sense about intrusions into

privacy; its underlying theory is reflected in at least one quantitative case study about

our social sharing behavior online. It then links the two into a single doctrine—

namely, that because trust is both an essential element of social interaction and at the core of

our sense of invasion of privacy, privacy law should protect, foster, and incentivize

disclosures in situations of trust. That doctrinal coherence offers judges a workable path for

resolving questions of privacy law: After identifying the nature of the relationship between

the parties involved, judges should look to the presence of experience, strong overlapping

networks, and identity to determine whether a given disclosure was made in a context of

trust. If so, the sharer should retain a privacy interest in the information disclosed; if not, the

privacy interest is extinguished.

This interpretive framework has the potential to function in a variety of contexts; my

future research will bear out the full scope of privacy-as-trust’s implications. For the

remainder of this thesis, I would like to focus on three privacy questions of particular

importance today: First, and discussed in this Chapter, should those who widely disseminate

personal information about you be liable for an invasion of privacy even if you had

previously disclosed the information to a select, limited few? Second, does the Fourth

Amendment’s guarantee against unreasonable searches and seizures require neither a warrant

nor exclusion of evidence at trial when the evidence at issue had already been disclosed or

was accessible to a limited extent? And third, is pre-patenting use or demonstration of an

invention sufficiently public to extinguish patent rights? All three of these privacy law

problems stem from the same underlying issue: the limited disclosure of personal

information. They ask the same socio-legal question: what is the line between public and

private? I will argue that privacy-as-trust represents the most just way forward.

The first problem is a particularly pressing one in a modern, networked world. As

the legal scholar Frank Pasquale (2014a) has compellingly shown in his recent text, The

Black Box Society, much of our personal information today is in the hands of others. It is not just

that we talk about ourselves to our friends and have conversations in public places; we have

always done that. We share sometimes intimate pictures on internet platforms that have our

identifying data and track our likes and dislikes (Simonite, 2012). We hand over credit card

and other information to e-commerce websites that are accounting for an ever-growing

share of all retail sales (Heggestuen, 2013). And a multibillion-dollar data industry that

tracks, collects, analyzes, and learns about us has emerged as a result (Pasquale, 2014a).

Today, then, quite unlike the world before the internet, much of what we traditionally

considered private information is held or known by others with whom we have no special

relationship. The networked world is a world of intimate strangers.

The cost of further dissemination of personal information already in the hands of

others can be significant. It can be embarrassing when information meant for a few ears is

transmitted to thousands. It can also make individuals targets for bullying and harassment (Lipton,

2010; Waldman, 2012), employment discrimination, and denial of service (Pasquale, 2014b).

As such, this is a prime candidate for law reform.

If trust is at the core of privacy, then the remedy for invasions of privacy should

redress the breach of trust. And if, as the theory and data suggest, trust can exist or be

breached (and privacy can be maintained or invaded) among intimates as well as certain

strangers given the right social context, then the remedy for invasions of privacy should be
similarly broad in scope. Fortunately, the trust-based tort of breach of confidentiality, which

has a long tradition in Anglo-American common law, can provide a clear, practical way

forward for victims of privacy invasions and for judges looking for answers to vexing

problems of modern privacy. Stunted in American law by various contingent historical

factors—most notably, William Prosser’s failure to include it in his article on “privacy

torts”—the breach of confidentiality tort frees us from traditional privacy scholarship

and reflects the meaning and implications of privacy-as-trust: it focuses not on the individual

or the nature of the information, but rather on the social relationship in which the

information is shared.

Neil Richards and Dan Solove (2007) have recounted the history of the tort of

breach of confidentiality, showing how embedded it is in Anglo-American law, and

proposed a rejuvenation of the tort in the United States by importing modern confidence

jurisprudence from Britain. Other scholars have also proposed using the tort to protect

privacy (Hartzog, 2014; Bezanson, 1992; Zimmerman, 1983), even as still others have

suggested it would not do much good (Gilles, 1995). I would like to build on their work and

show how the tort is premised on particularized social trust and propose modifications to

the tort based on the lessons of this thesis. I then apply the tort of breach of confidentiality

to determine when we retain privacy interests in previously disclosed information,

concluding that the tort can better protect personal privacy in a world of rampant disclosures

to myriad third parties.

Section 5.1: The Tort of Breach of Confidentiality

The tort for breach of confidentiality is premised on particularized social trust and

would impose liability when someone who is expected to keep confidences divulges them. I

propose that the claim would have three elements: a successful plaintiff must prove that (1)
the information is neither trivial nor already widely known, (2) the original disclosure

happened in a context that indicated trust, and (3) the use of the information caused an

articulable, though not necessarily individualized, harm. These elements are based on the

work of Richards and Solove (2007) and Helen Nissenbaum’s (2010) theory of privacy as

contextual integrity, but the claim construct departs from those influences to learn the

lessons of privacy-as-trust. Those lessons are that we expect to retain privacy even after

initial disclosures, that strangers can receive information in contexts of trust reasonably

developed by identity, strong overlapping networks, and other indicia of trust based on the

totality of the circumstances, and that, as injuries to trusting relationships, privacy harms may

antedate any specific, personalized, or defamatory effects. Admittedly, this proposal would

take privacy tort law in a new direction; but that reorientation is necessary to protect

personal privacy in a networked world filled with involuntary and voluntary disclosures.
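The three-element structure of the proposed claim can be rendered schematically. The following is my own illustrative sketch of the doctrinal logic — the names and types are assumptions for exposition, not legal authority:

```python
from dataclasses import dataclass

# Schematic rendering of the proposed three-element breach of confidentiality
# claim. This illustrates the doctrinal structure only; the field and
# function names are my own assumptions.

@dataclass
class Disclosure:
    trivial_or_widely_known: bool    # element 1, stated in the negative
    context_indicated_trust: bool    # element 2: experience, strong overlapping
                                     # networks, identity, other indicia of trust
    articulable_harm: bool           # element 3: need not be individualized

def states_claim(d: Disclosure) -> bool:
    """A plaintiff must satisfy all three elements for the claim to proceed."""
    return (not d.trivial_or_widely_known
            and d.context_indicated_trust
            and d.articulable_harm)

# e.g., non-trivial information shared in a trusted context and later misused:
print(states_claim(Disclosure(False, True, True)))  # → True
```

The conjunctive structure makes the doctrinal point explicit: trivial or already widely known information, or disclosure outside a context of trust, or the absence of any articulable harm each independently defeats the claim.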

Confidentiality law has always been premised on particularized social trust as

captured by the privacy-as-trust doctrine. For example, the centuries-old common law

evidentiary privileges, where confidentiality law got its start, prohibit one party to a special

relationship from revealing the other’s secrets in court (Richards and Solove, 2007). As the

Supreme Court has stated repeatedly, the attorney-client privilege “encourage[s] full and

frank communication” and allows both parties to feel safe to share facts, details, and

impressions without fear of disclosure (Upjohn v. United States, 1981, p. 389); that is, it

protects the relationship, fosters the confidence necessary to share, and puts the weight of

the law behind each party’s expectation that the other would behave with discretion. The

same is true for spousal privilege, which protects “marital confidences” because they are

“essential to the preservation of the marriage relationship” (Wolfle v. United States, 1934, p. 14).

And the other special relationships that traditionally warranted confidentiality and
discretion—those between a doctor and patient, a clergyman and penitent, a principal and

agent, a trustor and trustee, a parent and child, to name just a few—are all premised on the

expectation that the parties will continue to behave in a manner that protects a discloser’s

confidences (Richards and Solove, 2007).

Notably, the kind of trust at the foundation of special relationship privileges is

precisely the kind of particularized social trust that I argue is at the heart of privacy. We trust

that our attorneys, doctors, confessors, and other fiduciaries will keep our confidences not

because we have long historical data sets that over time prove they do not divulge our

secrets. Rather, confidentiality is the norm because of expertise, strong overlapping

networks, and transference. Lawyers,40 doctors,41 and priests,42 for example, all have canons

of ethics that promise confidentiality. We tend to choose our physicians and lawyers, at least,

based on personal recommendations from our embedded networks (Rabin, 2008): we ask

close friends and those we know well and transfer the trust we have in them to their

recommendations. Particularized social trust, therefore, is at the heart of these relationships

and the law of evidentiary privileges is meant to protect that trust.

Confidentiality law reflects privacy-as-trust even as British confidentiality

jurisprudence has unmoored the tort from the narrow confines of particular relationships.

40Canon 37 of the American Bar Association Canons of Professional Ethics states: “Confidences of a Client: It is the duty of a
lawyer to preserve his client’s confidences. This duty outlasts the lawyer’s employment, and extends as well to
his employees; and neither of them should accept employment which involves or may involve the disclosure or
use of these confidences, either for the private advantage of the lawyer or his employees or to the disadvantage
of the client, without his knowledge and consent, and even though there are other available sources of such
information. A lawyer should not continue employment when he discovers that this obligation prevents the
performance of his full duty to his former or to his new client.”
41As noted in Miles (2003), the modern version of the Hippocratic Oath states, in part: “I will respect the
privacy of my patients, for their problems are not disclosed to me that the world may know.”

42Roman Catholic Canon Law 983 §1 states: “The sacramental seal is inviolable; therefore it is absolutely
forbidden for a confessor to betray in any way a penitent in words or in any manner and for any reason.”

Professors Richards and Solove (2007) cite several cases, many of them dual intellectual

property and confidential relationship cases, to show that the required relationships were

never very narrow.43 Those relationships have become even more attenuated in modern

British confidentiality law, which hinges only “on the acceptance of the information on the

basis that it will be kept secret” (Stephens v. Avery, 1988): Consider the 1969 case of Coco v.

Clark, which, according to Richards and Solove (2007), “crystalized” British confidence law

(p. 161). Coco involved a trade secret, but the court took the opportunity to define the three

elements necessary for a breach of confidentiality claim: the information (1) needs “the

necessary quality of confidence about it,” it (2) “must have been imparted in circumstances

importing an obligation of confidence,” and there must be (3) some use of the information

to the discloser’s “detriment” (Coco v. Clark, 1969). Richards and Solove (2007) show that

subsequent case law has shown these categories to be quite broad: the “quality of

confidence” prong merely means that the information is “neither trivial nor in the public

domain” and the “circumstances” prong extends beyond defined relationships and even to

friends (p. 163). The damage prong has never been clearly explained, but it appears that

British law does not require the kind of specific, particularized harm that is common to

American tort law (p. 164), as evidenced by several British cases that have found the

disclosure per se harmful (Gurry, 1984).

This jurisprudence reflects many of the lessons of privacy-as-trust. By definition, the

tort recognizes that we can retain privacy interests in information already disclosed; after all,

43Richards and Solove (2007) refer to several seminal confidentiality cases involving manuscripts, Duke of
Queensberry v. Shebbeare (1758), medicinal recipes, Yovatt v. Winyard (1820), lecture notes, Abernethy v.
Hutchinson (1825), photographs, Pollard v. Photographic Co. (1888), and etchings, Prince Albert v. Strange
(1848). Even though these cases involve intellectual property, many scholars regard them as seminal
confidentiality cases as well. As Francis Gurry (1984), a leading scholar of confidentiality law, stated,
“Undoubtedly most of the references in the cases to confidential information as property are metaphorical.”

the tort holds the subsequent disseminators liable. Most importantly, it distinguishes

between disclosure in contexts of trust and wider publicity. Privacy-as-trust would extend the

British cases beyond friends to the social obligations that arise even among acquaintances

and strangers, cabined by the presence of the indicia of trust of experience, strong

overlapping networks, identity, and expertise. This jurisprudence should also extend beyond

cases involving traditional defamatory or reputational damages that result from wide

dissemination of information. Under privacy-as-trust and confidentiality law, the breach of

confidence is an invasion of privacy because of the damage the breach has done to our

expectations and relationships. As such, plaintiffs could satisfy the injury requirement of the

claim by showing that possessors of personal information distributed the data to third

parties for purposes unrelated to why the data was given in the first place.

Cases like Dwyer v. American Express (1995) and Shibley v. Time (1975), for example,

could have come out differently had plaintiffs made a breach of confidentiality claim based

on privacy-as-trust. Dwyer involved American Express (Amex) cardholders who objected to

the company’s consumer tracking habits: Amex collected hundreds of data points on

cardholders, tiered them based on spending habits and other factors, and rented both the

raw data and the list to third party partners. Notably, this customer tracking behavior is not

only ongoing today, but also exponentially easier given the dominance of e-commerce and

web platforms’ use of cookies and web beacons to track our online habits. In any event,

objecting to having their data sold to third parties they knew nothing about, cardholders filed

a claim for intrusion upon seclusion, one of Prosser’s privacy torts. That claim requires that

there be “an unauthorized intrusion” into a plaintiff’s private life, but because users of

American Express cards “voluntarily, and necessarily, giv[e] information to [American

Express] that, if analyzed, will reveal a cardholder’s spending habits and shopping
preferences,” there could be no intrusion (Dwyer v. American Express, 1995, p. 1354). The

court rejected the claim. Adopting a “secrecy paradigm” approach, the court found that the

information had ceased to be private: cardholders had already given up their privacy willingly

by using the card with full knowledge that Amex was gathering their data.

A similar fate met the claim in Shibley. In that case, a magazine subscriber sued the

publisher for selling subscription lists to a direct mail advertising business, but the court

rejected the claim because, among other things, there was nothing private about his name,

address, and magazine preferences. Under the conventional understandings of privacy

discussed in Chapter 2, both the Dwyer and Shibley decisions were correct; a conception of

privacy based on control presumes that individuals assume the risk of subsequent disclosure

when they voluntarily reveal their personal information to others. A tort for breach of

confidentiality based on privacy-as-trust offers another way. Under the confidentiality tort,

the fact that the information was previously revealed is irrelevant; what matters is the social

context. Therefore, Dwyer and Shibley should have turned on a broader social analysis of the

disclosure context: Was information given to a third party for a particular purpose—

purchases on credit or magazine subscriptions—and without the expectation of wider use?

Did data usage policies state that customer data would be sold? Had the companies licensed

their customers’ information before? Were data partnerships with third parties sufficiently

routinized such that customers would be aware that information sharing would occur? These

factors respect the role trust plays in initial disclosures, and they are the questions lawyers

must ask to determine if the elements of a breach of confidentiality claim could be made

successfully. The dockets do not provide any answers, which proves that privacy law has

been asking the wrong questions for some time.

And even though the tort so conceived would not fit within the model, discussed in

Chapters 1 and 2, of privacy as a “right against the world,” our modern socio-technological

world requires us to rethink the conventional wisdom. Private parties and public agencies

maintain massive digital dossiers about us (Solove, 2004), ISPs and other digital platforms

hold large amounts of our personal data and may use it to our detriment (Pasquale, 2010),

and voluntary and required disclosures associated with online social networking give others

unprecedented access to our personal histories and information. Brandeis and Warren may

have been acutely aware of the invasive tendencies of an aggressive yellow press. But the

newer risks to personal privacy require innovative solutions that the tort for breach of

confidentiality may be able to provide.

Section 5.2: Further Disclosure of Previously Revealed Information

One of those risks arises when an individual discloses personal information to one

other person or a small group. The general rule of thumb in American privacy law is that

these individuals assume the risk of further disclosure and thus have no recourse when the

recipient of information disseminates it to a wider audience. But privacy-as-trust would hold

that, in certain contexts, when someone reveals private information to one or several

persons, he could reasonably expect that the recipients would not disseminate his

information any further. Therefore, a third party’s further disclosure of that information, this

time to a different and, likely, larger audience, could constitute an invasion of privacy and a

breach of confidentiality.

Currently, there are two problems to address: some courts do not accept this idea at

all, and when others do, there appears to be no coherent scheme for judging when a previous

disclosure leaves a privacy interest intact. Lior Strahilevitz (2005) addressed these issues in an

insightful and powerful article, A Social Network Theory of Privacy, arguing that social science
literature on information dissemination through social networks could give judges an

articulable, quantitative method for adjudicating limited privacy cases. As discussed in

Chapter 3, Professor Strahilevitz’s work puts us on a path toward a more just and fair limited

privacy jurisprudence. However, his theory is weaker than privacy-as-trust, risks further

marginalizing already disadvantaged groups, and fails to protect personal privacy where trust

exists among strangers. I propose that a robust breach of confidentiality tort informed by

British law and the principles of privacy-as-trust would better protect personal privacy and

offer judges a clear, practical tool for adjudicating these cases.

Several cases illustrate the danger and lack of coherence in the current law, many of

which formed the basis for Professor Strahilevitz’s social network theory. In Sanders v. ABC

(1999), the California Supreme Court found that an undercover news reporter violated one

of her subject’s privacy interests in the content of his conversations with her when she

broadcast those conversations on television. ABC had argued, however, that any privacy

right was extinguished by the simple fact that the subject’s co-workers had been present and

overheard the broadcasted conversations. The court disagreed. Privacy, the court said, “is

not a binary, all-or-nothing characteristic. … ‘The mere fact that a person can be seen by

someone does not automatically mean that he or she can legally be forced to be subject to

being seen by everyone’” (p. 72). Here, the court was able to distinguish between

information that was public only as to several co-workers versus information publicized to

the broadcast audience of ABC News.

A similar question was resolved in a similar way in Y.G. v. Jewish Hospital (1990) and

Multimedia WMAZ, Inc. v. Kubach (1994). In Y.G., a young couple that underwent in vitro

fertilization in violation of the doctrines of their conservative church found their images on

the nightly news after attending a gathering at their hospital. Prior to the segment, only
hospital employees and a parent knew of their plans to have a family and the party was only

attended by hospital employees and other participants in the in vitro fertilization program.

The court rejected the argument that the couple’s attendance at the party waived their

privacy rights, holding that the couple “clearly chose to disclose their participation to only

the other in vitro couples. By attending this limited gathering, they did not waive their right

to keep their condition and the process of in vitro private, with respect to the general public”

(p. 502). And in Kubach, an HIV-positive man who had disclosed his status to friends, health

care personnel, and his HIV support group retained a privacy interest in his identity. The

court reasoned that a television station could not defy its promise to pixelate his face merely

because of Kubach’s previous disclosures because those disclosures were only to those “who

cared about him … or because they also had AIDS” (p. 494). Kubach, the court said, could

expect that those in whom he confided would not further disclose his condition.

And then there are those cases that reject the notion that anyone could retain privacy

interests in disclosed information. In permitting agents of General Motors to interview

associates of Ralph Nader and use the information they gathered under false pretenses to

discredit him and his criticisms of the company, a court held that “[i]nformation about the

plaintiff which was already known to others could hardly be regarded as private” (Nader v.

General Motors, 1970, p. 770), ignoring that those “others” were Nader’s friends. Similarly, in

an ironically well-publicized case, a Michigan court found that Consuelo Sanchez Duran, the

Colombian judge who indicted drug kingpin Pablo Escobar, had no privacy right in her

relocated Detroit address; she used her real name when shopping and leasing an apartment

and told several curious neighbors why she had security guards. The court said that these

actions rendered her identity “open to the public eye” (Duran v. Detroit News, 1993, p. 720).

The results of these cases vary. But most importantly, there seems to be no coherent

and consistent way of determining when a previous disclosure extinguishes a private right.

Rights-based theories are of little help. The sharers in these cases freely and voluntarily

disclosed information to others and privacy theories based on separation, secrecy, and

exclusion cannot adequately extend beyond an initial disclosure. They would either give

individuals unlimited power over disclosure or justify the rigid bright-line rules that

characterized Nader and Duran. In cases like Y.G. and Kubach, a central animator of the

holdings was the fact that the plaintiffs’ free and voluntary agreements to attend the hospital

party or go on television, respectively, depended upon the defendants’ assurances that their

identities would not be publicized (Y.G., 1990, p. 501; Kubach, 1994, p. 494). They never

chose to be identified and, therefore, the publicity violated their right to choose to be

private. This makes little sense as a workable theory of privacy. It would grant individuals

total control over a right that must be balanced against others and offers no instruction on

where to draw the line between sufficient and insufficient publicity.

Perhaps social network theory could answer these previous disclosure questions.

After a comprehensive review of this literature, which need not be repeated here, Professor

Strahilevitz (2005) gleaned several practical lessons for adjudicating cases like Nader, Y.G.,

and Kubach. His conclusions are worth quoting in full:

We have seen that weak ties generally do a poor job of aggregating nonredundant
information that is possessed by multiple nodes on a network. Thus, instances in
which scattered private information about an individual is pieced together, and the
aggregated information is disclosed, can be expected to be rare. … By contrast, when
scattered bits of private information exist within a close-knit network of people
linked by strong ties, aggregation of that information is much more likely, and the
plaintiff’s expectation of privacy with respect to the aggregated information ought to
be low.
We also have seen that the more interesting a particular piece of private
information, the less likely it is to degrade as it passes through a network. Thus, if
private information involves a highly unusual or surprising event, a well-known
public figure, or relates to an important current event or trend, it is more likely to be
disseminated through a network. … Relatedly, once interesting information reaches a
supernode, the supernode is more likely to deem the information worth sharing with
her many contacts. And information that can be traced to an inherently credible
source … is also more likely to be disseminated through a network … . As a general
matter, then, a plaintiff ought to expect that if he discloses previously private
information that is likely to be regarded as highly interesting, novel, revealing, or
entertaining, that information is rather likely to be disseminated.

Strahilevitz goes on to apply these lessons to the cases above. In Kubach, the plaintiff had

told medical professionals as well as friends and family about his HIV-positive status.

Strahilevitz concludes that since norms prevent patient information from flowing from

doctors and since several studies suggest that HIV-status information is rarely divulged

outside of certain tight networks, the information was unlikely to get out on its own.

Therefore, Kubach had a privacy interest on which he could sue ABC for its wide

dissemination of his private information (Strahilevitz, 2005, p. 977). Strahilevitz finds Y.G.

harder to decide. He has no study on how knowledge of in-vitro fertilization travels in a

network. Instead, he relies on the assumption that “there appears to be less stigma associated

with in vitro fertilization” than, say, HIV-status (p. 978). The pertinent information—that

the couple was using in-vitro in contravention of their religious community’s wishes—was

hard to piece together, so not many people at the gathering would be privy to it. And many

of the participants would have been either co-participants or health care providers and thus

less likely to spread the news. Strahilevitz found the court’s decision to recognize a privacy

interest “defensible,” though not a slam dunk under social network theory (p. 978).

Social network theory, however, would say Duran came out wrong. Strahilevitz notes

that shopping and eating in restaurants are “weak-tie interactions,” so using one’s real name

would only become interesting and likely to spread through a network if a waiter was able to

piece together that the woman to whom he had just served salad was the Colombian judge who

indicted Pablo Escobar. “Perhaps,” Strahilevitz notes, “a Colombian waiter would have put

two and two together” (p. 979), but the interactions were too fleeting and the information

too complex to be likely to get out and reach a wide audience.

None of Professor Strahilevitz’s conclusions are unreasonable. In fact, they make a

great deal of sense in part because of the attractive elements of his social network theory.

Like Helen Nissenbaum’s (2004) privacy as contextual integrity, a social network theory

elevates the social context of a given interaction over formal rules and the mere fact of

disclosure. It also highlights the important role social science can play in adjudicating

modern legal questions. But there remains a question of evidence. Strahilevitz never states

how lawyers would go about proving the complexity of information, how fast or slow a given

piece of information would flow in a network, or how to identify important nodes in a

network. Absent proof, we are left with assumptions and a judge’s personal views, which

would further marginalize populations whose networks look very different from those of

mainstream members of the American judiciary. That a friend is going through in-vitro fertilization

might be a rather ordinary piece of information for a network of young persons, progressive

women, and members of the LGBT community. The same could hardly be said for radically

different networks of radically different people.

A social network theory of privacy also has a problematic relationship with strangers.

In some cases, if a stranger knows something about you, social network theory would

extinguish your privacy rights (Strahilevitz, 2005, p. 974). But we know that should not be

the case: privacy based on trust can exist among strangers given social cues that invite

revelation and a subsequent interaction. Privacy-as-trust would amend Strahilevitz’s network

theory to appreciate the context of information sharing with strangers and retain privacy

interests in information shared with strangers who nevertheless exist in a relationship of


trust and discretion. The tort for breach of confidentiality may provide a clear, practical

alternative.

Armed with the tools of privacy-as-trust and confidentiality tort, we can consider

limited privacy cases anew. Currently, the cases are resolved using either a bright line rule

that extinguishes privacy after a minor disclosure to even one person or, to use Professor

Strahilevitz’s phrase, an ad hoc “I know it when I see it” standard (p. 973). Ralph Nader and

Consuelo Duran had told several people information about themselves, but a bright line

disclosure rule extinguished any remaining privacy interest in that information as against the

world. But, under privacy-as-trust, what matters is not the mere fact that Mr. Nader and Ms.

Duran told something to others, but rather the context in which they told it. The record

in Nader (1970) does not reveal the exact nature of the questions asked, but we do know

that among those interviewed were Mr. Nader’s “friends” (p. 770). We know from British

confidentiality law that circumstances giving rise to an obligation of confidence can arise

amidst disclosures to friends. Duran (1993) also makes clear that Ms. Duran only told three

neighbors—namely, those with whom she had previous interactions—why she needed

security guards and used her real name to lease a home. The nature of the information, not

to mention the minimal disclosure to a close-knit group, would engender trust against

further disclosure and may satisfy the requirements of the confidentiality tort.

In Sanders, ABC had argued that it could broadcast the conversation in question

because several of Sanders’s co-workers overheard it at the time. The substance of the clip

was rather banal: Sanders noted that he used to be a stand-up comedian and that he was

hardly enamored with his current job doing over-the-phone psychic readings. Given the

original audience and the lack of anything newsworthy or interesting in the conversation,

Professor Strahilevitz’s (2005) social network theory would suggest that it is highly unlikely
that information would have been widely disseminated but for the ABC news report. Here,

the result under privacy-as-trust and the confidentiality tort would be the same. Sanders felt

comfortable disclosing information because the few people around, his work associates,

were trusted to exercise the appropriate discretion about whatever non-work information

they happened to overhear. A privacy interest remains.

Kubach and Y.G. may have involved juicier bits of information, but privacy-as-trust

would protect their rights against further disclosure. Kubach was about HIV-status disclosure,

something that the sociologist Gene Shelley (1995) and others have found is usually only

disclosed in environments contextualized by trust. This kind of information has also been

found to give rise to an obligation of confidence

under British law (Toulson and Phipps, 1996). Therefore, especially given the social,

political, and public health benefits associated with disclosure, privacy-as-trust and

confidentiality tort would note the strong trust that exists in an HIV support group, protect

Kubach’s privacy, and help foster the trust and discretion that permits HIV-status disclosure

in the first place. The breach of confidentiality tort would be satisfied: the information has

the necessary confidential quality, it was only disclosed to friends, doctors, and an HIV

support group, and its dissemination could do significant damage. And, in Y.G., the

attendance at the hospital gathering among other in-vitro couples and hospital personnel

suggests that any information was being disclosed in an environment of trust, much like

Kubach disclosing his status to a support group or to fellow members of the HIV-positive

community. The couple in Y.G. shared with other attendees what they thought was

a stigmatizing social identity; they became a tight-knit, socially embedded group, even though

they were “strangers” in the traditional sense of the word. What’s more, the hospital staff

could also be trusted as experts in their fields. Privacy-as-trust and the confidentiality tort

would both protect the couple’s privacy and encourage them to seek the support of others.

A conception of privacy focused on trust better protects both personal privacy and the socially beneficial effects of

sharing and gives judges a coherent scheme for answering limited disclosure questions. It

reflects our intuitive understanding of the injustice of bright line rules extinguishing privacy

rights after one disclosure. And it appreciates the importance of context in sharing

behavior. In these ways, a breach of confidentiality tort that accepts that trust and discretion

can exist among relative strangers would provide an effective antidote to the current

confusion on privacy.

CHAPTER SIX:
The Effects: The Fourth Amendment and the Third-Party Doctrine
The Fourth Amendment is different, and not just because it regulates the

relationship between the government and individuals rather than relationships between

private persons. There may be normative and historical reasons to suggest that privacy-as-

trust might not be as powerful an interpretive tool for understanding reasonable

expectations of privacy under the Constitution’s guarantee against unreasonable searches and

seizures (U.S. Const. amend. IV). Individuals might be willing to tolerate more invasive

practices from other private parties than from a Kafkaesque government. A separation of

public and private spaces—to foster dissident speech, intellectual pursuits (Cohen, 2003),

and the pursuit of different conceptions of the good life (Rawls, 1971)—may be essential to

the survival of democratic government.44 These are important considerations.

Nevertheless, I argue that privacy-as-trust has a role to play in Fourth Amendment

law, a necessary discussion for any treatise on privacy. For now, I would like to make a

modest two-step argument: that understanding the right to privacy as protecting

relationships of trust occupies a heretofore underappreciated, though not yet predominant,

role in Fourth Amendment jurisprudence, and that trust is engaged in a fight with other

principles for social construction of the Fourth Amendment. In this way, the development

of constitutional privacy law mirrors the model posed by the sociologists Trevor Pinch and

44 In Talley v. California (1960), the Supreme Court linked a privacy right with fostering important dissident
speech: “Anonymous pamphlets, leaflets, brochures and even books have played an important role in the
progress of mankind. Persecuted groups and sects from time to time throughout history have been able to
criticize oppressive practices and laws either anonymously or not at all. … Before the Revolutionary War
colonial patriots frequently had to conceal their authorship or distribution of literature that easily could have
brought down on them prosecutions by English-controlled courts. Along about that time the Letters of Junius
were written and the identity of their author is unknown to this day. Even the Federalist Papers, written in
favor of the adoption of our Constitution, were published under fictitious names. It is plain that anonymity has
sometimes been assumed for the most constructive purposes” (p. 64).

others, called the Social Construction of Technology (SCOT), for understanding the

emergence of technology in society. This should make intuitive sense: as I argued in Chapter

1, the development of privacy tort, constitutional, and statutory law has historically been

bound up with the social uses of technology and is best understood through this social

construction model. In this Chapter, I flesh out that conclusion and apply the SCOT

interpretive schema toward a richer understanding of ongoing Fourth Amendment disputes,

arguing that privacy-as-trust is as much part of the Fourth Amendment’s period of

interpretive flexibility as other, ostensibly more popular theories.

In Chapter 1, I showed that a simple act-react paradigm cannot explain the

development of privacy law over time. The history is actually more complex, less linear, and

decidedly social. I showed that the Fourth Amendment, like privacy tort law, does not

respond to new technologies as engineering marvels; rather, it participates in the process by

which meanings are imbued into innovations. It happened with the internet and computer

technology, where government and individual actors fought over how the internet would be

used in society (Waldman, 2015). Fourth Amendment law helped recognize the privacy

challenges such uses have posed. It also happened with sensory enhancing technologies,

where law enforcement fought with politicians to make surveillance a common and abused

practice. I showed how cases like Katz and statutes like the FCA responded to this social

construction, expressing disfavor toward rampant wiretapping.

But that is only the first step; the next step is to determine the basis of the judiciary’s

response. I want to know if there is a coherent basis upon which Fourth Amendment

privacy law assesses the government use of the internet as a database of personal

information and law enforcement’s use of sensory enhancing technologies. Some scholars,

most notably Orin Kerr (2004), have argued that the Fourth Amendment responds to new
technologies with a relatively stable respect for property, or by borrowing concepts from real

property law to determine the privacy interest and whether it was invaded. If that is true, then we

should see courts finding Fourth Amendment searches where property lines are defeated.

Others have shown that secrecy has been the recurring theme in Fourth Amendment privacy

law (Solove, 2005).45 Each explanation may have some merit, capturing, as it does, some

facet of privacy law. But looking for a single strand that determines the scope of the Fourth

Amendment would amount to a magician’s misdirection.46 New technologies do not

engender a single response from the Fourth Amendment. They destabilize the relationships

between individuals and between individuals and the government during a period of

interpretive flexibility. The same model describes what is happening in Fourth Amendment

law: concepts are jockeying for dominance in privacy law’s own period of interpretive

flexibility. This puts an important responsibility on the judiciary: rather than abdicate its

responsibility, as Supreme Court Justice Samuel Alito47 and Professor Kerr (2004) would

prefer, the judiciary must participate in the social construction of the Fourth Amendment if

norms of the rule of law and justice are to have an impact on constitutional privacy in a

technology driven society.

45 Professor Solove (2005) deserves considerable credit for both identifying the “secrecy paradigm” in Fourth
Amendment jurisprudence and criticizing it. Unlike Professor Kerr’s (2004), Professor Solove’s argument was
merely descriptive; he is far from sanguine about the recurring property and secrecy strands in Fourth
Amendment law. This thesis agrees. But both visions paint an incomplete picture of what the federal
judiciary has been doing and, on a normative basis, what it should be doing.
46 In arguing for a taxonomy of privacy issues, Dan Solove (2002; 2006; 2007) argued that no single “common
denominator” could explain every element of privacy. He argued that privacy, as a series of “family
resemblances,” defies a single common denominator; that is, “privacy is not reducible to a singular essence,”
but rather a “web of related problems that are not connected by a common element, but nevertheless bear
some resemblance to each other” (2007, p. 759).

47 In Quon (2010), Justice Alito stated that a court “risks error by elaborating too fully on the Fourth
Amendment implications of emerging technology before its role in society has become clear” (p. 2630).

The conversation has lacked a robust sociological approach despite the social

elements of both technology and privacy, as I discussed in Chapter 1. I argue that trust, the

social expectation of the behavior of others, is an underappreciated yet powerful recurring

theme in Fourth Amendment jurisprudence as typified by Fourth Amendment applications

to internet-based collection and aggregation of personal information and sensory enhancing

technologies. It is competing along with other doctrines during a period of interpretive

flexibility. But its role should not be underestimated. If it were to become a dominant force for

applying the Fourth Amendment, we could avoid many of the gaps in constitutional

protection—most notably, the third party doctrine—caused by limiting privacy to property

or secrecy.

Section 6.1: The Fourth Amendment

The Fourth Amendment to the U.S. Constitution protects against unreasonable

searches and seizures. It reads:

The right of the people to be secure in their persons, houses, papers, and
effects, against unreasonable searches and seizures, shall not be violated, and no
Warrants shall issue, but upon probable cause, supported by Oath or affirmation,
and particularly describing the place to be searched, and the persons or things to be
seized (U.S. Const. amend. IV).

Although it is beyond the scope of this thesis to recount the entire history, development, and

interpretation of this provision,48 a few points are worth noting. First, the amendment

generally requires that law enforcement obtain warrants based on probable cause in order to

conduct a search. Those warrants must specify the who (the target of the search), what (the

items for which they are searching), where (the location of the search), and why (the basis

48 There are innumerable histories of the Fourth Amendment. In a particularly useful one accessible to a lay
audience, Clancy (2008) shows how the Fourth Amendment developed as a response to the British use of
general warrants and takes readers through a narrative that explains the complex and often contradictory
jurisprudence on the Fourth Amendment from the federal courts.

for probable cause and the purpose of the search). Searches without warrants are

presumptively unreasonable and unlawful. There are, however, a handful of exceptions to

the warrant requirement, including searches incident to arrest, when items are in plain view

of police, when consent is obtained for the search, when emergencies require, and several

others. Fourth Amendment protections reached their zenith during and after the

Warren Court and have, with some notable exceptions, been contracting and growing

more confused ever since (Amar, 1994).

Per the Supreme Court’s decision in Katz v. United States (1967), warrantless searches

violate the Fourth Amendment when they impinge on a subjective expectation of privacy

that society is willing to recognize as reasonable. That formulation comes from Justice

Harlan’s concurrence in Katz, but it has come to dominate Fourth Amendment practice and

interpretation. One implication of this rule is that certain investigative techniques that do not

touch reasonable expectations of privacy are not even considered searches: police need

neither probable cause nor search warrants to conduct them. For example, the Supreme

Court held in California v. Greenwood (1988) that searching garbage left at the curb of a home

is not a Fourth Amendment search; individuals can have no reasonable expectation of

privacy in materials left accessible to the public. Police use of dogs to sniff luggage at

airports does not constitute a search, and therefore does not require a warrant or probable cause,

either (United States v. Place, 1983). There are a number of other types of searches that do not

implicate the Fourth Amendment; some of these will be discussed in this chapter. In all

other cases, if prosecutors at trial seek to introduce evidence obtained from an illegal search,

that evidence may be excluded from trial if it is the subject of a proper motion. This

“exclusionary rule” is not only a reflection of fundamental principles of due process of law;

it is also an incentive for law enforcement to employ warrants when possible.


Because the Fourth Amendment is intended as a bulwark against police overreach in

their investigations, the interpretation and application of the provision are highly reactive to

technologies that allow police to surveil the public. As I will show, new technologies

destabilize the Fourth Amendment. Trust is competing for a place in its re-stabilized world.

Section 6.2: Interpretive Flexibility in Fourth Amendment Jurisprudence

Let us start by describing the dominant forces struggling to describe Fourth

Amendment responses to new technologies. I summarized the state of privacy scholarship in

Chapter 2. But with respect to Fourth Amendment responses to new technologies, two

theories occupy the most column-inches. For scholars like Orin Kerr (2004), “the basic

contours of modern Fourth Amendment doctrine are largely keyed to property law,” thus

suggesting that an expectation of privacy only becomes reasonable “when it is backed by a

right to exclude borrowed from real property law” (p. 809-810). Seen in reverse, the Fourth

Amendment is violated, Kerr would say, when a property-based right to exclude is violated.

He would have the state respond to technology’s destabilizing effects on privacy by

rebuilding privacy protections around the principles of trespass, space, and exclusion

discussed in Chapter 2.

A competing doctrine focuses on secrecy, or the view that the Fourth Amendment’s

protections do not extend to information known to third parties. Before critiquing this

strand running through Fourth Amendment jurisprudence, Dan Solove (2005) called this the

“secrecy paradigm” (p. 42-47, 143-149). Privacy is backed by secrecy, not property, in this

conception; invasions only occur, and warrants are only needed, when the information

sought is held under wraps.49 But the limitations of a property- or secrecy-based vision of

privacy are clear: it shrinks privacy, never really allows adaptation to new technologies, and

leaves out much of what we consider private.

Fourth Amendment privacy law, like the technologies it helps socially define, cannot

be captured so simply. New technologies do not force it to respond; they inspire a period of

interpretive flexibility whereby competing social forces fight for a dominant definition. And,

as was evident from the historical discussion in Chapter 1, the role of the court is to

participate, not to wait on the sidelines. It would be inadequate, too, to suggest that the

process is binodal—a fight between property and secrecy for the soul of the Fourth

Amendment. Remarkably, we have generally ignored the intuitive attractiveness of using a

social science approach to interpret and apply a legal standard based on social expectations

of privacy. As I argued in Chapter 3, privacy is really about trust. We perceive our privacy is

invaded—by others and by the state—when the trust we have in others is breached. In

short, reasonable expectations of privacy should exist when they are backed by trust. But, as

much as I would like it to, trust has not yet won interpretive closure over the Fourth

Amendment. It is, therefore, the role of academics, lawyers, judges, and advocates concerned

about privacy to continue to engage in the fight over its meaning. For the remainder of this

Chapter, I will briefly sketch the competing theories for privacy law’s responses to new

technologies, apply them to Fourth Amendment jurisprudence, and argue that the Fourth

49 There are, of course, other conceptions of privacy. Theories like intimacy and personal autonomy are
undoubtedly at play in the background of privacy law, but all have limitations and none are as obviously present
in Fourth Amendment responses to technologies that allow law enforcement to discover information they
previously could not. Dan Solove’s (2002; 2006) Wittgensteinian notion of privacy as a series of “family
resemblances,” Helen Nissenbaum’s (2004; 2010) powerful idea that privacy is about norms of appropriateness,
Julie Cohen’s thesis on privacy as autonomy (2003), and so many others are part of a broader discussion of
understanding privacy, but beyond the scope of this chapter’s narrow look at the Fourth Amendment and new
technologies. For a more detailed discussion of these theories, see Chapter 2.

Amendment’s response to new technologies is really a complex, multifaceted competition

for social definition that resembles the social construction of technology.

Section 6.2.1: Privacy as Property

Professor Kerr (2004) has argued that Fourth Amendment jurisprudence is bound

up with property law. He suggests that under the Fourth Amendment, privacy-destabilizing

technologies are “property-defeating” technologies and, therefore, federal courts’ response

to those technologies has always been to return to concepts of real property to restore the

balance. Although it is impossible to deny that property principles remain part of the

equation, any attempt to emphasize property over other factors stems from a misreading of

the case law.

Kerr sees property law in Fourth Amendment jurisprudence in myriad ways. The

Fourth Amendment protects people in their homes, and despite the protestation from Katz

(1967) that the guarantee “protects people, not places” (p. 351), innumerable search-and-

seizure cases have at least one rhetorical homage to the sanctity of the home. The Supreme

Court said in Kyllo v. United States (2001) that the right to be secure in one’s home is at the

very “core” of the Fourth Amendment. Renters have reasonable expectations of privacy in

their homes, as do renters of hotel rooms and storage lockers. Their rights extend only so

long as they pay their rent (that is, as long as they have rights to the property). Once they do

not, they not only lose their right to exclude others from that space, they also lose their

privacy interest in it (Kerr, 2004, p. 810). This property-based view even extends to visitors,

for whom a reasonable expectation of privacy depends on whether the homeowner

delegated his or her right to exclude. Kerr also reminds us that property principles determine

expectations of privacy in cars: an owner has it, the guests he allows to drive have it, and a

renter has it as long as his name is on the lease (p. 811-812). Similarly, owners of closed
containers retain rights to them unless they abandon them, following common law property

principles of abandonment (p. 812).

The power of property in Fourth Amendment law also extends to sensory enhancing

technologies and the internet, according to Kerr. Olmstead v. United States (1928) is the easy

case; it is universally understood to reflect a property-based view of the Fourth

Amendment’s guarantee against unreasonable searches and seizures. In that case,

government agents tapped Roy Olmstead’s phone line by installing a device at the top of a

telephone pole on a public street outside his house. Because “[t]here was no entry of the

houses or offices of the defendants,” that is, no violation of Olmstead’s property rights,

there was no search under the Fourth Amendment (p. 464).

For Kerr, property principles also dominate Katz (1967), though perhaps not so

obviously. In Katz, the FBI taped a microphone and recording device to the roof of a public

telephone that their chief suspect used every morning. Investigators turned on the

microphone and recorded the content of calls Katz made and played the recordings at his

gambling trial. The Court concluded that “[o]ne who occupies [a telephone booth], shuts the

door behind him, and pays the toll that permits him to place a call is surely entitled to

assume that the words he utters into the mouthpiece will not be broadcast to the world” (p.

352). Kerr (2004) argues that the act of Katz “paying the toll” was decisive for the Court because

at that point, Katz became a “momentary” renter of the booth backed by traditional

property rights (p. 822-823).

Kerr goes on to argue that recognizing reasonable expectations of privacy based on

property rights remained relatively stable after Katz. United States v. Knotts (1983) and United

States v. Karo (1984), the tracking beacon cases, came out differently, Kerr notes, because

Knotts only involved tracking on public streets, while the tracking device in Karo transmitted a
signal from a private home (Kerr, 2004, p. 831-833). And Kyllo v. United States (2001), where

the Court concluded that the use of a thermal imaging device to penetrate a wall of a home

was a Fourth Amendment search, is, like Karo, a conservative decision that protects the

sanctity of the home. Kerr suggests that the Court found the use of trackers and heat sensors

violative of the Fourth Amendment only when they “defeat[ed] property’s ability to

safeguard traditional privacy protections in the home” (Kerr, 2004, p. 835). Property, it

seems, has been behind the reasonable expectation of privacy all along.

Though Professor Kerr’s privacy-as-property theory is insightful and made an

outsized contribution to Fourth Amendment scholarship,50 his argument for property’s

dominance stems from a misreading of the case law. He ignores the Court’s own language,

takes dicta and holdings out of context, and fits a square peg into an artificial round hole. I

do not doubt that property, as the dominant theory of the Fourth Amendment in the

Olmstead era, left traces in modern jurisprudence. But it is simply not the case that property is

the dominant factor in Fourth Amendment jurisprudence.

It would be inaccurate to state that an individual’s expectation of privacy in a hotel

room is based on his status as the legitimate renter. Kerr (2004) cites the Ninth Circuit’s

decision in United States v. Nerber (2000) for that proposition. But in doing so, he misreads the

case. Nerber involved police informants who drew the defendants to a hotel room they had

rented and that was under video surveillance in order to videotape a drug sale (p. 599). The

court concluded that despite the intrusiveness of video surveillance, the defendants did not

have an expectation of privacy when the informants were in the room (p. 604). Kerr stops

50 The article spawned two direct responses from Dan Solove (2005) and Sherry Colb (2004) and 274 other
citations in law review journals. It has also been cited, though for the article’s institutional competence
argument not discussed here, in 8 judicial opinions.

here, concluding that the rule for privacy in hotel rooms is that an individual loses his

expectations of privacy when, as with rented apartments, he “loses his right to be on the

premises” (Kerr, 2004, p. 810). But that ignores two essential facts: first, the defendants did

not rent the room, the police did (Nerber, 2000, p. 599); second, the court went on to say that

the defendants regained their expectations of privacy when the informants left the room (p.

604). Their status as legitimately (or not) on the premises did not change, so Professor Kerr’s

conception of privacy-as-property cannot explain the switch.

Professor Kerr’s analysis of Katz suffers from similar misreading errors. Kerr (2004)

states that it was Katz’s “‘momentary’ property rights” stemming from his “pay[ing] the toll

that permit[ted] him to place the call” that guaranteed him a right to privacy (p. 823). But

that conclusion ignores the rest of the opinion. First, it privileges one part of a conjunctive

sentence over the others:

One who occupies [a telephone booth], shuts the door behind him, and pays the toll
that permits him to place a call is surely entitled to assume that the words he utters
into the mouthpiece will not be broadcast to the world (Katz, 1967, p. 352).

If paying the toll was the most important element, it is not clear why Justice Stewart felt the need to

mention anything else. Second, there is a lot more to this sentence than a door and a toll, a

conclusion quite evident from the immediate context in which it was written. The holding is

sandwiched between two explicit rejections of a property theory: the court declares irrelevant

the parties’ debate, which dominated their briefs, over whether a phone booth is a

“constitutionally protected area” (p. 351) and also overrules the rationale of Olmstead (p. 352-

353). It also follows immediately after the Court’s clarification of the importance of the

closed door. Rather than having anything to do with property-based concepts of physical

occupation or legitimate presence, as Kerr (2004) suggests, the fact that Katz closed the door

to the phone booth protected him from “the uninvited ear” (Katz, 1967, p. 352). Even
flexible property-based rules took a back seat to the totality of the social context in which

Katz made his call.

Kerr makes the same selective reading mistake when he gets to Knotts and Kyllo. In

Knotts (1983), law enforcement placed a tracking beeper in a large jug of chemicals to find

out where the purchaser, one of Knotts’s co-conspirators, was taking the jug (p. 277).

Because the co-conspirator drove on public roads, he “voluntarily conveyed” to police

where he was going (p. 281). This was not even a search. For Kerr (2004), this was an

example of information obtained without defeating any physical property boundary. That

misses the lion’s share of the Court’s reasoning. Consider the language of the holding: “A

person traveling in an automobile on public thoroughfares has no reasonable expectation of

privacy in his movements from one place to another” (Knotts, 1983, p. 281). The publicness

of the streets was indeed relevant, but the limited information obtained—merely that he was

traveling from Point A to Point B on those streets—was arguably more important to the

Court. We know this because the Court contrasted the Knotts facts with a “twenty-four hour

surveillance … dragnet,” spending three times as many paragraphs on what information the

police obtained and how they obtained it as on the publicness of the information (p. 281-284).

It is also not clear that a property-based conception of the Fourth Amendment could

distinguish between a single incident of tracking from A to B on public streets and a

constant dragnet that also tracked a target on public streets and in public locations. The

latter, though clearly more revealing, is as property-affirming as the former.

In Kyllo (2001), the Court concluded that pointing a heat sensor at a home’s solid

wall from across the street was a Fourth Amendment search because it obtained

“information regarding the interior of the home that could not otherwise have been

obtained without a warrant” (p. 34). Kerr (2004) quotes the Court’s holding—“Where … the
Government uses a device that is not in general public use, to explore details of the home

that would previously have been unknowable without physical intrusion, the surveillance is a

‘search’ and is presumptively unreasonable without a warrant”—but then ignores the

dependent clause at the beginning. He concludes only that “Kyllo measures the intrusiveness

of sense-enhancing devices directed at the home compared to the traditional benchmark of

physical intrusiveness” (Kerr, 2004, p. 835) and says nothing about the Court’s interest in the

ready availability of the device. Once again, even though the social context in which the

search occurred factored heavily into the Court’s decisions, it remained an underappreciated

element of Fourth Amendment jurisprudence.

In the end, Professor Kerr is not wrong to suggest that property-based principles

remained a strand in Fourth Amendment jurisprudence even after Katz. But I resist his

attempt to go as far as he did, claiming that property is an “accurate” and “strong” guide to

Fourth Amendment doctrine (p. 809, 815) and that “an expectation of privacy becomes

‘reasonable’ only when it is backed by a right to exclude borrowed from real property law” (p.

809-810). It is there, but so is much else.

Section 6.2.2: Privacy as Secrecy

As compared to property, secrecy is an equally, if not more, powerful descriptor of

Fourth Amendment doctrine since Katz. This “secrecy paradigm” was Dan Solove’s (2001;

2005) descriptive argument in The Digital Person and elsewhere. “Traditionally,” Solove

(2005) argues, “privacy problems have been understood as invasions into one’s hidden world.

Privacy is about concealment, and it is invaded by watching and by public disclosure of

confidential information” (p. 42). The corollary to this is the “secrecy paradigm”—namely,

that information that is not secret is not private; anything known to a third party could not

be subject to a privacy interest. This, of course, amounts to a wholesale rejection of Simmel’s


(1906) conception of secrecy as that which binds together secret societies. In any event, if

secrecy is the dominant force in Fourth Amendment doctrine, we should find that

expectations of privacy only become reasonable when they are backed by secrecy. To some

extent, this is disturbingly true, despite how damaging and narrowing it is to our privacy.

This thesis discussed the negative effects of a secrecy-based privacy regime in the tort law

context in Chapter 5 and ultimately argued for a reorientation of privacy tort law around

protecting relationships of trust. With respect to constitutional privacy law, privileging

secrecy over other strands ignores a more complex process of Fourth Amendment

jurisprudential development and does violence to fundamental rights.

Before the “secrecy paradigm” took hold, Solove (2005) argues, property was indeed

dominant. The focus was on tangible things: Boyd v. United States (1886) involved law

enforcement’s attempt to subpoena a merchant’s business records, Ex Parte Jackson (1877)

centered on a search of closed letters in the mail, and Union Pacific Railway Company v. Botsford

(1891) concerned whether a woman could be forced to have a physical examination. Alan

Westin (1967) called the theory of privacy embraced by these cases “propertied privacy” (p.

339), a theory that came to the fore in Olmstead. Indeed, property and other physical

incursions continued to be deeply ingrained in Fourth Amendment jurisprudence for some

time after Roy Olmstead started bootlegging.51

Like many scholars, Solove sees Katz as a watershed; but he departs from the

conventional wisdom on what happened next. With Katz, the Court made a clean break from

the limiting property-based doctrine of Olmstead, explicitly overruling it (Katz, 1967, p. 353).

51That physical incursions were so much a part of privacy law was the jumping-off point for Samuel Warren’s
and Louis Brandeis’s article, The Right to Privacy, discussed in Chapters 1 and 2. It also made the article’s ultimate
argument for a “right to be let alone” that was untethered to property principles that much more
groundbreaking.

But Solove’s (2005) insightful argument is that the Court replaced one stifling standard with

another, privileging secrecy rather than property. In Florida v. Riley (1989), for example, the

Court held that there was no reasonable expectation of privacy in a greenhouse because

police could easily fly over and look down. In California v. Greenwood (1988), the Court reached a

similar conclusion about garbage at the curb, which is “readily accessible to animals,

children, scavengers, snoops, and other members of the public” (p. 40). In both cases, the

subjects of the searches—plants and trash—were not secrets; rather, they were handed over to

the public or left in the open for anyone to see.

But arguably the best example of the “secrecy paradigm” in Fourth Amendment

jurisprudence is the third-party doctrine. Emerging from post-Katz cases like United States v.

Miller (1976) and Smith v. Maryland (1979), the third-party doctrine states that there is no

reasonable expectation of privacy in information in the possession of third parties. In Miller,

police subpoenaed banks to obtain the defendant’s financial records. Miller objected, saying

that the Fourth Amendment requires warrants based on probable cause, not subpoenas. The

Court disagreed. Obtaining an individual’s financial information from a bank did not even

implicate the Fourth Amendment: a reasonable expectation of privacy in information

“revealed to a third party” (e.g., a bank) could never exist because that information was

technically no longer private (Miller, 1976, p. 443). Similarly, in Smith, the Court had no

objection to a warrantless use of a pen register, which records the numbers you dial on a

landline telephone, because people “know that they must convey numerical information to

the phone company” (Smith, 1979, p. 743). The information, though highly revealing, was

voluntarily conveyed to a third party and, therefore, was not private.
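The line Smith draws between dialed numbers and the call itself maps onto the now-familiar distinction between metadata and content. A minimal sketch of that distinction, with field names and values invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Call:
    dialed_number: str  # routing metadata: what a pen register records
    duration_sec: int   # also metadata, visible to the phone company
    audio: bytes        # content: what Katz protects; a pen register never sees it

def pen_register_record(call: Call) -> dict:
    """Return only the routing data 'voluntarily conveyed' to the phone
    company under Smith; the conversation itself is excluded."""
    return {"dialed_number": call.dialed_number, "duration_sec": call.duration_sec}

call = Call(dialed_number="555-0182", duration_sec=347, audio=b"<conversation>")
print(pen_register_record(call))  # {'dialed_number': '555-0182', 'duration_sec': 347}
```

The doctrinal irony the text identifies is that even this “mere” metadata, stripped of all content, can be highly revealing in the aggregate.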

Privacy-as-secrecy clearly represents a strand in Fourth Amendment jurisprudence. It

does not suffer the same selective reading problem as Professor Kerr’s privacy-as-property.
In fact, secrecy’s power diminishes property’s: even if property remains as a guide in Fourth

Amendment law, secrecy’s co-presence undermines its dominance. The central problem with

privacy-as-secrecy is its inadequacy as a weapon against modern invasions of privacy

resulting from new technologies. Applying Miller’s and Smith’s so-called third-party doctrine

to today’s networked world raises a number of concerns. Immeasurable amounts of

personal information are in the hands of third parties by virtue of our presence online. The

information is diverse: the phone numbers we dial and the texts we send on our cellphones,

the pictures and documents we store in the cloud, the passwords we keep in our iPhone

keychain, the credit card information we store on Target’s website, the bank information we

entrust to J.P. Morgan Chase, and the internet protocol addresses that identify us online, to

name just a few. The third parties that store this information reach beyond banks and

telephone companies and include internet service providers, social networks, cloud

operators, cell phone companies, employers, health care companies, e-commerce websites,

and anywhere else we input personal information in order to use the web. With all these

pieces of data serving as necessary prerequisites for online participation, anyone with an email account

has eviscerated his or her privacy rights under the “secrecy paradigm” of Miller and Smith.

Continued tolerance of the privacy-as-secrecy regime threatens to erase privacy in the

modern world.

In his book, The Digital Person, Professor Solove (2005) does a remarkable job

describing and criticizing the “secrecy paradigm,” particularly for its inability to deal with the

Kafka-esque problem of governments and private parties collecting digital dossiers on

millions of citizens. But he appears so persuaded by his own argument that secrecy has

infected and eroded the Fourth Amendment that he turns elsewhere for reform: he accepts

that the Fourth Amendment cannot help, so he offers a structural or architectural proposal
that would return more control over information to the individual.52 I would like to take a

step back from the cliff. This thesis accepts Professor Solove’s (2002) challenge to take a

ground-up approach to privacy, understanding privacy from the perspective of the problems

that arise from socially constructed uses of new technologies. But Fourth Amendment

jurisprudence is more than just an act-react paradigm dominated by property- or secrecy-

based reactions; as I showed in Chapter 1, it is a multifaceted process that contributes to the

social construction of technology and, as this chapter is suggesting, is itself the subject of a

struggle for dominance between notions of property, secrecy, and trust. The entirety of the

social context embraced by trust is an underappreciated part of this picture.

Section 6.3: Privacy As Trust and the Fourth Amendment

Applying the privacy-as-trust theory discussed in Chapter 3, I argue that trust not

only intuitively explains how we develop expectations about others’ behavior and thus

speaks directly to the reasonable expectation of privacy test in Katz; it is also embedded in

the Fourth Amendment itself. For example, warrants must “particularly” describe the place

or person to be searched and the items to be seized (Berger v. New York, 1967, p. 55-57). And

even where probable cause may exist, we still require that a “deliberate, impartial” judge be

“interposed between the citizen and the police” (Wong Sun v. United States, 1963, p. 481-482).

We do this for several reasons. Practically, these rules prevent law enforcement agents from

doing whatever they please, whenever they please, and to whomever they please to do it. In

Boyd v. United States (1886), for example, the Supreme Court stated that the British practice of

general warrants, which allowed unlimited investigative leeway, arbitrarily delegated power,

destroyed liberty, and was a principal animator behind the Fourth Amendment (p. 624).

52Professor Solove’s (2005) proposals are based on the Fair Information Practices, a set of recommendations
from a 1973 report of the Department of Health, Education, and Welfare. The proposals include: no secret
record-keeping systems, a means for individuals to find out what is in their record and how it is used, a way for
people to prevent certain misuses of their information, a process for information correction, and ensuring
security (p. 104-105).

But

it is more than that. Warrant requirements are symbolic sources of what Francis Fukuyama

(1993) called “reciprocal recognition,” or the citizen-government trust at the heart of the

modern liberal state (p. 208). They cue to the public that law enforcement is not lawless, that

it is regulated, and regulated by capable and powerful courts that impose demands upon it.

Combine this with my own work that suggests that breaches of trust are at the core of why

we think our privacy has been invaded and it makes sense to start thinking about reasonable

expectations of privacy under the Fourth Amendment in terms of trust.

Section 6.3.1: Applying Privacy-as-Trust to Sensory Enhancing Investigative Technologies

Professor Kerr’s (2004) insightful yet ultimately flawed suggestion is that, despite

rhetorical protestations to the contrary, when the Supreme Court needs to determine

whether information obtained through a warrantless search was already public or still

protected by a reasonable expectation of privacy, the Court looks to principles of authorized

presence, trespass, and exclusion borrowed from real property law. He concludes this by

reading cases in isolation. A more precise reading of the cases shows that alongside any

residual interest in property is the social context of the disclosure. I would like to illustrate

this using the case study of the Fourth Amendment’s application to sensory enhancing

surveillance technologies.

Kerr (2004) says decisions like Knotts, Karo, and Kyllo reflect a property-based view.

He makes much of the fact that the beeper in Knotts only returned tracking information

about Knotts’s travels on public streets, whereas the tracking in Karo included data from a

private residence: the information in Knotts was, therefore, already public and its collection by

law enforcement would not constitute a Fourth Amendment search. Kyllo turned out more

like Karo than Knotts, Kerr argues, because, like the beeper that crossed the private-property

boundary in Karo, the heat sensor in Kyllo defeated the protections traditionally offered by

the solid wall of a private home (p. 831-837). Property law principles would say that what is

inside the home is not public, so a search of the home would fall under the Fourth

Amendment.

But the Supreme Court and the federal appellate courts already see things somewhat

differently. The D.C. Circuit, the Supreme Court, and the other circuits have all

disclaimed any connection between their holdings and property principles. The Court has

denied a property connection to the Fourth Amendment many times, including in Katz

(1967), discussed above, and in California v. Ciraolo (1986), where the Court stated that “[t]hat

the area is within the curtilage does not itself bar all police observation” (p. 212-213). The

Seventh Circuit reaffirmed the principle more recently in United States v. Garcia (2007), where

it stated that “it is irrelevant that there is a trespass” for the purposes of determining whether

a warrant is required for a search (p. 997). To retain fidelity to the Katz reasonable

expectation of privacy test, many courts are replacing property and starting to use the

language and tools of particularized social trust—experience and transference, in

particular—to determine expectations of others’ behavior. Sometimes, the practical

application of these principles is incomplete. But there is considerable evidence that the

social construction of technology and how new technologies factor into our expectations of

others’ behavior are growing pieces of the Fourth Amendment’s period of interpretive

flexibility.

United States v. Maynard (2010), for example, addressed the kind of dragnet

surveillance the Supreme Court said was not at issue in Knotts: extended tracking via a GPS
device. In determining that police’s use of the GPS was indeed a Fourth Amendment search,

the D.C. Circuit did not simply rely on whether a GPS device transmitted information from

a private or public place; rather, it considered the entire social context to determine if the

defendant’s movements were already sufficiently public so as to take the search outside the

orbit of the Fourth Amendment. The court concluded that even a person’s movements on

public streets are “not actually exposed to the public” over the course of a month-long

surveillance (p. 560). It is worth quoting the court in full:

[T]he likelihood a stranger would observe all those movements is not just remote, it
is essentially nil. It is one thing for a passerby to observe or even to follow someone
during a single journey as he goes to the market or returns home from work. It is
another thing entirely for that stranger to pick up the scent again the next day and
the day after that, week in and week out, dogging his prey until he has identified all
the places, people, amusements, and chores that make up that person’s hitherto
private routine (p. 560).

Not hung up on property boundaries, the court focused on the fact that it is unlikely that

anyone would go about her day with the expectation that someone else, given today’s

technology, could track her every movement, even on public streets. Indeed, the test is “not

what another person can physically and may lawfully do” (p. 559), a paraphrasing of

Professor Kerr’s (2004, p. 819) “broader conception of property” ostensibly underlying the

Fourth Amendment. Rather, the Katz test is “what a reasonable person expects another

might actually do” (Maynard, 2010, p. 559), or a manifestation of particular social trust about

the behavior of others. Therefore, privacy-as-trust would look at the entirety of the relevant

facts forming the social context of a disclosure and ask, as the D.C. Circuit did in Maynard, if,

taken together, this socially constructed situation is one in which an individual would expect

another to readily observe him or his behavior. If the answer is yes, the information was

already public and the police did not need a warrant to collect it; if the answer is no, the

information was subject to a reasonable expectation of privacy and should be excluded from

trial if obtained without a valid warrant.

We can discern several examples of the Supreme Court doing more than just relying

on property and attempting to focus on expectations to determine the difference between

public and non-public information. California v. Greenwood (1988), a case involving the search

of trash placed at the curb of a house, had little to do with the sanctity of the home and

much more to do with the “common knowledge” that garbage bags left at the curb are

“readily accessible” to a host of people, animals, and otherwise (p. 40). California v. Ciraolo

(1986) develops the point even further. In that case, police used a helicopter to take an aerial

view of a fenced-in backyard growing marijuana. At a height of 1,000 feet, police were able

to look down and observe marijuana plants. They also took pictures using “a standard 35mm

camera” (p. 209). In finding that the marijuana was already public and, therefore, not the

subject of a reasonable expectation of privacy, the Court’s decision hinged not on the fact

that the garden was inside the boundaries of the home, but on two facts that bring together

the social construction of technology and the expectations of others’ behavior: first, private

and commercial flights were “routine” and, second, all police had to do to see the marijuana

plants was look down from an easily accessible aerial perch (p. 213-214).

A similar respect for some pieces of the social context informing our expectations of

how others behave is evident in Dow Chemical v. United States (1986) and in Kyllo (2001). In

Dow, the EPA did not seek an administrative warrant to inspect several power plants on one

of Dow’s large facilities. Instead, inspectors hired a commercial aerial photographer and

asked him to take a few pictures looking down onto the plant from above (Dow, 1986, p.

229). Dow objected, arguing, among other things, that it manifested an expectation of

privacy by surrounding the plant with enclosures and security and that its expectations were
reasonable because another court had granted the company trade secret protections against

competitors (Dow Chemical v. United States, 1985, p. 1367). Essentially, Dow was arguing that

its plant was not public merely because it was exposed to the air. The Court disagreed,

finding no Fourth Amendment search. After paying rhetorical homage to the importance of

private homes in Fourth Amendment jurisprudence, the Court relies on the fact that the

“EPA was not employing some unique sensory device that, for example, could penetrate the

walls of buildings and record conversations.” Rather, the agency used “a conventional, albeit

precise, commercial camera commonly used in mapmaking” (Dow, 1986, p. 238). The ready

availability of the technology and the commonness of aerial photography and mapmaking,

the Court suggests, should be taken into account when Dow decides where to put its

material, machines, and facilities. And, in Kyllo, Justice Scalia makes much of the fact that the

thermal imaging device was “not in general public use” (Kyllo, 2001, p. 34), an essential

caveat to the holding that Professor Kerr glosses over.

Ready availability is important from a social construction perspective because it

reflects actual uses of the technology in society and because more commonly used

technologies are more significant and powerful factors in our decision-making. As discussed

in Chapter 3, many scholars have shown that more data points about how others act are

better predictors of their future behavior—namely, better tools for trust. The fact that

something happens often cues for us that it may happen again and, as such, should be

factored into our future decision-making. Admittedly, to privilege ready availability over the

totality of the social context falls prey to the same overinclusive error as privacy-as-property

and privacy-as-secrecy. At a minimum, though, ready availability of technology is essential to

understanding how we predict others’ behavior; it is a necessary piece of defining the Katz

reasonable expectation of privacy standard and necessary to determine whether what we


disclose is sufficiently public to make a warrant unnecessary. But it is not the only piece;

after all, social construction is a multifaceted process.
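The claim that more data points about how others act make for better predictions of their behavior can be put in simple frequentist terms. A toy simulation, with the event, its rate, and the observation counts all invented for illustration:

```python
import random

random.seed(0)  # deterministic toy simulation

def estimated_rate(true_rate: float, observed_days: int) -> float:
    """Empirical frequency of an event (say, a plane passing overhead)
    after a given number of observed days."""
    sightings = sum(random.random() < true_rate for _ in range(observed_days))
    return sightings / observed_days

# Hypothetical: overflights actually occur on 70% of days. With more
# observations, the estimate converges on the true rate, which is why a
# commonly used technology becomes a reliable input to our expectations.
for days in (10, 100, 10_000):
    print(days, round(estimated_rate(0.7, days), 2))
```

The sketch only illustrates the statistical intuition: frequent, observable uses of a technology supply the data points from which expectations, and thus trust, are built.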

In fact, privacy-as-trust calls for a totality of the circumstances test that sees as

relevant the facts surrounding the disclosure that speak to how individuals develop

expectations of others’ behavior. For example, in deciding that extended GPS surveillance

constituted a Fourth Amendment search, the Maynard (2010) Court considered several

factors—ready availability (p. 559-560), the extent of the information obtained (p. 560-563),

among others—to determine if an individual would factor the likelihood of this kind of

search into his decision-making. Indeed, the court in Maynard explicitly acknowledged the

role of trust in determining if a warrant was required in the first place: “In considering

whether something is ‘exposed’ to the public as that term was used in Katz we ask not what

another person can physically and may lawfully do but rather what a reasonable person

expects another might actually do” (p. 559).

The court goes further and recognizes that individuals operate with the trust that

others—“short perhaps of [a] spouse” (p. 563)—will not be privy to the total tonnage of

personal information divulged during an extended GPS surveillance. In other words, an

individual could never be sufficiently exposed to the public to obviate law enforcement’s

warrant requirement for a GPS search because she reasonably trusts that no one could be

aware of that information during the ordinary course of life. This comes from a version of

the “mosaic theory” common in national security cases.53 In this context, it refers to the fact

that the totality of information gleaned over the course of extended and constant

surveillance is, as Julie Cohen (2000) observed, more than just the sum of its individual

constituent parts. Dan Solove (2005) called this the “aggregation effect,” and it reflects how

principles of trust operate to determine reasonable expectations of privacy.

53In CIA v. Sims (1985), the Supreme Court stated that “bits and pieces of data ‘may aid in piecing together bits
of other information even when the individual piece is not of obvious importance in itself.’ Thus, ‘[w]hat may
seem trivial to the uninformed, may appear of great moment to one who has a broad view of the scene and
may put the questioned item of information in its proper context’” (p. 178).

Consider the enormity of the information revealed in a GPS search, for example, as

described by the D.C. Circuit in Maynard (2010):

Repeated visits to a church, a gym, a bar, or a bookie tell a story not told by any
single visit, as does one’s not visiting any of these places over the course of a month.
The sequence of a person’s movements can reveal still more; a single trip to a
gynecologist’s office tells little about a woman, but that trip followed a few weeks
later by a visit to a baby supply store tells a different story. A person who knows all
of another’s travels can deduce whether he is a weekly church goer, a heavy drinker,
a regular at the gym, an unfaithful husband, an outpatient receiving medical
treatment, an associate of particular individuals or political groups—and not just one
such fact about a person, but all such facts (p. 562-563).

The court goes on to say that it is simply inconceivable that individuals expect that such

detailed information would be available to others. The reason is trust. The analysis, which

echoes the privacy theories of the sociologists Georg Simmel (1906) and Erving Goffman

(1959), implies that privacy helps individuals construct different personae and shows that

privacy-defeating technologies like GPS and data tracking erode the social norms bound up

with what we expect others to know about us. Like Simmel (1906), who argued that we

conceive of others based on conclusions that are true for us (but may not be true for others)

(p. 444-445), Goffman (1959) argued that the presentation of who we are is contextual,

depending upon time, place, and audience. Our personae can change from one “dramatic

effect” to the other depending on what we are doing, in front of whom, and for what

purpose (p. 28).
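The aggregation effect the Maynard court describes can be made concrete in code: each location ping is innocuous on its own, but a month of pings yields a profile. A toy sketch, in which all places, dates, and the visit threshold are invented for illustration:

```python
from collections import Counter

# Hypothetical month of location pings (day, place). Any single ping is as
# innocuous as one sighting by a passerby on a public street.
pings = (
    [(day, "gym") for day in range(1, 31, 2)]       # every other day
    + [(day, "church") for day in range(7, 31, 7)]  # weekly
    + [(15, "clinic"), (29, "clinic")]              # biweekly
)

# Aggregation: only the accumulated record, not any individual ping,
# exposes the subject's routine.
visits = Counter(place for _, place in pings)
routine = {place: n for place, n in visits.items() if n >= 4}

print(routine)  # {'gym': 15, 'church': 4}: a gym regular and weekly churchgoer
```

The privacy loss lives in `routine`, a pattern no single entry in `pings` reveals, which is why the courts treat the aggregate as qualitatively different from its parts.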

The D.C. Circuit is not alone in recognizing that our constitutional concerns with

this kind of search are based on the fact that it allows the searcher to know much more about

us than we would expect. The Seventh, Eighth, and Ninth Circuits have all recognized that
“total surveillance,” even entirely public total surveillance, would at least implicate Fourth

Amendment concerns. These courts were suggesting that our ordinary movements on public

streets—admittedly a form of disclosure of information to others—are not accompanied by

expectations that the aggregation of those movements could ever be known by another.

Therefore, using public streets and thereby disclosing our locations could not relieve police

of their obligation to obtain a warrant (United States v. Garcia (2007); United States v. Marquez

(2010); United States v. Pineda-Moreno (2010)). The Eastern District of New York used the trust

and expectations rubric when it said the same thing with respect to cellphone site data (In re

U.S. for an Order Authorizing the Release of Historical Cell-Site Data (2011)). And other courts

have recognized the importance of similar dramatic intrusions in other contexts.54 Together

this suggests that trust, our expectations about the behavior of others, is at least a competing

force in privacy law’s period of interpretive flexibility.

Indeed, neither privacy-as-property nor privacy-as-secrecy can adequately account

for the distinction that all federal courts have made between simple A-to-B tracking and total

surveillance. As the D.C. Circuit noted in Maynard, even total surveillance that is restricted to

public streets can reveal personal information of a qualitatively different kind that raises

Fourth Amendment concerns that the point-to-point search in Knotts did not. Nor can

Professor Kerr’s (2004) broader conception of privacy-as-property, which captures

legitimate presence rules under the property umbrella, comprehend the difference. For Kerr,

there must be some element to the surveillance that defeats property or an individual’s

legitimate, albeit “momentary,” control over a space (p. 819, 822). That element is not

essential to total surveillance; constant GPS tracking can invade personal privacy so as to

“create a detailed profile” from even purely public information (Maynard, 2010, p. 562). Total

surveillance implicates broader concerns than defeating property, and only with a broad

conception of privacy can the Fourth Amendment adequately participate in the social

construction of intrusive technologies like GPS searches.

54There are many other cases with similar holdings. For example, in Galella v. Onassis (1972), an intrusion upon
seclusion case, a paparazzo’s “endless snooping constitute[d] tortious invasion of privacy” because he
“insinuated himself into the very fabric of Mrs. Onassis’ life” (p. 227-38). And in New York v. Weaver (2009), the
New York Court of Appeals found that extended GPS surveillance “yields … a highly detailed profile, not
simply of where we go, but by easy inference, of our associations—political, religious, amicable and amorous,
to name only a few—and of the pattern of our professional and avocational pursuits” (p. 1199-1200).

Section 6.3.2: Applying Privacy-as-Trust to Internet-Based Collection and Aggregation of Personal Data

Professor Solove (2005) criticized Professor Kerr’s (2004) privacy-as-property

argument as insufficient to address modern privacy challenges and blind to the federal

judiciary’s post-Katz jurisprudence that created a “secrecy paradigm” rather than a property-

based rule. But Solove’s theory only tells part of the story. He suggests, for example, that

cases like Florida v. Riley, California v. Ciraolo, Dow Chemical v. United States, and United States v.

Knotts are all proofs of the secrecy paradigm because they ostensibly stand for the proposition

that anything exposed to the public, even briefly and minimally, is not protected

by the Fourth Amendment (p. 751-752). But, as we have discussed, the lesson of those cases

is far more complicated, focusing not on secrecy alone, but on our expectations of what

other people could see and know. Ciraolo and Dow, not to mention Knotts and Kyllo, pay at least equal, if not greater, attention to the ready availability of the sensory-enhancing

technologies law enforcement used to search. Therefore, these cases do little to show that

secrecy is a dominant paradigm in Fourth Amendment jurisprudence. Instead, they, like

Maynard and Historical Cell-Site Data, show that there are competing theories still jockeying for

position in a period of interpretive flexibility about the Fourth Amendment and new

technologies.

Trust is also competing for a place in the interpretation of the Fourth Amendment as

it relates to internet searches. And it is reaching the federal appellate courts. United States v.

Warshak (2010), for example, concerned the government's warrantless acquisition of approximately 27,000 emails sent by Steven Warshak, the president of the company that sold Enzyte, in the course of planning the product's misleading ad campaign and business model.55 The Sixth Circuit found that Warshak had a

reasonable expectation of privacy in the content of his emails by analogizing email contents

to the content of a letter in the mail: the police can neither intercept and read sealed letters in

the mail, nor can they intercept and read emails despite the fact that both go through the

hands of third parties. “Put another way,” the court stated, “trusting a letter to an

intermediary does not necessarily defeat a reasonable expectation that the letter will remain

private" (p. 285). Like the court in Maynard, which held that the mere possibility that someone could track a person's movements on public streets did not defeat an expectation of privacy, the court in Warshak was concerned with our expectations of how other people would actually behave. The "mere ability of a

third party intermediary to access the contents” is not enough (p. 286). Nor is a right of

access (p. 287). In other words, concepts of secrecy and property are not sufficient. What

does matter is our expectation about how other people would actually behave and, given the totality of the social context—the similarity between email and regular mail and the sheer number of emails we send56—it seems evident that we send emails with the expectation that their content will not be divulged.

55 The commercials for Enzyte, which purported to increase the size of a man's erection, featured a man with an exaggerated smile that was presumably the result of using Enzyte (Anderson, 2013).
56 In Warshak, the court stated that "[s]ince the advent of email, the telephone call and the letter have waned in importance, and an explosion of Internet-based communication has taken place. People are now able to send sensitive and intimate information, instantaneously, to friends, family, and colleagues half a world away. Lovers exchange sweet nothings, and businessmen swap ambitious plans, all with the click of a mouse button. Commerce has also taken hold in email. Online purchases are often documented in email accounts, and email is frequently used to remind patients and clients of imminent appointments. In short, 'account' is an apt word for
But Professor Solove (2004; 2005) is absolutely correct when he finds the secrecy

paradigm at the core of the third-party doctrine. This is an example where closure appears to

have settled in a corner of Fourth Amendment jurisprudence. New technologies, however,

should force reconsideration during a period of interpretive flexibility of the third-party

doctrine. The problem with the doctrine, then, is that it ossifies Fourth Amendment

jurisprudence in closure when new technologies continually, and appropriately, create

destabilization.

As discussed above, the third-party doctrine holds that there is no reasonable

expectation of privacy in information known or possessed by third parties. As Professor

Solove (2004) has argued, this makes the Fourth Amendment almost entirely unhelpful in

many modern day searches based on new technologies. Our movements on public streets are

not secret; we therefore can be tracked without a warrant, like in Knotts (1983). More

menacingly, it neuters the Fourth Amendment when law enforcement wants access to the

increasingly mammoth troves of personal data collected in digital dossiers of internet

intermediaries (Solove, 2004). That much is clear. For Professor Solove, the doctrine violates

principles of privacy because it is Kafkaesque: it hands to private intermediaries and the

government complete control over our data (p. 48, 51). Loss of control and helplessness in

the face of an opaque system does indeed represent a problem for personal privacy.

Professor Solove wrote several articles and published The Digital Person in order to fight

back against the ossifying third-party doctrine given the problem of digital dossiers.

56 (cont.) the conglomeration of stored messages that comprises an email account, as it provides an account of its owner's life. By obtaining access to someone's email, government agents gain the ability to peer deeply into his activities. Much hinges, therefore, on whether the government is permitted to request that a commercial ISP turn over the contents of a subscriber's emails without triggering the machinery of the Fourth Amendment" (p. 284).

Privacy-as-trust questions the legitimacy of the third-party doctrine; Fourth

Amendment jurisprudence based on sociological principles of trust would overturn it. But it

is unsurprising that the secrecy-based third-party doctrine can exist alongside the more trust-

based aggregation theory and Katz jurisprudence. The Fourth Amendment is experiencing a

period of interpretive flexibility, with different doctrines jockeying for dominance. Trust has

yet to win. In this way, interpreting Fourth Amendment jurisprudence as a social construct is a call to action for advocates, judges, and academics who are concerned about personal privacy in a world where third parties possess terabytes of data about us.

CHAPTER SEVEN:
The Effects: Public Versus Private in Intellectual Property
So far, I have used privacy-as-trust to define the boundary between public and

private in two privacy law contexts that both involve limited disclosures. In the tort context,

I have argued that privacy-as-trust would better protect personal privacy in a networked

world by replacing arbitrary bright-line rules with a tort that protects relationships of trust

and confidence. The trust-based tort of breach of confidentiality accepts that individuals may

share personal information with others and yet still retain privacy interests in that

information. In the Fourth Amendment context, privacy-as-trust would also consider

reasonable those expectations of privacy that emerge from trustworthiness cues evident

from the entirety of the social context and, as such, would serve as a coherent doctrinal basis for

rejecting the third-party doctrine. Because it extinguishes all expectations of privacy upon a

single disclosure, that doctrine violates the same laws of social science as any tort rule that

denies privacy interests after limited disclosures.

These cases raise the same fundamental, first principles question of privacy law:

where—and on what basis—do we draw the line between public and private? That question

has been at the center of this thesis on the law of privacy, but the public-private divide is not

the exclusive realm of those writing about privacy. Consider the following three narratives

featuring three different legal weapons wielded in three different jurisdictions to address

three different wrongs.

A jilted ex-boyfriend posts a nude “selfie” of his old girlfriend on the internet. She

feels violated, exploited, and embarrassed, and sues him in state court under the tort of

public disclosure of private facts. He claims she took the picture and sent it to him

voluntarily (Franks, 2011). This scenario would implicate privacy tort law because a private

party allegedly wronged another private party by invading her privacy. It would also

implicate the problem of limited disclosures because the victim took the picture herself and

freely and voluntarily gave the tortfeasor her picture.

Elsewhere, a man is on trial for several drug crimes. The prosecution’s evidence

against him comes from a GPS device that gave police round-the-clock surveillance of his

movements on public streets over twenty-eight days. They had no warrant. The accused

seeks to exclude the evidence as a violation of the Fourth Amendment’s guarantee against

unreasonable searches and seizures (United States v. Maynard, 2010). This would implicate the

Fourth Amendment because an agent of the state—in this case, law enforcement—is

allegedly invading the privacy of a citizen protected by the Constitution. This is also a case of

limited disclosure because the movements surveilled by police using the GPS were on public

streets and, thus, observable.

And in another jurisdiction, the validity of an inventor’s patent for a widget is being

challenged because she showed several prototypes to a handful of colleagues and friends

more than one year before applying for the patent (Beachcombers Int’l v. WildeWood Creative

Products, 1994). This is a question for the Patent Act, recently amended by the America

Invents Act (AIA), which governs grants of limited monopolies to inventors of new and

useful devices and processes. It is also a question of limited disclosure: the patentee used her

invention in a limited way before publicizing her invention to the world through the patent

process.

It should be clear, then, that intellectual property scholars have an interest in this

fight, as well. Both patent and trade secret law have provisions that respond to limited first-

person disclosures. Section 102 of the Patent Act states that an invention “in public use” or

“disclose[d]” or “otherwise available to the public” for more than one year prior to filing an
application for the patent will not be considered novel and, thus, not eligible for a patent.

And Section 1 of the Uniform Trade Secrets Act (UTSA), codified as law in 47 states and the

District of Columbia, requires trade secrets be “not generally known” and the subject of

reasonable efforts to keep them secret. To determine what is public use under the Patent Act

and what is not generally known in trade secret law, it is crucial that we find the line between

public and private.

There has been no uniform approach to the problem. As Mark Lemley (2015) has

noted, “public” in the Patent Act has traditionally seemed to mean merely “not secret” (p.

10). The case law also suggests that the control the inventor retains over any use of her

invention prior to patenting will be determinative of the “public use” bar. In this way, one of

the dominant conventional theories of privacy discussed in Chapter 2—privacy as the right

to control what others know about you—is reflected in patent law’s novelty jurisprudence.

This theory is an affirmative right that embraces principles of autonomy and choice. It

locates the privacy right within the individual and links the private and public worlds with

retention and loss of control over information, respectively.

As means of determining the extent of personal privacy rights, a doctrine based on

control and secrecy is problematic. As I argued in Chapter 2, its bright-line rule extinguishes

our privacy interests when any third party knows something about us, an increasingly

common phenomenon in a networked world (Solove, 2004). In so doing, it allows others,

including law enforcement, to encroach on spheres we would normally consider private. I

also argued that privacy-as-control is willfully blind to the common practice of modern

social interaction, much of which takes place online. Similarly, as a means of determining the

difference between public and non-public uses under the Patent Act, privacy-as-control

frustrates the goals of that law, discourages experimentation, and has negative effects far
beyond patent law. It also privileges wealthy and corporate inventors over other innovators

by relying too heavily on executed confidentiality agreements and, as a result, disrespects the

norms of how a very different cluster of inventors—individual entrepreneurs not backed by

corporate interests—interacts with others. In short, the “public use” bar tends to ignore the

unique social context and the relationships between social actors among non-established

inventors. That is unequal, unfortunate, and a mistake.

Private contexts are not hidden, controlled, or purely autonomous contexts; privacy

is neither a simply liberal nor individual value. Private contexts are contexts of trust, and

because we share when we trust, the line between public and private should be defined on

social terms from the totality of the circumstances. To ignore the social relationships that

allowed disclosures to happen within expectations of confidentiality would turn the social

inventor’s life on its head.

Trade secret law looks at the public-private question differently from patent law:

instead of relying on the individual’s right to control and exclude, trade secret law to a great

extent relies on network-based social relationships of trust to determine the line between

public and private. A good example of this is the law related to limited disclosures, or the

doctrine that trade secret protection can still extend to business information known to a few

select others. It recognizes that privacy depends on context and, in particular, the trusting

social relationship between the owner and recipient of confidential business information.

I argue that individual rights-based concepts of privacy have dominated judicial

interpretation of the “public use” bar in patent law. As means of defining the boundary

between public and private, those conceptualizations are ill equipped to serve the values

embraced by those laws. To do that, I propose we turn away from rights-based theories of

privacy and look to the relationship between the parties to determine public uses and
performances: private contexts are contexts of trust, identified from norms of social

interaction evident from the totality of the circumstances of a given relationship.

Section 7.1: The “Public Use” Bar and Denial of Social Relationships

To get a patent, your invention must be novel. To be novel, it cannot have been in

public use, disclosed, or otherwise available to the public more than one year prior to

patenting (Patent Act, § 102(a)). If, as Lemley (2015) and Merges (2012) have argued, the

AIA amendments do not change the meaning of the novelty requirement, patent law’s

publicity triggers will continue to be based on either a secrecy paradigm or, in the case of the

“public use” bar, on the extent to which an inventor retains control over her invention

during any pre-patent use. For this thesis, I would like to focus on the latter and argue that

the “public use” bar, based as it is on privacy as control, discourages experimentation,

privileges wealthy inventors, and has the deleterious effect of legitimizing a doctrine that

crowds out personal privacy rights.

A non-public use occurs when the inventor has a “legitimate expectation of privacy

and of confidentiality,” which depends on an inventor’s retention of control of her invention

(Dey, L.P. v. Sunovion Pharmaceuticals, 2013, p. 1356). The Federal Circuit was explicit about this in

its decision in Moleculon Research Corp. v. CBS (1986): because the court agreed with the

district court’s findings that the puzzle’s inventor had at all times “retained control” over the

device, a legal conclusion of non-public use necessarily followed (p. 1266). As Table 7.1.1

shows, in a random sample of 102(b) "public use" cases, a finding of control is inversely related to a conclusion of public use.

Table 7.1.1
The Relationship Between Inventor Control and "Public Use"

Case                                                                    Control?  Public Use?
Moleculon Research Corp. v. CBS, Inc. (1986)                            Yes       No
Baxter International, Inc. v. Cobe Laboratories, Inc. (1996)            No        Yes
Beachcombers Int'l v. WildeWood Creative Products (1994)                No        Yes
American Seating Co. v. USSC Group, Inc. (2008)                         Yes       No
Dey, L.P. v. Sunovion Pharmaceuticals, Inc. (2013)                      Yes       No
Lough v. Brunswick Corp. (1996)                                         No        Yes
MIT v. Harman International Industries, Inc. (2008)                     No        Yes
Minnesota Mining & Manufacturing Co. v. Appleton Papers, Inc. (1999)    No        Yes
Netscape Communications Corp. v. Konrad (2002)                          No        Yes
Bernhardt LLC v. Collezione Europa USA, Inc. (2004)                     Yes       No
Pronova Biopharma Norge v. Teva Pharmaceuticals (2013)                  No        Yes

The real question, then, is what the Federal Circuit means by “control.” One

tempting theory is that courts find sufficient control where inventors employ confidentiality

agreements before any pre-patenting use. Table 7.1.2 shows an imperfect positive correlation between control and the execution of confidentiality agreements.57

Table 7.1.2
The Impact of Confidentiality Agreements on Findings of "Public Use"

Case                                                           Confidentiality Agreement?  Control?  Public Use?
Moleculon Research Corp. v. CBS, Inc. (1986)                   No                          Yes       No
Baxter International, Inc. v. Cobe Laboratories, Inc. (1996)   No                          No        Yes
Beachcombers Int'l v. WildeWood Creative Products (1994)       No                          No        Yes
American Seating Co. v. USSC Group, Inc. (2008)                No                          Yes       No
Dey, L.P. v. Sunovion Pharmaceuticals, Inc. (2013)             Yes                         Yes       No
Lough v. Brunswick Corp. (1996)                                No                          No        Yes
MIT v. Harman International Industries, Inc. (2008)            No                          No        Yes
Minnesota Mining & Mfr. Co. v. Appleton Papers, Inc. (1999)    No                          No        Yes
Netscape Communications Corp. v. Konrad (2002)                 No                          No        Yes
Bernhardt LLC v. Collezione Europa USA, Inc. (2004)            No                          Yes       No
Pronova Biopharma Norge v. Teva Pharmaceuticals (2013)         No                          No        Yes

57 This hypothesized correlation is based on only 9 cases, one of which is unreported and two of which are district court cases. Though insufficient to support statistical conclusions, these numbers do suggest an avenue for further research.

This imperfect correlation should give us pause, but the data nevertheless remind us of the

continued importance of confidentiality agreements despite the Federal Circuit's insistence

that a formal agreement is not necessary (Moleculon, 1986, p. 1266). What’s more, if we go

one step deeper to try to explain why confidentiality agreements are irrelevant for some but

important for others, we find a troubling trend: individual entrepreneurs routinely lose their

“public use” cases, while corporate inventors, even without confidentiality agreements, tend

to win.
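The strength of the relationships reported in Tables 7.1.1 and 7.1.2 can be quantified. The following sketch, offered only as an illustration and not as part of the original case analysis, computes the phi coefficient (Pearson's r for two binary variables) from the codings in the two tables; case names are abbreviated:

```python
# Illustrative only: phi coefficients for the case codings in Tables
# 7.1.1 and 7.1.2. The codings come from the tables; the calculation
# itself is an added sketch, not part of the original analysis.
from math import sqrt

# (confidentiality agreement?, control?, public use?), coded 1 = Yes, 0 = No
cases = {
    "Moleculon (1986)":        (0, 1, 0),
    "Baxter (1996)":           (0, 0, 1),
    "Beachcombers (1994)":     (0, 0, 1),
    "American Seating (2008)": (0, 1, 0),
    "Dey (2013)":              (1, 1, 0),
    "Lough (1996)":            (0, 0, 1),
    "MIT (2008)":              (0, 0, 1),
    "Minnesota Mining (1999)": (0, 0, 1),
    "Netscape (2002)":         (0, 0, 1),
    "Bernhardt (2004)":        (0, 1, 0),
    "Pronova (2013)":          (0, 0, 1),
}

def phi(xs, ys):
    """Phi coefficient for two binary variables (2x2 contingency table)."""
    a = sum(1 for x, y in zip(xs, ys) if x and y)          # both Yes
    b = sum(1 for x, y in zip(xs, ys) if x and not y)      # x Yes, y No
    c = sum(1 for x, y in zip(xs, ys) if not x and y)      # x No, y Yes
    d = sum(1 for x, y in zip(xs, ys) if not x and not y)  # both No
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

agreement, control, public_use = zip(*cases.values())

print(phi(control, public_use))           # -1.0: perfectly inverse in this sample
print(round(phi(agreement, control), 2))  # 0.42: the "imperfect" positive correlation
```

On these codings, control and a conclusion of public use are perfectly inversely related (phi = -1.0), while the relationship between confidentiality agreements and control is positive but weak (phi ≈ 0.42), consistent with the "imperfect positive correlation" described above and with the small-sample caveat in note 57.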

In Baxter (1996), for example, the Federal Circuit found that the use of a centrifuge

by an NIH researcher in his personal laboratory constituted disqualifying public use because

he maintained no control over the device. The most important factor leaning against control

seemed to be the fact that the inventor demonstrated the technology to colleagues without a

confidentiality agreement or any indication that it should be kept secret (p. 1058-1059). In

Lough (1996), a boat repair man invented a corrosion-proof seal for stern drives that he

tested on boats belonging to several of his friends and colleagues. The court determined

that the use was public because the inventor lacked any control over the seals: he asked for

no follow up, did not supervise their use, and never asked his friends to sign a confidentiality

agreement (p. 1116, 1120-1121). And in MIT (2008), student inventors used their friends to

test a car navigation system, but never required confidentiality agreements from them or

corporate sponsors (p. 303-304). In each of these cases, the lack of a confidentiality

agreement between the parties, though ostensibly only one of many factors to consider, was

always among the most important. Notably, the inventors in Lough and MIT could be

clustered together as individual entrepreneurs rather than corporate inventors.

The narrative in Beachcombers (1994) makes the point even more clear. As a lone

innovator, the designer and developer of an improved kaleidoscope in Beachcombers wanted to

solicit feedback on the design from her friends and colleagues. She invited twenty to thirty of

them over to her house for a demonstration and, without asking them to sign a

confidentiality agreement, allowed her guests to handle and use the invention (p. 1159-1160).

Without the confidentiality agreement, the use was considered sufficiently public because the

developer could not control what her guests did with the kaleidoscope at the party or with the information they learned after they left. That conclusion, however,

seems like an exceedingly narrow reliance on a connection between control and the

execution of formal confidentiality agreements. The guests were "friends and colleagues," the party took place at a private home, and the evidence suggested that the goal

of any demonstration was feedback, not to drum up a future market. The only thing missing

was a signed and executed confidentiality agreement, an easy hook for adjudication.

Closer examination, however, reveals that the relationship between control and

confidentiality is neither so simple nor formalistic. Table 7.1.2 shows that not all cases

without executed confidentiality agreements ended in determinations of pre-patenting public

use. We cannot, therefore, fall back on that hypothesis. Even more to the point, the cases

show that evidence of confidentiality is usually analyzed after evidence of use, distribution,

and indicia of more practical control over the invention, suggesting that lack of control will

make any use public unless such use is protected by confidentiality. A more faithful

expression of the holdings in these cases highlights the two-step use-subject-to-

confidentiality analysis. In Baxter (1996), for example, the court held that the (1)

demonstration and use of an invention in an environment where others freely come and go

(2) without expectations of confidentiality constitute public use under 102(b). In Pronova
(2013), (1) sending drug samples to a researcher (2) without restriction on their use was also

considered public use. And in Minnesota Mining (1999), (1) sending sample forms to

employees (2) without any indicia of confidentiality led to a similar conclusion.

And although some cases appear to require a formal confidentiality agreement,

others do not. In Bernhardt (2004), for example, an invite-only furniture show did not

sufficiently publicize the company’s designs even though none of the participants signed

confidentiality agreements because they were not necessary in the context of the industry

and the show. Evidence was presented at trial that it had always been industry custom to

keep the new innovations at this show confidential, that the event was invite only, that

attendees were escorted through the show, and that they were prohibited from taking notes

(p. 1381). Nor was the lack of formal confidentiality agreements fatal in American Seating

(2008, p. 1268), which involved another corporate inventor. As the court stated in Moleculon

(1986, p. 1266), a formal confidentiality agreement was never supposed to be necessary.

Taken as a whole, then, the case law appears to suggest that a court will find “public

use” based on even small or singular incidents of use or demonstration when there is no

attendant evidence of expectations of confidentiality. But courts appear to require

confidentiality agreements for individual entrepreneurs even though they are willing to rely

on industry norms of confidentiality for corporate inventors. This is an unequal

implementation of an assumption of risk standard, with a bias against the lone innovator: if

she shows or demonstrates her invention to others or uses it with others without sufficient

indicia of expectations of confidentiality, she cannot rely on social norms of her network;

rather, she has assumed the risk that others will use, talk about, and build her invention

themselves. The Supreme Court suggested as much in the 1881 case of Egbert v. Lippmann.

There, the Court found that a man who gave an improved corset to his “intimate friend” fell
prey to the public use bar: “If an inventor, having made his device, gives or sells it to

another, to be used by the donee or vendee, without limitation or restriction, or injunction

of secrecy, and it is so used, such use is public, even though the use and knowledge of the

use may be confined to one person" (p. 335-336). All of the cases listed in Table 7.1.2 make

sense under this biased assumption of risk framework. For example, by demonstrating her

kaleidoscope and allowing twenty to thirty friends and colleagues to use it without a clear

expectation of confidentiality, the solo designer in Beachcombers (1994) assumed the risk that

her reduced-to-practice ideas would get out. The same was true of the research scientist in

Baxter (1996), who let various people come through his laboratory unrestricted. In Bernhardt and

American Seating, however, the corporate inventors did not employ confidentiality agreements

and yet the courts in those cases were willing to respect industry norms of confidentiality to

conclude that control over the inventions had been retained.

The liberal, rights-based influences in the “public use” bar, and its biased

implementation, should now seem clear. The assumption of risk doctrine is a creature of tort

law, holding that a plaintiff cannot recover for an injury from a risk created by another if the

plaintiff (1) possessed knowledge of the risk and (2) had the free choice to avoid that risk.

When someone exercises her own volition and chooses to encounter a risk—say, by

exposing her invention to others before applying for a patent—she assumes the risk that her

behavior could lead to injury—namely, someone else might take her idea and reduce it to

practice before she can control the market. I discussed the doctrine in Chapters 2 and 6, as

well. Justice Cardozo explained the doctrine best in Murphy v. Steeplechase Amusement Co. (1929, p.

174): “One who takes part in [a potentially dangerous activity] accepts the dangers that

inhere in it so far as they are obvious and necessary, just as a fencer accepts the risk of a

thrust from his antagonist or a spectator at a ball game the chance of contact with the ball.”
Free choice, then, is an essential element of the assumption of risk logic. As discussed in

Chapter 2, it is also at the core of liberal theory.

We see these influences in “public use” jurisprudence in myriad ways, but most

notably through the inverse relationship between a finding of inventor control and a

conclusion of public use. Any time the court thinks an inventor has, by her actions, lost

control of the invention or of the context of disclosure—using and demonstrating a

centrifuge while declining to restrict entry to a lab (Baxter), letting party guests use and touch

the kaleidoscope without signing confidentiality agreements (Beachcombers), installing devices

onto friends’ boats and allowing them to use them without restriction (Lough), testing the

invention in cars driven by others (MIT), or shipping viable medication samples to a

researcher without limitations (Pronova)—the court concludes that the pre-patenting use was

public. In each case, the court seems to suggest that the inventors freely and voluntarily

ceded their inventions to others and took a come-what-may attitude toward securing their

innovations. As such, they chose to assume the risk the invention could get out. None of

these cases involved inventors who signed confidentiality agreements. What distinguished

them from the inventors in Bernhardt and American Seating, for example, was the social

network: individual entrepreneurs versus corporate inventors. In most cases, the latter do

not assume the risk of further dissemination even without forcing their collaboration

partners to sign confidentiality agreements because well-defined industry norms offer judges

a convenient adjudicatory hook on which to hang their hats. But that convenience amounts

to a willful blindness to the norms of confidentiality in the more informal networks of

friends and colleagues.

Even though the assumption of risk doctrine strikes a familiar tone—it is part of a

long tradition of respect for individual rights—it is a problematic way of determining the
difference between “public” and “private.” In Chapter 2, I argued that it weakens personal

privacy because it ignores the impact of technology, assumes free choice exists where

disclosures are compulsory, and inadequately describes our motivations behind sharing

personal information with third parties. In the patent law context, applying the doctrine in

“public use” jurisprudence has at least four negative consequences:

First, it gives courts license to ignore social norms within relationships of disclosure.

Even if executed confidentiality agreements are not always required to stave off a finding of

public use, Table 7.1.2 shows courts’ dangerous tendency to give them special privilege. In

cases like Bernhardt and American Seating, the Federal Circuit acknowledged that the

relationship between the inventor and those to whom she discloses her invention should

matter in a “public use” determination because a relationship of trust that gives rise to an

expectation of confidentiality could signify retention of control. In Bernhardt (2004, p. 1381),

the court accepted that participants in the pre-market furniture show could have a custom of

confidentiality based on their status as industry partners. And in American Seating (2008, p.

1268), the Federal Circuit agreed with the district court below that even without

confidentiality agreements, the disclosure to a business partner who helped build the

invention and the internal demonstration to the inventor’s employees were both done in

contexts of implied confidentiality.

But it is hard to see this as a rule in all “public use” jurisprudence; it only makes

sense once we cluster the inventors. If anything, the relationships between the parties in

Beachcombers (friends and colleagues), Lough (friends and colleagues), and MIT (friends) were

closer and less in need of formal agreements than the relationships in Bernhardt (participants

in the same business) and American Seating (business partners and employees) and yet all three

of the former ended in findings of public use. Plus, the court said in American Seating that
internal use and demonstration of an invention only for employees was not public use

because there was both assumed confidentiality and control; the court said the exact

opposite in Minnesota Mining, when the company distributed special forms for employees to

use. The only difference appears to be the number of employee recipients, which would be

the most arbitrary of arbitrary lines between public and private. Elsewhere, courts have gone

out of their way to disclaim any relevance of the relationship between the parties for

determining confidentiality or control (MIT, 2008, p. 313).

At best, this creates confusion in law; at worst, it ignores the fact that different social

networks have different norms of confidentiality. As I argued in Chapters 3 and 4, the best

social science evidence suggests that sharing occurs in relationships of trust and that,

therefore, privacy should be understood as a social or relational norm. As a practical matter,

that would define the line between public and private on terms different from the

assumption of risk doctrine: rather than looking at the extent to which an individual exercises

her free choice to control what others know, privacy-as-trust would look, among other

things, at the social context of the relationship between the parties to a disclosure, social

cues of trustworthiness, and a disclosure that gives rise to an expectation of future behavior.

But “public use” jurisprudence has shown itself at best schizophrenic and at worst disdainful

of social norms.

A second implication of the assumption of risk doctrine in the patent context is that

it privileges wealthier and more established corporate inventors over other innovators.

Corporate inventors have the money to pay attorneys to write confidentiality agreements, the

experience to know their importance in business, and the leverage to force employees and

business partners to sign them. Lone entrepreneurs do not. As a result, corporate inventors

have a greater likelihood of winning a holding of non-public use because they are more likely
to have confidentiality agreements in place. It may, therefore, be no coincidence that the

non-corporate inventors in Beachcombers, Lough, and MIT lost their 102(b) cases, while the

corporate inventors in Bernhardt and American Seating won. And yet in the former three cases,

the inventors argued that they were behaving according to accepted social norms: the

designer in Beachcombers only invited her friends and colleagues to her home, the boatman in

Lough installed his invention on friends’ boats, and the members of the young cohort at MIT

asked their friends to drive the cars. Norms of interpersonal trust among friends, however,

appear to have always been ignored in “public use” jurisprudence as far back as Egbert

(1881). To exclusively privilege norms of institutionalized trust cemented through

agreements is to adopt an overly formalistic approach to law regulating social interaction.

And formalism privileges those who can afford to comply with expensive formalities.

This leads to the third negative effect of the status quo: inadequate incentives for

experimentation. It is indeed beyond cavil that patent law, in general, and the public use bar,

in particular, must serve the policy goals the Federal Circuit outlined in Tone Brothers v. Sysco

(1994):

(1) discouraging the removal, from the public domain, of inventions that the public
reasonably has come to believe are freely available; (2) favoring the prompt and
widespread disclosure of inventions; (3) allowing the inventor a reasonable amount
of time following sales activity to determine the potential economic value of a patent;
and (4) prohibiting the inventor from commercially exploiting the invention for a
period greater than the statutorily prescribed time (p. 1198).

It is, therefore, clear that any loosening of public use rules that would allow expansion of the

patent monopoly should give rule makers pause. But it is not clear that looking to the

relationship of the parties to determine the boundary between public and non-public uses

actually defeats or even frustrates these policy goals. Inventions disclosed to close friends or

colleagues whom we trust cannot truly be said to be “freely available” in any sense. Prompt

disclosure and patenting are still incentivized by the AIA’s first-to-file rule. And looking to

relationships of trust may advance the goals of the patent system: it would encourage more

experimentation among corporate inventors and lone entrepreneurs alike. As the Supreme

Court said in 1877, it does not “frustrate the public interest” when delays in patenting are

“occasioned by a bona fide effort to bring [the] invention to perfection, or to ascertain

whether it will answer the purpose intended.” The patent monopoly is, after all, only

temporary, “and it is the interest of the public, as well as [the inventor’s], that the invention

should be perfect and properly tested, before a patent is granted for it” (City of Elizabeth v.

American Nicholson Pavement, 1877, p. 137). A respect for relationships of trust among

inventors and their friends and colleagues would not only help realize this goal, but it would

also challenge the results in cases like Beachcombers, Lough, and MIT.

A fourth deleterious effect of the assumption of risk doctrine in patent law concerns

legitimacy: by entrenching a harsh rule for distinguishing between public and non-public

uses and by sometimes privileging confidentiality agreements over social norms, “public use”

jurisprudence has the expressive effect of legitimizing the assumption of risk doctrine in

other areas of law. Various scholars have discussed how the law has power beyond its

coercive effect (Lessig, 1995; Hellman, 2000). As Durkheim (1893/1997) argued, law both

reflects and influences social norms. Law is also expressive, note legal scholars like Cass

Sunstein (1996, p. 2022) and Danielle Citron (2009b, p. 377), among others: “it constructs

our understanding” of what is right and what is wrong, what is harmful and what is benign.

That becomes a matter of life and death or equal protection when, for example, laws

minimize harms that take place online (Franks, 2011) or that affect women or

marginalized groups (Citron, 2009a). But it is also important when harmful doctrines bleed

from one subject to another. As noted in Chapters 2, 5 and 6, the assumption of risk
doctrine is a vise on personal privacy: it cuts off privacy rights on the presumption of free

choice where no free choice exists. As Dan Solove (2004) has shown, a modern world

dominated by online social and commercial interactions demands that we disclose significant

personal information to third parties and yet, privacy law still perversely holds that we

assume the risk of those supposedly voluntary disclosures. As in the privacy context, the

assumption of risk doctrine in “public use” jurisprudence cuts off innovation and

experimentation, disadvantaging lone entrepreneurs who lack the leverage and wealth of

corporate inventors. Its continued legitimacy is, therefore, a problem for privacy and

intellectual property scholars alike.

Section 7.2: Trade Secret Law’s Respect for Social Relationships

Although jurisprudence concerning the “public use” bar in patent law tends to

ignore social relationships of trust of a certain kind, trade secret law has taken a different

path. A trade secret is confidential business information that, by virtue of its secrecy, gives its

owner an advantage in her business (Restatement (First) of Torts, 1939). The Uniform Trade

Secrets Act § 1 is both more specific and broader, encompassing methods, techniques, and

processes in addition to formulas, patterns, and devices that derive their economic value

from “not being generally known” and are subjected to “reasonable” efforts to keep them

secret. Trade secrets are remarkably common and important parts of our culture: everything

from Coca Cola’s recipe to Google’s algorithm, from the formula for Listerine to the

recipe for Krispy Kreme doughnuts, is a trade secret. Like patent law’s “public use” bar, which

addresses the problem of pre-patenting disclosure of an otherwise patentable invention,

trade secret law must deal with the problem of the minimal or limited disclosures necessary

to use a trade secret in commerce. Despite the name, trade secrets are not always so secret;

sometimes, they must be shared with others. Consider, for example, the Krispy Kreme
recipe. It is a trade secret, but many people know it: employees, subcontractors, and business

partners, just to name a few.

Therefore, although information must be secret to qualify as a trade secret, defining

what society means by “secret” is where the rubber meets the road. The Fifth Circuit

outlined the black letter law in the seminal 1986 trade secret case, Metallurgical Industries v.

Fourtek: “[T]o qualify as [a secret], the subject matter involved must, in fact, be a secret;

‘[m]atters of general knowledge in an industry cannot be appropriated’” because they are not

secrets (p. 1199). Secrecy, however, need not be absolute. Forcing a trade secret owner to

keep “totally silent” about her secret would, at a minimum, impair her ability to take

advantage of the secret in the market: she would need to tell at least some employees,

business partners, and subcontractors. The Restatement (First) of Torts (1939) made this

quite clear, noting that trade secret owners can share their secrets with others pledged to

secrecy. In a sense, this echoes Simmel’s (1906) conception of secret societies, or a cluster of

persons grouped together by virtue of the secret its members hold and their obligations to

the collective to maintain it. The Metallurgical Court went a step further, broadening the reach

of permissible disclosures to include any “limited” disclosures meant to further the secret

owner’s “economic interests” (Metallurgical, 1986, p. 1200).

As the legal scholar Sharon Sandeen (2006) has noted, this relative secrecy doctrine

is reflected in several provisions of UTSA. First, the definition of a trade secret allows for

this leeway: trade secrets are not really “secrets”; they are pieces of information “not

generally known” (UTSA, 1985, § 1). Various trade secret scholars have shown that this

phrase has meant that the information be not known “to the trade in which the putative

trade secret owner is engaged,” suggesting that knowledge per se is less important than

knowledge in a particular social network (Sandeen, 2006, p. 697). Second, if total secrecy
were required, the mandate that trade secret owners only exercise “reasonable” efforts to

keep the information secret would be woefully inadequate. Absolute secrecy would require

all efforts. Third, and most importantly for our purposes, the relationship between the trade

secret owner and the recipient of the information must be considered when determining

whether the information constituted a trade secret in the first place. This implies that

disclosures to some could extinguish trade secrecy while disclosures to others would not.

The real question, then, is how trade secret law determines when a given

disclosure is sufficiently limited as to not vitiate legal protection against further disclosure via

misappropriation? Notably, judges are adjudicating this same question in the privacy context,

and as Lior Strahilevitz (2005) has shown, courts have taken a haphazard approach. In

Chapter 5, I argued that we should determine the answer based on relationships of

interpersonal trust defined by the totality of the circumstances, paying particular attention to

observable facts like experience, strong overlapping networks, and a shared strong identity.

Without using those words, trade secret law comes close to respecting personal relationships

in a similar manner.

It does this in at least two ways. First, it eschews reliance on formal, executed

confidentiality agreements and recognizes a broad conception of relationships that give rise

to expectations of confidentiality. As the Texas Court of Appeals said in Gonzales v. Zamora

(1990, p. 265), trade secret “[p]rotection is available even in the absence of an express

agreement not to disclose materials; when a confidential relationship exists, the law will

imply an agreement not to disclose those trade secrets.” Relationships that have implied

confidentiality in trade secret cases have included the usual suspects—employer and

employee, purchaser and supplier, licensor and licensee, and partners in joint ventures—as

well as some other, less defined relationships—licensor and prospective licensee, seller and
purchaser of a business, an inventor and a prospective manufacturer of the invention

(Sandeen, 2006, p. 698-699). In Phillips v. Frey (1994), for example, the Fifth Circuit held that a

trade secret disclosed in the context of a negotiation for sale of a business gave rise to an

expectation of confidentiality. The prospective purchaser could not go ahead and use the

information on his own because, even though there was neither a formal confidentiality

agreement nor an express request of confidentiality, the nature of the relationship gave rise

to an implied duty: the “parties mutually came to the negotiating table” and the “disclosure

was made within the course” of those negotiations (p. 631-632).

There is also an experimentation and testing exception in trade secret law: any

knowledge of a trade secret gleaned through testing of it does not extinguish protection

against subsequent misappropriation (T-N-T Motorsports v. Hennessey Motorsports, 1998).

Respect for the social context of disclosure is at the heart of allowing more leeway for

experimentation: we disclose information for the purposes of experimentation, whether to

make a process more effective or fine-tune a formula, because the expectation of

confidentiality implied in that relationship gives us the confidence and security to disclose.

The second way trade secret law respects relationships in disclosure contexts is by

distinguishing between social networks. The disclosures in Syncsort v. Innovative Routines

International (2011), a case applying New Jersey law, illustrate this point quite well. Syncsort

accused Innovative Routines International (IRI) of misappropriating its unique UNIX

language. IRI responded that misappropriation was impossible because the code was already

public: licensees had published portions of the code online, and the Syncsort manual, which

described the code in detail, was published online in its entirety in Korea and Japan. With

respect to the latter, the court concluded that the code language would never become

“generally known to the relevant people” (p. *14). The court appears to be suggesting that a
manual published in Asia, in Japanese and Korean, would be unlikely to find its way into the

hands of Syncsort’s competitors, all of whom are American and, the evidence suggested,

could not understand any Asian language. Therefore, even full publication of a trade secret

did not extinguish protection against subsequent misappropriation of that information because

disclosure in one social network unlikely to come in contact with competitors still allowed

the trade secret owner to derive market benefit from the information.58

What’s more, the rationale behind this respect for relationships of disclosure is a tip

of the hat to the trust that emerges between persons in certain contexts. Courts talk about

“mutual understanding” and “good faith” (Syncsort, 2011, p. *13) and give special weight to

testimony that evidences expectations of trust among the parties involved (Leonard v. State,

1989, p. 175). In fact, although there have been thousands of reported trade secret cases in

the various state courts over the last 30 years, more than 800 trade secret cases available on

the Westlaw database make explicit use of the word “trust” or some derivation thereof in

connection with an individual’s responsibility to keep secret information she learned as a

result of a relationship with the trade secret owner. That total does not include all the myriad

alternatives or proxies of social trust: “promise,” or similar (154); “secure,” or similar (436);

and “safe,” or similar (199). Trust, therefore, is a powerful force in justifying, explaining, and

applying trade secret’s relative secrecy doctrine.

There are great benefits to this approach over and above the avoidance of the

problems associated with the assumption of risk doctrine described above. Most notably,

58 Trade secret law acknowledges the power of social networks and social network theory in another way—
namely, by holding that even if every single piece of information is public, that information taken together can
still constitute a trade secret (EEMSO v. Compex Technologies, 2006). This notion, which resembles the
aggregation theory in privacy law, is founded on social network principles because complex, aggregated
information does not travel easily from one social network to the next (Strahilevitz, 2005).

relative secrecy recognizes the unavoidable fact that we are social sharers. As Dan Solove

(2002) has noted,

[l]ife in the modern Information Age often involves exchanging information with
third parties, such as phone companies, Internet service providers, cable companies,
merchants, and so on. Thus, clinging to the notion of privacy as total secrecy would
mean the practical extinction of privacy in today’s world (p. 1152).

Drawing the line via a biased assumption of risk doctrine in the trade secret context would

have similar deleterious effects, not the least of which would be forcing companies to take

exceedingly tight, oppressive, and impractical actions to protect their secrets and

discouraging experimentation, testing, and entrepreneurship. As to the first point, consider

the case of E.I. DuPont deNemours v. Christopher (1970). Christopher is an “industrial espionage”

case, as the very first line of the opinion announces, that began when an unknown duPont

competitor hired two photographers to fly over and take high-resolution aerial photographs

of a new duPont plant (p. 1013). DuPont sued the Christophers, arguing that they unlawfully

misappropriated the company’s trade secrets by snapping photographs of part of a chemical

making process that was exposed to aerial view. The Christophers responded by saying that

what is in public view cannot be a trade secret (p. 1014). The court sided with duPont: the

company had taken “reasonable” efforts to secure its plant from the ground, including

posting security guards and erecting ground-level barriers. The law could not, the court said,

force duPont to build a temporary roof and impenetrable walls: “Reasonable precautions

against predatory eyes we may require, but an impenetrable fortress is an unreasonable

requirement, and we are not disposed to burden industrial inventors with such a duty in

order to protect the fruits of their efforts” (p. 1017). An assumption of risk rule, however,

would take us back to roofs, solid walls, and fortresses. Nor is it clear that duPont would

have even started on its innovation project—developing a new chemical production process

and building a plant—if it knew it would have to build an impenetrable fortress around its

construction site. The breathing room provided by trade secret law’s relative secrecy

doctrine, therefore, lowers innovation costs, giving corporate inventors the space to put their

ideas into practice.

Section 7.3: Respecting Relationships in Patent Law

It should be beyond cavil that trade secret law draws the line between public and

non-public information on a different basis than patent law: the former looks to the social

relationship of the disclosure context; the latter looks to the control the inventor had over

her invention. The reason for this divergence of approach is unclear; the policy goals at the

heart of patent and trade secret law are, after all, similar: both aim to strike a balance

between the inventor and the public and both are primarily aimed at ensuring a public

benefit from innovation. Respecting relationships only advances those objectives.

Patent law’s overarching purposes—to promote scientific and technological progress

and to enhance scientific knowledge—reflect a balance of two social values: that new

technologies should benefit the general public and that information should be available to

the public so that others can use and keep improving upon it. These goals emanate from the

Progress Clause, which empowers Congress to pass laws that “promote the Progress of

Science and useful Arts, by securing for limited Times to Authors and Inventors the

exclusive Right to their respective Writings and Discoveries” (U.S. Const. art. I, § 8, cl. 8).

Two elements of the first clause are social: it focuses on the promotion of “science,” which

was synonymous with “knowledge” and “learning” in eighteenth-century vernacular

(Walterscheid, 2002, p. 125-126), and the creation of “useful” things, which presumes a

population that would use them. Even the second clause, which is commonly interpreted as

an economic or financial rationale for granting patent monopolies (Scherer and Ross, 1990),
is restricted to “limited times,” reflecting the balance between controlling knowledge and

allowing public access to it. The Supreme Court has also stated the public benefit goals of

the patent system and has routinely reminded patent applicants that patent law reflects a

bargain between the inventor and the public (Bilski v. Kappos, 2010, p. 3236). The extensive

disclosures we require from patent applicants—a detailed description of the invention,

including words and pictures, that describes, in definite terms, every piece of the device;

every previous technology that influenced the design, known as “prior art”; and a series of

claims that explicitly state what is included and what is excluded from the patent—are the

“quid pro quo of the right to exclude” others from making and marketing the patented device

(Kewanee Oil v. Bicron, 1974, p. 484). Those disclosures, furthermore, must be sufficient so

that someone “of ordinary skill in the [relevant] art” or industry could make and use the

invention. This requirement stems from the need for “meaningful disclosure” to the public

so that the next innovator can build upon previous work (Enzo Biochem v. Gen-Probe, 2002,

p. 970). The limited patent monopoly, then, is premised on the expectation that “[t]he

productive effort thereby fostered will have a positive effect on society through the

introduction of new products and processes of manufacture into the economy, and the

emanations by way of increased employment and better lives for our citizens” (Kewanee,

1974, p. 480).

The rationales behind trade secret protection are not that different.59 Under the

Restatement (First) of Torts, trade secret protection was contingent upon use: information

that would otherwise qualify, but was not being used in industry, would not receive

59There is some debate among trade secret scholars about the history and development of trade secret
protection, and the implications of that history on identifying the original rationales behind the common law
rules. Bone (1998) offers a particularly insightful summary and analysis of this debate.

protection. The rule reflects a concern, also at the heart of the patent law quid pro quo, that

any monopoly on information could close off too much knowledge that should be in the

hands of the public. The Supreme Court agreed in Kewanee Oil v. Bicron (1974), noting

explicitly that one of the purposes of trade secret law was to encourage innovations that

benefit the public (p. 481-482). And many trade secret scholars, while also noting other

policy goals like maintaining community and industry norms and protecting the presumption

of good faith and fair dealing, have acknowledged the social focus of any justification for

trade secret protection (Chiappetta, 1999).

And respecting the reality of social relationships only helps to achieve this delicate

balance and advance the social goals of these intellectual property regimes. As discussed

above, acknowledging that disclosures are relationship-dependent would encourage

experimentation and incentivize cooperation among innovators. It would balance the need

for inventor cooperation with the public’s right to information by protecting only those

pieces of information and inventions that are legitimately not yet part of the public sphere.

And it would be more responsive to the latest social science evidence on how and why

individuals share with each other.

It also gives judges a practical tool for evaluating “public use” cases. Although Lior

Strahilevitz’s (2005) social network theory for adjudicating limited disclosure cases would

indeed respect the relationship-contingent social context of pre-patenting uses, the

weaknesses discussed in Chapter 5 apply in the patent context, as well. Information is likely

to travel in different ways in different networks depending on industry, knowledge base, and

area of expertise. Inventions that might be burdened with social stigma, though no less

entitled to patents, may receive short shrift from a mainstream judge. And exposing

inventions to strangers would necessarily extinguish patent rights. These problems demand an

alternative approach that respects the relational context of disclosure on a case-by-case basis.

Using cues of trust to determine whether a pre-patenting use or disclosure was a

public use under Section 102(a) addresses the deficiencies of the current regime. It is clear

that Beachcombers (1994) and MIT (2008) would have come out differently. In Beachcombers, the

Federal Circuit found that use and demonstration at a cocktail party in the invention

designer’s home, of 20-30 friends and colleagues, and for the purposes of soliciting feedback

was a public use under the Act. As discussed above, the court made much of the lack of any

confidentiality agreement and the designer’s failure to take steps to ensure control over the

invention. That holding is strikingly blind. The patentee did take steps to create a social

context of trust and confidentiality: she only invited friends and colleagues, social networks

that retain norms of trust; she hosted the party in her home, a symbol of security; and she

made a point of telling her guests that the purpose of the party was to improve the

invention. To suggest that she had lost control of the invention is to ignore the cues of trust

from experience, expertise, and strong overlapping networks. And in MIT, student

researchers asked their friends to help test drive cars carrying their reduced-to-practice GPS

device. Focusing on the lack of a confidentiality agreement between the researchers and the

drivers, the court ignored the social bonds encourage us to share personal information with

our friends in the first place. These results not only make more sense, but support the goals

of patent law. Lough, however, is a closer call that probably would not have ended differently

under a trust analysis. Although Lough did install his device on friends’ boats, there were no

other indicia of trust of the context. The cocktail party for friends and the experimental

environment in Beachcombers and MIT, respectively, supported expectations of trust in those

cases. There were no similar supporting facts in Lough.


But changes in results are only one implication of respecting relationships of trust in

“public use” bar jurisprudence. Analytical changes may be just as important. Judicial

acknowledgment that the boundary between private and public is not crossed merely

because an inventor shared her invention with another will have an expressive effect on

inventors. Cooperation, partnership, and joint experimentation will be given tacit

endorsement by a court that has long privileged lone wolves who have the money and power

to develop innovations quickly and win races to the Patent and Trademark Office. The

words courts use matter and the language of trust may be an effective tool for realizing the

Framers’ goals of the Progress Clause.

CHAPTER EIGHT:
Conclusion and Next Steps
Can information disclosed to a limited, select few be protected against wider

dissemination? Current privacy scholarship has a difficult time answering this question with

any intellectual honesty or practical advice. And yet, the problem is pervasive: every mouse

click on an e-commerce website, every electronic banking transaction, every engagement

with an online social network, and every call dialed or text sent or email received on a

smartphone exposes some data to third parties. Untold terabytes of personal information are

in the hands of others; technologies—from the internet to sensory-enhancing investigative

tools—are making it easier for others to track, spy on, and learn about us. In this thesis, I

have sought to provide a socio-legal response to this problem based on particularized social

trust.

Section 8.1: What We Have Learned

For over a century, the right to privacy has been understood as an individual right to

keep others out. Warren and Brandeis (1890) used it as a tool against an intrusive press. The

progressive Warren Court used it as a shield against government overreach and abuse of

power by the police (Katz v. United States, 1967). And countless scholars, from Alan Westin

(1967) to Julie Cohen (2000), saw it as a way to protect personal autonomy, a fundamental

right in a pluralistic, democratic society. These are indeed important values. But, as discussed

in Chapter 1, the development of the law of privacy along these lines was more the result of

historical accident and social construction than any inherent intellectual imperative. I also

argued that these perspectives universally lacked an appreciation for how privacy operates on

the ground.

Throughout my research, I considered countless situations that are traditionally

considered invasions of privacy, some of which are considered in Chapter 3: from barging

into a bathroom and reading a diary to asking an impertinent question and making data and

records easily accessible online. The scenarios seemed wildly different. At first blush, it

appeared that Dan Solove (2002) was correct when he argued that there is no one common

denominator to explain privacy; rather, privacy invasions in different contexts raised

different, yet sometimes overlapping concerns. For Professor Solove, it made more sense to

conceive of privacy not as a single principle, but as a series of “family resemblances.” I

found this proposal insightful, yet troubling. If there was no coherent scheme to capture

privacy, it would be subject to attack and erosion by more clearly defined conflicting rights.

But I hypothesized that social interaction can tell us much about what we mean and

the values embraced by privacy. Considering the scenarios discussed in Chapter 3 from a

relational context suggested that a breach of information privacy may be synonymous with a

breach of trust. As a norm of interaction, trust is everywhere. It refers to the expectations we

develop about others’ future behavior and thereby respects relationships of disclosure. And

yet, a comprehensive analysis of privacy case law suggested that trust and respect for social

relationships have been, at most, an underappreciated element of privacy jurisprudence.

It seemed so natural to think that a legal principle based on expectations—generally,

a right to privacy exists when individuals manifest a subjective expectation of privacy that

society is willing to recognize as reasonable (Katz, 1967, p. 361)—should be based on the

social expectations of others’ behavior. Still, the nature of those social expectations, how

they develop, and their value as socio-legal tools remained obscure. A review of the social

science literature on trust suggested that particularized social trust can develop among

intimates and friends as well as strangers as long as indicia of strong overlapping networks,
an important shared identity, and expertise were also present and capable of transference. To

my knowledge, this theory and evidence had never been applied to problems of privacy

before this thesis. I designed a case study of online social sharing to test the validity of that

research and reported those results in Chapter 4. The survey found that sharing increases

when trust increases, as modeled by the sharing of personal and impersonal information on

Facebook. I also found that this correlation exists for those who have both high and low

levels of general social trust. Finally, it appeared that a statistically significant relationship

existed between a willingness to share information with a stranger, the proxy for which was

accepting a Facebook “friend” request from someone the respondent had never met, and

strong overlapping social networks. As a result of this theoretical and empirical research, this

thesis developed into an argument for a broad reorientation of the right to privacy around

social principles of trust, discretion, and respect in relationships wherever they develop.

Although the law is no stranger to social theory, this thesis endeavors to break new ground

by suggesting several identifiable indicia of trust and applying them to contexts of sharing

and privacy.

This proposal has many implications, only three of which I discuss in this thesis. I

began with tort law, or the legal regime that governs interactions between private parties. In

Chapter 5, I discussed how many third parties—from the media to our friends, from bank

websites to strangers at parties—receive information about us either through voluntary or

required disclosures. Traditional conceptualizations of privacy have proven generally unable

to protect that information against wider disclosure. Privacy-as-trust offers a different path

by creating a broad tort for breach of confidentiality. This tort, influenced by British tort law,

would extend beyond formal legal relationships and create a dome of protection for

disclosures made in contexts of particularized social trust.


The same principle of trust may also help protect our privacy vis-à-vis the

government. In Chapter 6, I discussed how limiting the reach of the Fourth Amendment to

protect only information that is either secret (Solove, 2004) or protected by traditional

property principles (Kerr, 2005) threatens to erode fundamental personal privacy protections

of the Constitution. Although the jurisprudence of the Fourth Amendment suggests that

trust is already competing for interpretive primacy, search-and-seizure law based on

privacy-as-trust will more effectively protect citizens in a world of sensory-enhancing

investigative technology and the internet. It also provides a coherent basis for burying the

privacy-defeating third-party doctrine. Finally, I used Chapter 7 to step outside the confines

of privacy law to show that privacy-as-trust can help define the boundary between public

and private in other contexts. Patent law’s “public use” bar could more effectively advance

the purpose and goals of the patent system and provide adequate protection for solo

entrepreneurs by respecting relationships of trust.

Section 8.2: Response to Objections

Although this thesis represents only the first few words on the role of particularized

social trust in privacy law, it is worth responding to several initial objections to my argument.

1. Sample Bias. This is an internal objection to the methodology of the empirical

study. The survey discussed in Chapter 4 asked members of the Facebook community to

respond to a series of questions about what they share, why they share, and with whom they

share personal information. Over a period of several months, more than 600 Facebook users

responded from a variety of demographic groups. Although the sample may have come

close to a random sample of Facebook users, that sample is biased in several ways. It skews

younger than the general United States population and, more importantly, it is composed

entirely of individuals who have voluntarily joined a web-based network the entire

purpose of which is to encourage sharing personal information. That may indicate an

inherent sharing bias among the population. Furthermore, Facebook’s business model,

which involves selling advertising space that can be tailored to specific clusters of users, is

based on encouraging its users to share personal information so Facebook can learn how

best to reach its clients’ potential audiences. This suggests that platform architecture might

also bias respondents toward sharing.

That the sample may be biased would be problematic if I sought to make broad

conclusions from the data about all populations. Instead, I offer an analysis of sharing on

Facebook for the limited purpose of suggesting that trust is important for how this

population shares personal information. It is the beginning of a longer study; it reflects

ongoing research in the field and is a springboard for privacy scholars to start thinking about

privacy from a relational perspective. Furthermore, as discussed in Chapter 3, there is an

emerging (but incomplete) social science literature on particularized social trust that demands

further discussion and needs to be applied to the online social space. When many of the

studies on general and particularized trust were first conducted, the internet, not to mention

Facebook, did not yet exist. Although this objection properly cautions against attempting to

prove too much with one study, it does not prove fatal to this thesis.

2. First Amendment objection. One could argue that privacy-as-trust would run afoul of

the First Amendment’s guarantee of free speech. This argument is external, coming as it

does from competing values that may challenge privacy-as-trust, and suggests that too broad

a conception of relational privacy and a broad tort of breach of confidentiality would impede

the media’s right to disseminate information and penalize too much constitutionally

protected speech.

Although it is beyond the scope of this thesis to discuss the full breadth of First

Amendment protection, it is beyond cavil that the First Amendment’s protection of free

speech is not absolute. Several categories of speech—obscenity, defamation, fraud,

incitement, and speech integral to criminal conduct, for example—are not protected (United

States v. Stevens, 2010). What’s more, matters of private concern get significantly reduced First

Amendment protection, as do tortious speech and conduct (Dun & Bradstreet v. Greenmoss

Builders, 1985). Privacy law has long sought to comply with the sometimes-competing First

Amendment by including a newsworthiness exception to several of Prosser’s (1960) privacy

torts. This affirmative defense to tort liability holds that dissemination of information that is

of great interest to the public cannot be the basis for privacy tort liability. A tort for breach

of confidentiality would not disturb the newsworthiness exception and would fit well within the

First Amendment’s other exceptions.

Furthermore, laws restricting disclosure of truly private information have a positive

effect on speech freedoms. As Danielle Citron (2009a) has shown, unrestricted speech

online tends to silence minority voices as aggressors use technological tools to crowd out

dissident and different voices. Non-disclosure laws serve this and other important “privacy

and speech-related objectives,” Justice Breyer noted in his concurrence in Bartnicki v. Vopper

(2001), suggesting that protecting speech means much more than just taking a laissez-faire

approach.

3. Impracticality. Chapter 5 recommended reviving a tort that has been

moribund in American law for more than 150 years. Chapter 6 proposed re-orienting

decades of Fourth Amendment law. And Chapter 7’s proposal would change more than a

century of precedent from the Federal Circuit Court of Appeals. Such radical change is

impractical, the argument goes. If a sample bias concern is an internal objection and a First
Amendment conflict is an external doctrinal objection, a third challenge to privacy-as-trust

may be categorized as an external practical objection.

There are at least two responses to this objection. First, my proposals are not as

radical as they seem. The tort of breach of confidentiality existed in this country long before

Warren and Brandeis (1890) wrote The Right to Privacy in the Harvard Law Review. As

Richards and Solove (2007) have shown, there was quite an active jurisprudence of

confidentiality both here and in England in the decades before 1890. And, as I argued in

Chapter 1, that American privacy law developed the way it did is more of an accident of

history than any predetermined necessity. Furthermore, I showed in Chapter 6 that privacy-

as-trust is already being reflected in some Fourth Amendment cases like United States v.

Maynard (2010) and Riley v. California (2014). If trust were to achieve closure as a

governing interpretation of the Fourth Amendment, it would indeed result in a major shift in

search-and-seizure law—the third-party doctrine, among other tools of police overreach,

would be tossed to the ash bin of history—but it would be in line with long-standing Fourth

Amendment jurisprudence dating back to Katz v. United States (1967). Similarly, the Federal

Circuit has already shown itself willing to respect some personal relationships when

considering “public use” bar cases. Extending the doctrine of privacy-as-trust to the patent

context would less change the law than bring coherence to a currently haphazard corner of

patent jurisprudence.

Second, any supposed radical change proposed in this thesis is warranted in the

name of protecting fundamental due process rights. The assumption of risk doctrine is doing

violence to our personal privacy in a world of increasingly invasive technologies; current law

is not up to the task of responding. The third-party doctrine threatens to erode the Fourth

Amendment to almost nothing; as more data is in the hands of third parties, fewer warrants
will be needed and fewer protections for personal privacy vis-à-vis the government will be

available. That at least one federal court has blessed the National Security Agency’s

telephony metadata spying program as permissible under the third-party doctrine is just one

example of this ongoing erosion of rights (ACLU v. Clapper, 2014). Although I resist

categorizing any of my proposals as radical, the dangers posed to our privacy by new

technologies certainly require a proportional response.

Section 8.3: Steps for Future Research

I have already proposed several avenues for future research. Empirical studies on

particularized social trust must be expanded to include larger populations of online and

offline sharers. Those studies must be compared to existing research on particularized and

general social trust so broader conclusions about how people share can inform more tailored

privacy law proposals. Larger data sets will also be helpful in understanding sharing behavior.

We need to know more about when we trust strangers, and surveys and experiments can be

designed to approximate that information regardless of reporting biases. To fill the gaps left

by the narrow focus of the survey analyzed in this thesis, additional surveys or, preferably,

experimental interfaces can be developed to test the impact of trust on sharing as compared

to other factors, including but not limited to coercion—namely, the need to share to

participate in modern social and professional life—and identity building—namely, sharing to

create a public narrative of the self.

As a matter of policy, I plan to build on privacy-as-trust and research doctrinal and

policy implications in other pressing questions of privacy law. Recall Chapter 5’s discussion

of limited disclosures. Y.G., Kubach, Duran, Nader, and many other cases discussed

in this dissertation predate Facebook, YouTube, Reddit, 4Chan, and even Google. The

problem of privacy law highlighted by those cases, however, is no stranger to the


cybersphere. In fact, when disclosures originally made in small, trusted contexts are

subsequently disseminated over the internet, the effects are arguably worse than if they were

published in a newspaper or broadcast on television: they are amplified, permanent, and

unavoidable (Franks, 2011; Boyd, 2014). They are amplified because they can reach a wider

audience faster and at little to no cost. They are permanent because it is nearly impossible to

scrub the web. And, they are unavoidable because online distribution of information can

happen anywhere and it can get linked, cross-linked, hyper-linked, and collected into a search

report; you cannot avoid Google like you avoid a traffic jam.

As I argued in Chapter 5, widely disseminating content meant for a trusted few is an

invasion of privacy because the dissemination violates expectations of particular social trust.

Because the violation antedates any effects of dissemination on the individual and is blind to

the particular content involved, the remedy must do the same. That is why the tort for

breach of confidence both flows directly from the principles of privacy-as-trust and

represents an effective weapon against the unauthorized dissemination of previously

disclosed content. But not all content is fungible. Some content—harassment, bullying, and

so-called “revenge porn,” for example—is particularly harmful. These dark phenomena pose

myriad legal problems and questions, many of which have been addressed in the legal and

social science literature (Citron, 2009a; Waldman, 2012; Waldman, 2013; Franks, 2012). In

forthcoming projects, I would like to apply the lessons of privacy-as-trust to the problems of

cyberbullying of LGBT youth, “revenge porn,” and tortious and aggressive behavior by

anonymous online actors. “Revenge porn,” or nonconsensual use of intimate images, is

particularly open to a privacy-as-trust analysis because it most often involves the

dissemination of intimate images that were freely given or shared to a limited audience,

usually within the trusting context of a sexual or long-term relationship. It also raises the
privacy problem of online anonymity, which is perhaps the most important and vexing issue

plaguing privacy scholars today.

Through this thesis, I have sought to raise several calls to action. I ask privacy

scholars to consider the relational aspect of privacy invasions and the sharing of personal

information. I ask courts to remain actively involved in the interpretation of privacy tort law

and the Fourth Amendment. I ask social scientists to look at the relationship between

particularized social trust and sharing on online social networks. And I ask lawyers and

policymakers to consider new tools to address privacy problems in a networked world. This

dissertation represents only the beginning of that work.

TABLES AND FIGURES

Table 4.4.1:
Comparison of Sample to Facebook Population, Generally

Sample Age Quintile   % of Sample   Facebook Age Quintile   % of Facebook Population
< 18                  4             < 18                    5
18-25                 19            18-24                   23
26-35                 36            25-34                   25
36-55                 30            35-54                   31
> 55                  12            ≥ 55                    16

Figure 4.5.1
Relationship Between Trust in Facebook and Sharing, Generally

[Scatterplot omitted: x-axis = Level of Trust in Facebook (0-12); y-axis = Number of Items Shared on Facebook (0-25).]

Figure 4.5.2
Relationship Between Trust and Sharing Intimate Information

[Scatterplot omitted: x-axis = Level of Trust in Facebook (0-12); y-axis = Number of Intimate Items Shared (0-10).]

Table 4.5.3:
Demographic Correlations with Sharing on Facebook

                                              Total Sharing   Total Intimate Sharing
Networked Level     Pearson Correlation       .228**          .068
                    Significance (2-tailed)   .000            .181
                    n                         386             386
Trust in Facebook   Pearson Correlation       .722**          .577**
                    Significance (2-tailed)   .000            .000
                    n                         386             386

** Correlation is significant at the 0.01 level (2-tailed)

Table 4.5.4
Multiple Regression: Total Sharing on Facebook

Model Summary

Model   R      R Square   Adj. R Square   Std. Error of Estimate
1       .738   .545       .538            2.7234

ANOVA

Model 1      Sum of Squares   df    Mean Square   F        Sig.
Regression   3366.423         6     561.071       75.649   .000
Residual     2810.947         379   7.417
Total        6177.370         385

Coefficients

Model 1              Unstandardized B   Std. Error   Standardized Beta   t        Sig.
(Constant)           2.123              .789                             2.693    .007
Gender               .270               .309         .033                .874     .383
Age                  .005               .111         .002                .043     .966
Sexual Orientation   -.253              .333         -.028               -.758    .449
Education Level      .073               .255         .011                .284     .777
Networked Level      .425               .102         .149                4.184    .000
Trust in Facebook    1.228              .061         .708                20.126   .000
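Read as a fitted ordinary least squares model, the coefficients reported above imply the following prediction equation (a sketch only; variable codings follow the survey measures described in Chapter 4):

```latex
\begin{aligned}
\widehat{\text{Total Sharing}} ={}& 2.123 + 0.270\,(\text{Gender}) + 0.005\,(\text{Age}) - 0.253\,(\text{Sexual Orientation}) \\
&+ 0.073\,(\text{Education Level}) + 0.425\,(\text{Networked Level}) + 1.228\,(\text{Trust in Facebook})
\end{aligned}
```

On this reading, a one-point increase on the trust-in-Facebook scale is associated with sharing roughly 1.2 additional items, holding the demographic and network variables constant.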

Table 4.5.5
Multiple Regression: Total Intimate Sharing on Facebook

Model Summary

Model   R      R Square   Adj. R Square   Std. Error of Estimate
1       .588   .346       .336            1.5155

ANOVA

Model 1      Sum of Squares   df    Mean Square   F        Sig.
Regression   460.813          6     76.802        33.438   .000
Residual     870.513          379   2.297
Total        1331.326         385

Coefficients

Model 1              Unstandardized B   Std. Error   Standardized Beta   t        Sig.
(Constant)           .056               .439                             .129     .898
Gender               .088               .172         .023                .510     .611
Age                  .162               .062         .117                2.614    .009
Sexual Orientation   .069               .186         .017                .374     .709
Education Level      -.228              .142         -.072               -1.604   .110
Networked Level      .018               .057         .014                .327     .744
Trust in Facebook    .467               .034         .580                13.748   .000

Table 4.5.6:
Predicting Importance of Sharing Same Sexual Orientation for Willingness to Accept
Friend Requests from Strangers

Model Fitting Information

Model            -2 Log Likelihood   Chi-Square   df   Sig.
Intercept Only   208.029
Final            101.657             106.372      9    .000

Goodness-of-Fit

           Chi-Square   df   Sig.
Pearson    48.602       40   .165
Deviance   40.782       40   .436

Pseudo R-Square

Cox and Snell   .254
Nagelkerke      .384
McFadden        .271

Parameter Estimates

                                                   Estimate   Std. Error   Wald     Sig.
Threshold   Same sexual orientation as stranger    .133       .593         .050     .823
Location    Age = 1.0                              1.643      1.123        2.143    .143
            Age = 2.0                              .867       .605         2.053    .152
            Age = 3.0                              .138       .532         .067     .796
            Age = 4.0                              .061       .603         .010     .919
            Age = 5.0                              -.738      .642         1.323    .250
            Age = 6.0                              0
            Gender = 0                             1.079      .322         11.206   .011
            Gender = 1.0                           0
            Education Level = 1.0                  -1.461     .924         2.496    .114
            Education Level = 2.0                  -.621      .331         3.517    .061
            Education Level = 3.0                  0
            Sexual Orientation Demographic = 0     -2.356     .317         55.133   .000
            Sexual Orientation Demographic = 1.0   0

Table 7.1.1
The Relationship Between Inventor Control and “Public Use”

Case                                                                   Control?   Public Use?
Moleculon Research Corp. v. CBS, Inc. (1986)                           Yes        No
Baxter International, Inc. v. Cobe Laboratories, Inc. (1996)           No         Yes
Beachcombers Int’l v. WildeWood Creative Products (1994)               No         Yes
American Seating Co. v. USSC Group, Inc. (2008)                        Yes        No
Dey, L.P. v. Sunovion Pharmaceuticals, Inc. (2013)                     Yes        No
Lough v. Brunswick Corp. (1996)                                        No         Yes
MIT v. Harman International Industries, Inc. (2008)                    No         Yes
Minnesota Mining & Manufacturing Co. v. Appleton Papers, Inc. (1999)   No         Yes
Netscape Communications Corp. v. Konrad (2002)                         No         Yes
Bernhardt LLC v. Collezione Europa USA, Inc. (2004)                    Yes        No
Pronova Biopharma Norge v. Teva Pharmaceuticals (2013)                 No         Yes

Table 7.1.2
The Impact of Confidentiality Agreements on Findings of “Public Use”

Case                                                                   Confidentiality Agreement?   Control?   Public Use?
Moleculon Research Corp. v. CBS, Inc. (1986)                           No                           Yes        No
Baxter International, Inc. v. Cobe Laboratories, Inc. (1996)           No                           No         Yes
Beachcombers Int’l v. WildeWood Creative Products (1994)               No                           No         Yes
American Seating Co. v. USSC Group, Inc. (2008)                        No                           Yes        No
Dey, L.P. v. Sunovion Pharmaceuticals, Inc. (2013)                     Yes                          Yes        No
Lough v. Brunswick Corp. (1996)                                        No                           No         Yes
MIT v. Harman International Industries, Inc. (2008)                    No                           No         Yes
Minnesota Mining & Manufacturing Co. v. Appleton Papers, Inc. (1999)   No                           No         Yes
Netscape Communications Corp. v. Konrad (2002)                         No                           No         Yes
Bernhardt LLC v. Collezione Europa USA, Inc. (2004)                    No                           Yes        No
Pronova Biopharma Norge v. Teva Pharmaceuticals (2013)                 No                           No         Yes

REFERENCES
Primary Sources

United States Constitution

U.S. Const. amend. III.

U.S. Const. amend. IV.

U.S. Const. amend. V.

Statutes and Uniform Codes

Electronic Communications Privacy Act, 18 U.S.C. §§ 2510-2522, 2701-2709 (1986).

Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq. (1970).

Family Educational Rights and Privacy Act, 20 U.S.C. § 1232(g) (1974).

Federal Communications Act, 47 U.S.C. § 151 et seq. (1934).

Health Insurance Portability and Accountability Act, 42 U.S.C. §§ 1320d-1320d-8 (1996).

N.Y. Civil Rights Act § 51 (2012).

Privacy Act, 5 U.S.C. § 552a (1974).

Restatement (Second) of Contracts, § 205 (1981).

Right to Financial Privacy Act, 12 U.S.C. §§ 3401-3422 (1978).

Video Privacy Protection Act, 18 U.S.C. §§ 2710-2711 (1988).

Case Law

Abernethy v. Hutchinson, [Ch. 1825] 26 Eng. Rep. 1313.

American Seating Co. v. USSC Group, Inc., 514 F.3d 1262 (Fed. Cir. 2008).

Baxter International, Inc. v. Cobe Laboratories, Inc., 88 F.3d 1054 (Fed. Cir. 1996).

Beachcombers Int’l v. WildeWood Creative Products, Inc., 31 F.3d 1154 (Fed. Cir. 1994).

Berger v. New York, 388 U.S. 41 (1967).

Bernhardt LLC v. Collezione Europa USA, Inc., 386 F.3d 1371 (Fed. Cir. 2004).
Bilski v. Kappos, 130 S. Ct. 3218 (2010).

Boyd v. United States, 116 U.S. 616 (1886).

California v. Ciraolo, 476 U.S. 207 (1986).

California v. Greenwood, 486 U.S. 35 (1988).

CIA v. Sims, 471 U.S. 159 (1985).

City of Elizabeth v. Am. Nicholson Pavement Co., 97 U.S. 126, 137 (1877).

City of Ontario v. Quon, 130 S. Ct. 2619 (2010).

Coco v. Clark, [1969] R.P.C. 41.

Dey, L.P. v. Sunovion Pharm., Inc., 715 F.3d 1351 (Fed. Cir. 2013).

Doe v. Southeastern Pennsylvania Transportation Authority, 72 F.3d 1133 (3rd Cir. 1995).

Dow Chemical Co. v. United States, 476 U.S. 227 (1986).

Dow Chemical Co. v. United States, 536 F. Supp. 1355 (E.D. Mich. 1985).

Duke of Queensberry v. Shebbeare, [Ch. 1758] 28 Eng. Rep. 924.

Duran v. Detroit News, Inc., 504 N.W.2d 715 (1993).

Dwyer v. American Express, 652 N.E.2d 1351 (Ill. App. 1995).

E.I. DuPont deNemours v. Christopher, 431 F.2d 1012 (5th Cir. 1970).

EEMSO, Inc. v. Compex Tech., Inc., No. 3:05-CV-0897, 2006 WL 2583174 (N.D. Tex.
Aug. 31, 2006).

Egbert v. Lippmann, 104 U.S. 333 (1881).

Enzo Biochem, Inc. v. Gen-Probe Inc., 323 F.3d 956 (Fed. Cir. 2002).

Ex Parte Jackson, 96 U.S. 727 (1877).

Florida v. Riley, 488 U.S. 445 (1989).

Food Lion, Inc. v. ABC, 194 F.3d 505 (4th Cir. 2001).

Galella v. Onassis, 353 F. Supp. 196 (S.D.N.Y. 1972).

Gonzales v. Zamora, 791 S.W.2d 258, 265 (Tex. Ct. App. 1990).

Griswold v. Connecticut, 381 U.S. 479 (1965).

Hamberger v. Eastman, 106 N.H. 107 (1964).

In re Pharmatrak, Inc. Privacy Litigation, 220 F. Supp. 2d 4 (D. Mass. 2002).

In re U.S. for An Order Authorizing the Release of Historical Cell-Site Data, 809 F. Supp.
2d 113 (E.D.N.Y. 2011).

In re United States for Historical Cell Site Data, 724 F.3d 600 (5th Cir. 2013).

Katz v. United States, 389 U.S. 347 (1967).

Kewanee Oil Co. v. Bicron Corp., 416 U.S. 470 (1974).

Kyllo v. United States, 533 U.S. 27 (2001).

Lawrence v. Texas, 539 U.S. 558 (2003).

Leonard v. State, 767 S.W.2d 171 (Tex. Ct. App. 1989).

Lough v. Brunswick Corp., 86 F.3d 1113 (Fed. Cir. 1996).

McIntyre v. Ohio Election Commission, 514 U.S. 334 (1995).

Metallurgical Industries v. Fourtek, 790 F.2d 1195 (5th Cir. 1986).

Minnesota Mining & Manufacturing Co. v. Appleton Papers, Inc., 35 F. Supp. 2d (D. Minn.
1999).

MIT v. Harman Int’l Industries, Inc., 584 F. Supp. 2d (D. Mass 2008).

Moleculon Research Corp. v. CBS, 793 F.2d 1261 (Fed. Cir. 1986).

Multimedia WMAZ, Inc. v. Kubach, 443 S.E.2d 491 (1994).

Murphy v. Steeplechase Amusements, 166 N.E. 173 (N.Y. 1929).

Nader v. General Motors, 255 N.E.2d 765 (1970).

Nardone v. United States, 302 U.S. 379 (1937).

Nardone v. United States, 308 U.S. 338 (1939).

Netscape Communications Corp. v. Konrad, 295 F.3d 1315 (Fed. Cir. 2002).
New State Ice v. Liebmann, 285 U.S. 262 (1932).

New York v. Weaver, 909 N.E.2d 1195 (N.Y. 2009).

Olmstead v. United States, 277 U.S. 438 (1928).

Pavesich v. New England Life Insurance Company, 50 S.E. 68 (Ga. 1905).

Phillips v. Frey, 20 F.3d 623 (5th Cir. 1994).

Planned Parenthood v. Casey, 505 U.S. 833 (1992).

Pollard v. Photographic Co., [1888] 40 Ch. D. 345.

Prince Albert v. Strange, [1848] 41 Eng. Rep. 1171.

Pronova Biopharma Norge AS v. Teva Pharmaceuticals USA, Inc., 549 Fed. Appx. 934 (Fed.
Cir. 2013).

Riley v. California, 134 S. Ct. 2473, 2489-91 (2014).

Roberson v. Rochester Folding Box Co., 64 N.E. 442 (N.Y. 1902).

Roe v. Wade, 410 U.S. 113 (1973).

Sanders v. ABC, Inc. 978 P.2d 67 (1999).

Shibley v. Time, 341 N.E.2d 337 (Ohio Ct. App. 1975).

Smith v. Maryland, 442 U.S. 735 (1979).

Stephens v. Avery, [1988] I Ch. 449.

Syncsort v. Innovative Routines Int’l, Civ. No. 04-3623, 2011 WL 3651331 (D. N.J. Aug. 18,
2011).

Talley v. California, 362 U.S. 60 (1960).

T-N-T Motorsports, Inc. v. Hennessey Motorsports, Inc., 965 S.W.2d 18, 22 (Tex. Ct. App.
1998).

Tone Brothers v. Sysco, 28 F.3d 1192 (Fed. Cir. 1994).

Union Pacific Railway Co. v. Botsford, 141 U.S. 250 (1891).


United States v. Garcia, 474 F.3d 994 (7th Cir. 2007).

United States v. Hambrick, 55 F. Supp. 2d 504 (W.D. Va. 1999).

United States v. Karo, 469 U.S. 705 (1984).

United States v. Kennedy, 81 F. Supp. 2d 1103 (D. Kan. 2000).

United States v. Knotts, 460 U.S. 276 (1983).

United States v. Marquez, 605 F.3d 604 (8th Cir. 2010).

United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010).

United States v. Miller, 425 U.S. 435 (1976).

United States v. Nerber, 222 F.3d 597 (9th Cir. 2000).

United States v. Pineda-Moreno, 591 F.3d 1212 (9th Cir. 2010).

United States v. Place, 462 U.S. 696 (1983).

United States v. Warshak, 631 F.3d 266 (6th Cir. 2010).

Upjohn Co. v. United States, 449 U.S. 383, 389 (1981).

Williams v. White, 4 F. Supp. 102 (D. Mass. 1870).

Wolfle v. United States, 291 U.S. 7 (1934).

Wong Sun v. United States, 371 U.S. 471 (1963).

Y.G. v Jewish Hospital, 795 S.W.2d 488 (1990).

Yovatt v. Winyard, [Ch. 1820] 37 Eng. Rep. 425.

Secondary Sources

Abbott, A. (1995). Things of Boundaries. Social Research, 62(4), 862-882.

Acquisti, A., & Grossklags, J. (2005). Privacy and Rationality in Individual Decision Making.
IEEE Security and Privacy Magazine, 24-30.

Adams, S. (2011, June 7). Networking Is Still The Best Way To Find A Job, Survey Says.
Retrieved February 10, 2015, from
http://www.forbes.com/sites/susanadams/2011/06/07/networking-is-still-the-
best-way-to-find-a-job-survey-says/.
Alfino, M., & Mayes, R. (2003). Reconstructing the Right to Privacy. Social Theory and Practice,
29, 1-18.

Allen, A. (1988). Uneasy Access: Privacy for Women in a Free Society. Totowa, NJ: Rowman &
Littlefield.

Allen, A. (2001). Is Privacy Now Possible: A Brief History of an Obsession. Social Research,
68(1), 301-306.

Amar, A. R. (1994). Fourth Amendment First Principles. Harvard Law Review, 107, 757-818.

American Bar Association Canons of Ethics. (n.d.). Retrieved February 11, 2015, from
http://www.americanbar.org/content/dam/aba/migrated/cpr/mrpc/Canons_Ethic
s.authcheckdam.pdf.

Anderson, N. (2013, August 28). What Does “Natural Male Enhancement” Have to Do
With Email Privacy? A Lot. Retrieved February 9, 2015, from
http://www.slate.com/articles/technology/future_tense/2013/08/enzyte_steven_w
arshak_the_surprising_case_that_helped_improve_email_privacy.html.

Anheier, H., & Kendall, J. (2002). Interpersonal Trust and Voluntary Associations:
Examining Three Approaches. British Journal of Sociology, 53, 343-362.

Ardia, D. S. (2010). Free Speech Savior or Shield for Scoundrels: An Empirical Study of
Intermediary Immunity Under Section 230 of the Communications Decency Act.
Loyola Los Angeles Law Review, 43, 373-505.

Arendt, H. (1958). The Human Condition. Chicago, IL: University of Chicago Press.

Baker, L. (1984). Brandeis and Frankfurter: A Dual Biography. New York, NY: Harper & Row.

Ball, D. (1975). Privacy, Publicity, Deviance and Control. The Pacific Sociological Review, 18(3),
259-278.

Barak, A. (2005). Sexual Harassment on the Internet. Social Science Computer Review, 23, 77-92.

Barbaro, M. & Zeller, Jr., T. (2006, August 9). A Face is Exposed for AOL Searcher No.
4417749. New York Times.

Barlow, J. (1996, February 8). A Declaration of the Independence of Cyberspace. Retrieved
December 16, 2014.

Barron, J. (1979). Warren and Brandeis: The Right to Privacy, 4 Harv. L. Rev. 193 (1890):
Demystifying a Landmark Citation. Suffolk University Law Review, 13(4), 875-922.

Bates, A. (1964). Privacy—A Useful Concept? Social Forces, 42(4), 429-434.

Bazelon, E. (2010, April 30). Bullies Beware. Slate.

Bazelon, E. (2013). Sticks and Stones: Defeating the Culture of Bullying and Rediscovering the Power of
Character and Empathy. New York: Random House.

Beatson, J. & Friedman, D. (1997). Good Faith and Fault in Contract Law. New York: Oxford
University Press.

Benn, S. (1971). Privacy, Freedom, and Respect for Persons. In Pennock, J. R. and
Chapman, J. W. (Eds.), Nomos XIII: Privacy. New York: Atherton Press.

Bezanson, R. P. (1992). The Right To Privacy Revisited: Privacy, News, and Social Change,
1890-1990. California Law Review, 80, 1133-1175.

Bijker, W. (1995). Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change.
Cambridge, MA.: MIT Press.

Bijker, W., Hughes, T., & Pinch, T. (1987). The Social Construction of Technological Systems: New
Directions in the History and Sociology of Technology. Cambridge, MA.: MIT Press.

Blau, P. (1964). Exchange and Power in Social Life. New York: J. Wiley.

Bloustein, E. (1964). Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser.
New York University Law Review, 39, 962-1008.

Bok, S. (1983). Secrets: On the Ethics of Concealment and Revelation. New York, NY: Pantheon
Books.

Bone, R. G. (1998). A New Look at Trade Secret Law: Doctrine in Search of Justification.
California Law Review, 86, 241-313.

Booth, R. (2014, June 29). Facebook Reveals News Feed Experiment to Control Emotions.
Guardian.

Bourdieu, P. (1987). The Force of Law: Toward a Sociology of the Juridical Field. Hastings
Law Journal, 38, 805-853.

Boyd, D. (2006, December 4). Friendsters, and Top 8: Writing Community into Being on
Social Network Sites. Retrieved February 7, 2015, from
http://firstmonday.org/article/view/1418/1336.

Boyd, D. (2014). It’s Complicated: The Social Lives of Networked Teens. New Haven, CT: Yale
University Press.

Bradach, J., & Eccles, R. (1989). Price, Authority, and Trust: From Ideal Types to Plural
Forms. Annual Review of Sociology, 15, 97-118.

Branscomb, A. W. (1995). Anonymity, Autonomy and Accountability: Challenges to the
First Amendment in Cyberspaces. Yale Law Journal, 104, 1639-1679.

Brenner, S. W. & Clarke, L. L. (2006). Fourth Amendment Protection for Shared Privacy in
Store Transactional Data. Journal of Law and Policy, 14, 211-280.

Burns, K. (Director). (2011). Prohibition [Documentary]. United States: PBS.

Cate, F. (1997). Privacy in the Information Age. Washington, D.C.: Brookings Institution Press.

Chambliss, W. (1979). On Lawmaking. British Journal of Law and Society, 6, 149-171.

Chang, M. (2003). Compelling Interest: Examining the Evidence on Racial Dynamics in Higher
Education (D. Witt, J. Jones, & K. Hakuta, Eds.). Palo Alto, CA: Stanford Education
Press.

Chiappetta, V. (1999). Myth, Chameleon, or Intellectual Property Olympian? A Normative
Framework Supporting Trade Secret Law. George Mason Law Review, 8, 69-165.

Citron, D. K. (2009a). Cyber Civil Rights. Boston University Law Review, 89, 61-125.

Citron, D. K. (2009b). Law’s Expressive Value in Combating Cyber Gender Harassment.
Michigan Law Review, 108, 373-415.

Clancy, T. (2008). The Fourth Amendment: Its History and Interpretation. Durham, NC:
Carolina Academic Press.

CNN/ORC Poll. (2013, November 18). Retrieved February 11, 2015, from
http://i2.cdn.turner.com/cnn/2013/images/12/04/cnn.poll.gun.control.pdf.

Cohen, J. (2001). The Necessity of Privacy. Social Research, 68, 318-327.

Cohen, J. E. (2000). Examined Lives: Informational Privacy and the Subject as Object.
Stanford Law Review, 52, 1373-1438.

Cohen, J. E. (2003). DRM and Privacy. Berkeley Technology Law Journal, 18, 575-617.

Cohen, J. E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice.
New Haven, CT: Yale University Press.

Colb, S. F. (2004). A World Without Privacy: Why Property Does Not Define the Limits of
the Right Against Unreasonable Searches and Seizures. Michigan Law Review, 102,
889-903.

Coleman, J. (1990). Foundations of Social Theory. Cambridge, MA: Belknap Press of Harvard
University Press.

Collier, A. (2013, May 13). The Girls Are All Right: Girls Not as Vulnerable to Sexting as
Media Says. Retrieved February 10, 2015, from http://www.csmonitor.com/The-
Culture/Family/Modern-Parenthood/2013/0513/The-girls-are-all-right-Girls-not-
as-vulnerable-to-sexting-as-media-says.

Collins, R. (1982). Sociological Insight: An Introduction to Non-Obvious Sociology. New York:
Oxford University Press.

Collins, R., & Skover, D. (2006). Curious Concurrence: Justice Brandeis's Vote in Whitney v.
California. Supreme Court Review, 333.

Cotterrell, R. (2005). The Sociology of Law: An Introduction. New York, NY: Oxford
University Press.

Currie, D. (1990). The Constitution in the Supreme Court. Chicago: University of Chicago Press.

Dash, S. (1971). The Eavesdroppers. New York: Da Capo Press.

De Montesquieu, B. (1900). The Spirit of the Laws (Vol. XIX) (T. Nugent, Trans., 2010).
Digireads.com.

Deflem, M. (2008). Sociology of Law: Visions of a Scholarly Tradition. Cambridge: Cambridge
University Press.

Diekema, D. (1992). Aloneness and Social Form. Symbolic Interaction, 15(4), 389-536.

Diffie, W. & Landau, S. (1998). Privacy on the Line. Cambridge, MA: MIT Press.

Doney, P., Cannon, J., & Mullen, M. (1998). Understanding the Influence of National
Culture on the Development of Trust. The Academy of Management Review, 23(3), 601-
620.

Douglas, M. (1966). Purity and Danger: An Analysis of Concepts of Pollution and Taboo. London:
Routledge & Kegan Paul.

Douglas, S. (1987). Inventing American Broadcasting, 1899-1922. Baltimore, MD: Johns Hopkins
University Press.

Duggan, M. (2013, September 12). It’s a Woman’s (Social Media) World. Retrieved February
1, 2015, from http://www.pewresearch.org/fact-tank/2013/09/12/its-a-womans-
social-media-world/.

Duggan, M., Ellison, N., Lampe, C., Lenhart, A., & Madden, M. (2015, January 9). Social
Media Update 2014. Retrieved January 29, 2015, from
http://www.pewinternet.org/2015/01/09/social-media-update-2014/.

Durkheim, E. (1893). The Division of Labor in Society (W. D. Halls, Trans., 1984). New York:
Free Press.

Durkheim, E. (1895). Rules of Sociological Method (W. D. Halls, Trans., 1984). New York: Free
Press.

Durkheim, E. (1912). The Elementary Forms of Religious Life (C. Cosman, Trans., 2001). New
York: Oxford University Press.

Dworkin, R. (1977). Taking Rights Seriously. Cambridge, MA: Harvard University Press.

Ekins, E. (2013, September 10). Reason-Rupe September 2013 National Survey. Retrieved
January 5, 2015, from http://reason.com/poll/2013/09/10/reason-rupe-september-
2013-national-surv.

Ellsworth, P., Carlsmith, J., & Henson, A. (1972). The Stare as a Stimulus to Flight in
Human Subjects: A Series of Field Experiments. Journal of Personality and Social
Psychology, 302-311.

Ely, Jr., J. (2008). The Guardian of Every Other Right: A Constitutional History of Property Rights
(3rd ed.). New York: Oxford University Press.

Etzioni, A. (1999). The Limits of Privacy. New York: Basic Books.

Facebook ‘Face Recognition’ Feature Draws Privacy Scrutiny. (2011, June 8). New York
Times. Retrieved January 19, 2015, from
http://www.nytimes.com/2011/06/09/technology/09facebook.html.

Feldman, S. (2008). Free Speech, World War I, and Republican Democracy: The Internal
and External Holmes. First Amendment Law Review, 6.

Fischer, C. (1992). America Calling: A Social History of the Telephone to 1940. Berkeley, CA:
University of California Press.

Fitzgerald, B. (2012, July 9). More Women On Facebook, Twitter And Pinterest Than Men.
Huffington Post.

Fox, S. (2000, January 1). Trust and Privacy Online: Why Americans Want to Rewrite the
Rules. Retrieved January 5, 2015, from
http://www.pewinternet.org/~/media//Files/Reports/2000/PIP_Trust_Privacy_R
eport.pdf.pdf.

Franks, M. A. (2011). Unwilling Avatars: Idealism and Discrimination in Cyberspace.
Columbia Journal of Gender & Law, 20, 224-261.

Franks, M. A. (2012). Sexual Harassment 2.0. Maryland Law Review, 71, 655-704.

Fried, C. (1968). Privacy. The Yale Law Journal, 77(3), 475-493.

Friedman, L. (1993). Crime and Punishment in American History. New York: Basic Books.

Fukuyama, F. (1993). The End of History and the Last Man. New York: The Free Press.

Fung, B. (2013, December 31). Facebook wants to know if you trust it. But it’s keeping all
the answers to itself. Retrieved January 5, 2015, from
http://www.washingtonpost.com/blogs/the-switch/wp/2013/12/31/facebook-
wants-to-know-if-you-trust-it-but-its-keeping-all-the-answers-to-itself/.

Garfinkel, H. (1964). Studies of the Routine Grounds of Everyday Activities. Social Problems,
11, 225-250.

Gates, G. & Newport, F. (2013, February 13). LGBT Percentage Highest in D.C., Lowest in
North Dakota. Retrieved January 11, 2015, from
http://www.gallup.com/poll/160517/lgbt-percentage-highest-lowest-north-
dakota.aspx.

Gavison, R. (1980). Privacy and the Limits of Law. The Yale Law Journal, 89(3), 421-471.

Gerety, T. (1977). Redefining Privacy. Harvard Civil Rights - Civil Liberties Law Review, 12(2),
233-296.

Gernsheim, H. (1969). The History of Photography From the Camera Obscura to the Beginning of the
Modern Era. New York, NY: McGraw-Hill.

Gerstein, R. (1984). Intimacy and Privacy. In Philosophical Dimensions of Privacy (Ferdinand D.
Schoeman ed.). Cambridge: Cambridge University Press.

Gilles, S. (1995). Promises Betrayed: Breach of Confidence as A Remedy for Invasions of
Privacy. Buffalo Law Review, 43, 1-84.

Glanville, J., & Paxton, P. (2007). How do We Learn to Trust? A Confirmatory Tetrad
Analysis of the Sources of Generalized Trust. Social Psychology Quarterly, 70, 230-242.

Glater, J. (2008, March 1). Judge Reverses His Order Disabling Websites. New York Times.

Goffman, E. (1959). The Presentation of Self in Everyday Life. Garden City, NY: Doubleday.

Goffman, E. (1963a). Stigma: Notes on the Management of Spoiled Identity. Englewood Cliffs, NJ:
Prentice-Hall.

Goffman, E. (1963b). Behavior in Public Places; Notes on the Social Organization of Gatherings. New
York: Free Press of Glencoe.

Goffman, E. (1967). Interaction Ritual: Essays On Face-to-Face Behavior. Garden City, NY:
Doubleday.

Goffman, E. (1972). Relations in Public; Microstudies of the Public Order. New York: Basic Books.

Good, D. (1988). Individuals, Interpersonal Relations, and Trust, in Trust: Making and Breaking
Cooperative Relations (Diego Gambetta ed.). New York: B. Blackwell.

Granovetter, M. (1985). Economic Action and Social Structure: The Problem of
Embeddedness. American Journal of Sociology, 91(3), 481-510.

Gross, H. (1967). The Concept of Privacy. New York University Law Review, 42, 34-54.

Gross, R. & Acquisti, A. (2005). Information Revelation and Privacy in Online Social
Networks (The Facebook Case). ACM Workshop on Privacy in the Electronic
Society.

Gurry, F. (1984). Breach of Confidence. Oxford: Clarendon Press.

Guthrie, Jr., J. (1998). Keepers of the Spirits: The Judicial Response to Prohibition Enforcement in
Florida, 1885-1935. Praeger.

Ha, T. T. (2006, April 7). “Star Wars Kid” Cuts a Deal with His Tormentors. Globe & Mail
(Toronto).

Hampton, K. N., Goulet, L. S., Marlow, C., & Rainie, L. (2012, February 3). Why Most
Facebook Users Get More Than They Give. Retrieved January 30, 2015, from
www.pewinternet.org/2012/02/03/why-most-facebook-users-get-more-than-they-
give/.

Hampton, K. N., Goulet, L. S., Rainie, L., & Purcell, K. (2011, June 16). Social Networking
Sites and Our Lives. Retrieved January 10, 2015, from
www.pewinternet.org/2011/06/16/social-networking-sites-and-our-lives/.

Hardin, R. (2000). The Public Trust, in Disaffected Democracies: What’s Troubling the Trilateral
Countries (Susan J. Pharr & Robert D. Putnam ed.). Princeton, NJ: Princeton
University Press.

Hartzog, W. (2014). Reviving Implied Confidentiality. Indiana Law Journal, 89, 763-806.

Heggestuen, J. (2013, December 18). The Results So Far From Holiday Shopping Point To
Huge Gains For Mobile Commerce This Year. Retrieved January 12, 2015, from
http://www.businessinsider.com/mobile-and-e-commerce-growth-in-2013-2013-
12#ixzz3M4PNAqQn.

Hellman, D. (2000). The Expressive Dimension of Equal Protection. Minnesota Law Review,
85, 1-69.

Henderson, D. (1985). Congress, courts, and criminals: The development of federal criminal law, 1801-
1829. Westport, Conn.: Greenwood Press.

Ho, T., & Weigelt, K. (2005). Trust Building Among Strangers. Management Science, 51, 519-
530.

Hogan, B. (2008, January 1). Me, My Spouse, and the Internet: Meeting, Dating, and
Marriage in the Digital Age. Retrieved February 9, 2015, from
http://www.oii.ox.ac.uk/research/projects/?id=47.

Horwitz, M. (1992). The Transformation of American Law, 1870-1960: The Crisis of Legal
Orthodoxy. New York, NY: Oxford University Press.

Hurtado, S., Milem, J., Clayton-Pedersen, A., & Walter, A. (1999). Enacting Diverse Learning
Environments: Improving the Climate for Racial/Ethnic Diversity in Higher Education.
Washington, DC: George Washington University Graduate School of Education and
Human Development.

Inness, J. (1992). Privacy, Intimacy, and Isolation. New York: Oxford University Press.

Johnson, J. (1988). Mixing Humans and Nonhumans Together: The Sociology of a Door-
Closer. Social Problems, 35, 298-310.

Jones, G., & George, J. (1998). The Experience and Evolution of Trust: Implications for
Cooperative Teamwork. The Academy of Management Review, 23(3), 531-546.

Jourard, S. (1966). Some Psychological Aspects of Privacy. Law and Contemporary Problems,
31(2), 307-318.

Joyce, C. (1986). Book Review: Keepers of the Flame: Prosser and Keeton on the Law of
Torts (Fifth Edition) and the Prosser Legacy. Vanderbilt Law Review, 39, 851-876.

Kadri, A. (2012, July 18). Research Shows Texting Now More Popular Than Calling.
Retrieved February 7, 2015, from http://www.bbc.co.uk/newsbeat/18892128.

Kant, I. (1785). Groundwork of the Metaphysics of Morals (T. Abbott, Trans., 2005). eBook for
iPad.

Kateb, G. (2001). On Being Watched and Known. Social Research, 68(1), 269-298.

Kerr, O. (2004). The Fourth Amendment and New Technologies: Constitutional Myths and
The Case for Caution. Michigan Law Review, 102, 801-888.

Kilpatrick, J. (1967, April 1). The Walls Have Eyes, A Review of The Intruders by Senator
Edward V. Long. Retrieved February 2, 2015, from
http://www.unz.org/Pub/SaturdayRev-1967apr01-00031.

Kim, D., Subramanian, S., & Kawachi, I. (2006). Bonding Versus Bridging Social Capital and
Their Associations with Self-Rated Health: A Multilevel Analysis of 40 U.S.
Communities. Journal of Epidemiology & Community Health, 60(2), 116-122.

Kinsey, A. (1953). Sexual Behavior in the Human Female. Philadelphia, PA: Saunders.

Kline, R., & Pinch, T. (1996). Users as Agents of Technological Change: The Social
Construction of the Automobile in the Rural United States. Technology and Culture, 37.

Konvitz, M. (2000). Nine American Jewish Thinkers. New Brunswick, NJ: Transaction.

Konvitz, M. R. (1966). Privacy and the Law: A Philosophical Prelude. Law and Contemporary
Problems 31, 272-280.

Korsgaard, C. (2004, February 6). Fellow Creatures: Kantian Ethics and Our Duties to
Animals. Lecture conducted from The Tanner Lecture on Human Values, Ann
Arbor, MI.

Kosoff, M. (2014, June 13). Here’s How To Opt Out Of Facebook’s New Plan To Sell Your
Browser Data. Retrieved February 7, 2015, from
http://www.businessinsider.com/how-to-opt-out-of-facebook-plan-to-sell-your-
browser-data-2014-6.

Lane, F. (2009). American Privacy: The 400-year History of Our Most Contested Right. Boston,
Mass.: Beacon Press.

LaPierre, W. (2013, July 30). Criticize the Obama Administration, Pay a Terrible Price.
Retrieved February 7, 2015, from http://dailycaller.com/2013/07/30/wayne-
lapierre-criticize-obama-administration-pay-a-terrible-price/.

Latour, B. (1992). Where Are the Missing Masses? The Sociology of a Few Mundane
Artifacts. In Bijker, W. E. and Law, J. (Eds.), Shaping Technology/Building Society: Studies
in Sociotechnical Change (pp. 225-258). Cambridge, MA: MIT Press.

Lastowka, G. (2008). Google’s Law. Brooklyn Law Review, 73, 1327-1410.

Laufer, R., & Wolfe, M. (1977). Privacy as a Concept and a Social Issue: A Multidimensional
Developmental Theory. Journal of Social Issues, 33, 22-42.

Lemley, M. (2014, February 11). Does 'Public Use' Mean the Same Thing It Did Last Year?
Retrieved February 9, 2015, from
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2394153.

Lenhart, A., & Madden, M. (2007, April 18). Teens, Privacy, and Online Social Networks,
Pew Internet and American Life Project. Retrieved February 8, 2015, from
http://www.pewinternet.org/files/old-
media//Files/Reports/2007/PIP_Teens_Privacy_SNS_Report_Final.pdf.pdf.

Lessig, L. (1995). The Regulation of Social Meaning. University of Chicago Law Review, 62, 943-
1044.

Lessig, L. (2002). Privacy as Property. Social Research, 69, 247-269.

Lewis, J., & Weigert, A. (1985). Social Atomism, Holism, And Trust. The Sociological Quarterly,
26(4), 455-471.

Lewis, J., & Weigert, A. (1985). Trust as a Social Reality. Social Forces, 63(4), 967-985.

Lidsky, L. & Cotter, T. (2007). Authorship, Audiences, and Anonymous Speech. Notre Dame
Law Review, 82, 1537-1603.

Linden, G. et al. (2003). Amazon.com Recommendations: Item-to-Item Collaborative
Filtering. IEEE Internet Computing. Retrieved from
http://www.cs.umd.edu/~samir/498/Amazon-Recommendations.pdf.

Liptak, A. & Stone, B. (2008, February 20). Judge Shuts Down Web Site Specializing in
Leaks, Raising Constitutional Issues. New York Times.

Lipton, J. (2010). “We, The Paparazzi”: Developing a Privacy Paradigm for Digital Video.
Iowa Law Review, 95, 919-984.

Locke, J. (1689). Second Treatise of Government. The Gutenberg Project. Retrieved from
http://www.gutenberg.org/files/7370/7370-h/7370-h.htm.

Luhmann, N. (1979). Trust and Power. New York: John Wiley & Sons.

Lukes, S., & Scull, A. (Eds.). (2013). Durkheim and the Law (2nd ed.). New York:
Palgrave Macmillan.

Lynch, R. (2012, August 14). ‘F-Bomb,’ ‘Sexting’ Among New Merriam-Webster Dictionary
Words. Retrieved February 7, 2015, from
http://articles.latimes.com/2012/aug/14/nation/la-na-nn-f-bomb-dictionary-
20120814.

MacKinnon, C. (1989). Toward a Feminist Theory of the State. Cambridge, Mass.: Harvard
University Press.

Macy, M., & Skvoretz, J. (1998). The Evolution of Trust and Cooperation Among Strangers:
A Computational Model. American Sociological Review, 63(5), 638-660.

Madden, M. (2013, May 21). Teens, Social Media, and Privacy, Pew Internet and American
Life Project. Retrieved February 9, 2015, from
http://www.pewinternet.org/2013/05/21/teens-social-media-and-privacy/.

Madden, M., Lenhart, A., Cortesi, S., Gasser, U., Duggan, M., Smith, A., & Beaton, M.
(2013, May 21). Teens, Social Media, and Privacy. Retrieved January 29, 2015, from
http://www.pewinternet.org/files/2013/05/PIP_TeensSocialMediaandPrivacy_PD
F.pdf.

Martin, M. (1991). Hello Central?: Gender, Technology and Culture in the Formation of Telephone
Systems. Montreal: McGill-Queen’s University Press.

Mason, A. (1946). Brandeis: A Free Man’s Life. New York: Viking Press.

Matthews, C. (2014, January 14). More Than 11 Million Young People Have Fled Facebook
Since 2011. Retrieved February 7, 2015, from
http://business.time.com/2014/01/15/more-than-11-million-young-people-have-
fled-facebook-since-2011/.

Matthews, S. (2010). Anonymity and the Social Self. American Philosophical Quarterly, 47(4),
351-363.

Maxwell, R. (1967). Onstage and Offstage Sex: Exploring a Hypothesis. Cornell Journal of
Social Relations, 1, 75-84.

Mcknight, D., Cummings, L., & Chervany, N. (1998). Initial Trust Formation In New
Organizational Relationships. Academy of Management Review, 23(3), 473-490.

Mendoza, M. (2014, February 14). Facebook Adds New Gender Options. Retrieved
February 7, 2015, from http://www.huffingtonpost.com/2014/02/13/facebook-
gender_n_4782477.html.

Mensel, R. (1991). “Kodakers Lying in Wait”: Amateur Photography and the Right of
Privacy in New York, 1885-1915. American Quarterly, 43, 24-45.

Merges, R. P. (2012). Priority and Novelty Under the AIA. Berkeley Technology Law Journal, 27,
1023-1046.

Milem, J., & Hakuta, K. (2000). The Benefits of Racial and Ethnic Diversity in Higher Education, in
Minorities in Higher Education: Seventeenth Annual Status Report (Deborah J. Wilds ed.).

Miles, S. (2003). The Hippocratic Oath and the Ethics of Medicine. New York, NY: Oxford
University Press.

Milliman, R., & Fugate, D. (1988). Using Trust Transference as a Persuasion Technique: An
Empirical Field Investigation. The Journal of Personal Selling and Sales Management, 8(2),
1-7.

Mims, C. (2010, June 2). How iTunes Genius Really Works. Technology Review.

Misztal, B. (1996). Trust in Modern Societies: The Search for the Bases of Social Order. Cambridge,
MA: Polity Press.

Möllering, G. (2001). The Nature of Trust: From Georg Simmel to a Theory of Expectation,
Interpretation and Suspension. Sociology, 35(2), 403-420.

Moran, C. (2014, June 12). Facebook Is Now Selling Your Web-Browsing Data To
Advertisers. Retrieved February 7, 2015, from
http://consumerist.com/2014/06/12/facebook-is-now-selling-your-web-browsing-
data-to-advertisers/.

Mossoff, A. (2002). Locke’s Labor Lost. University of Chicago Law School Roundtable, 9, 155-164.

Mott, F. (1950). American Journalism. New York: The MacMillan Company.

Murray, R. (2014, September 26). Ello Might or Might Not Replace Facebook, But the Giant
Social Network Won’t Last Forever. Retrieved February 7, 2015, from
http://www.theguardian.com/commentisfree/2014/sep/26/ello-might-or-might-
not-replace-facebook-but-the-giant-social-network-wont-last-forever.

Nannestad, P. (2008). What Have We Learned About Generalized Trust, If Anything?
Annual Review of Political Science, 413-436.

Neal, R. W. (2014, January 16). Facebook Gets Older: Demographic Report Shows 3 Million
Teens Left Social Network in 3 Years. International Business Times.

Nelson, D. (2002). Pursuing Privacy in Cold War America. New York: Columbia University
Press.

Newton, K. (1999). Social and Political Trust in Established Democracies, in Critical Citizens: Global
Support for Democratic Governance. New York: Oxford University Press.

Newton, K., & Zmerli, S. (2011). Three Forms of Trust and Their Association. European
Political Science Review, 169-200.

Nissenbaum, H. (1998). Protecting Privacy in the Information Age: The Problem of Privacy
in Public. Law & Philosophy, 17, 559-596.

Nissenbaum, H. (2004). Privacy as Contextual Integrity. Washington Law Review, 79(1), 101-
139.

Nissenbaum, H. (2010). Privacy in Context Technology, Policy, and the Integrity of Social Life.
Stanford, CA: Stanford Law Books.

Nozick, R. (1974). Anarchy, State, and Utopia. New York, NY: Basic Books.

Nye, D. (1990). Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge,
MA: MIT Press.

O’Brien, D. (1902). The Right of Privacy. Columbia Law Review, 2, 437-448.

O’Brien, D. M. (1979). Privacy, Law, and Public Policy. New York, NY: Praeger.

Offe, C. (1999). How Can We Trust Our Fellow Citizens, in Democracy and Trust (Mark Warren
ed.). Cambridge: Cambridge University Press.

Organization for Economic Cooperation and Development. (2010). The Economic and
Social Role of Internet Intermediaries. Retrieved from
http://www.oecd.org/dataoecd/49/4/44949023.pdf.

Packard, V. (1964). The Naked Society. Brooklyn, NY: IG Publishing.

Parsons, T. (1978). Action Theory and the Human Condition. New York: Free Press.

Pasquale, F. (2010). Beyond Innovation and Competition: The Need for Qualified
Transparency in Internet Intermediaries. Northwestern University Law Review, 104, 105-
173.

Pasquale, F. (2014a). The Black Box Society: The Secret Algorithms That Control Money and
Information. Cambridge, MA: Harvard University Press.

Pasquale, F. (2014b). Redescribing Health Privacy: The Importance of Information Policy.
Houston Journal of Health Law and Policy, 14, 95-128.

Pember, D. (1972). Privacy and The Press: The Law, The Mass Media, and The First Amendment.
Seattle: University of Washington Press.

Peskin, M., Markham, C., Addy, R., Shegog, R., Thiel, M., & Tortolero, S. (2013). Prevalence
and Patterns of Sexting Among Ethnic Minority Urban High School Students.
Cyberpsychology, Behavior, and Social Networking, 454-459.

Peterson, A. (2013, December 27). How a failed Supreme Court bid is still causing
headaches for Hulu and Netflix. Retrieved February 7, 2015, from
http://www.washingtonpost.com/blogs/the-switch/wp/2013/12/27/how-a-failed-
supreme-court-bid-is-still-causing-headaches-for-hulu-and-netflix/.

Pinch, T. (2008). Technology and Institutions: Living in a Material World. Theory and Society,
37(5), 461-483.

Pinch, T. (2010). The Invisible Technologies of Goffman’s Sociology: From the Merry-Go-
Round to the Internet. Technology & Culture, 51(2), 409-424.

Pinch, T., & Bijker, W. (1984). The Social Construction of Facts and Artifacts. Social Studies of
Science, 14(3), 399-441.

Pinch, T., & Bijker, W. (2012). The Social Construction of Facts and Artifacts: Or How the Sociology
of Science and the Sociology of Technology Might Benefit Each Other, in The Social Construction of
Technological Systems: New Directions in the Sociology and History of Technology. Cambridge,
MA: MIT Press.

Posner, R. (1976). An Economic Theory of Privacy. Regulation, 2, 19-26.

Posner, R. (1978). The Right to Privacy. Georgia Law Review, 12, 393-435.

Posner, R. (1981). The Economics of Justice. Cambridge, MA: Harvard University Press.

Post, R. C. (1989). The Social Foundations of Privacy: Community and Self in the Common
Law Tort. California Law Review, 77, 957-1010.

Post, R. C. (2001). Three Conceptions of Privacy. Georgetown Law Journal, 89.

Prosser, W. (1960). Privacy. California Law Review, 48, 383-423.

Protalinski, E. (2014, January 29). Facebook Passes 1.23 Billion Monthly Active Users, 945
Million Mobile Users, and 757 Million Daily Users. Retrieved January 30, 2015, from
http://thenextweb.com/facebook/2014/01/29/facebook-passes-1-23-billion-
monthly-active-users-945-million-mobile-users-757-million-daily-users/.

Putnam, R. (2000). Bowling Alone: The Collapse and Revival of American Community. New York,
NY: Simon & Schuster.

Rabin, R. (2008, September 29). You Can Find Dr. Right, With Some Effort. Retrieved
February 9, 2015, from
http://www.nytimes.com/2008/09/30/health/30find.html?_r=2&.

Rachels, J. (1984). Why Privacy is Important. In Schoeman, F. D. (Ed.), Philosophical Dimensions of
Privacy (pp. 290-299). New York: Cambridge University Press.

Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Belknap Press.

Regan, P. M. (1995). Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill:
University of North Carolina Press.

Reiman, J. (1976). Privacy, Intimacy, and Personhood. Philosophy & Public Affairs, 6(1), 26-44.

Reiman, J. (1984). Privacy, Intimacy, and Personhood. In Schoeman, F. D. (Ed.), Philosophical
Dimensions of Privacy (pp. 300-316). New York: Cambridge University Press.

Rempel, J., Ross, M., & Holmes, J. (1985). Trust And Communicated Attributions In Close
Relationships. Journal of Personality and Social Psychology, 49(1), 57-64.

Richards, N. & Solove, D. (2007). Privacy’s Other Path: Recovering the Law of
Confidentiality. Georgetown Law Journal , 96, 123-182.

Richinick, M. (2014, December 14). Newtown Families ‘Dedicated for Life’ to Reducing
Gun Violence. Retrieved February 7, 2015, from
http://www.msnbc.com/msnbc/newtown-families-dedicated-life-sandy-hook-
reducing-gun-violence.

Roman Catholic Canon Law 983 §1. (n.d.). Retrieved February 9, 2015, from
http://www.vatican.va/archive/ENG1104/_P3G.HTM.

Romm, T. (2014, November 19). Uber’s PR Stumble Drives New Privacy Woes. Retrieved
January 5, 2015, from http://www.politico.com/story/2014/11/uber-david-plouffe-
113049.html#ixzz3JdSI1M00.

Rosen, J. (2000). The Unwanted Gaze: The Destruction of Privacy in America. New York, NY:
Random House.

Rosen, J. (2001). The Purposes of Privacy. Social Research, 68, 209-220.

Ryan, J. (2013). A History of the Internet and the Digital Future. London: Reaktion Press.

Rykwert, J. (2001). Privacy in Antiquity. Social Research, 68(1), 29-40.

Sako, M. (1992). Prices, Quality, and Trust: Inter-Firm Relations in Britain and Japan. Cambridge:
Cambridge University Press.

Sandel, M. (1988). Liberalism and the Limits of Justice. New York: Cambridge University Press.

Sandel, M. (1996). Democracy’s Discontent: America in Search of a Public Philosophy. Cambridge,
Mass.: Belknap Press of Harvard University Press.

Scherer, F. M. & Ross, D. (1990). Industrial Market Structure and Economic Performance. New
York: Houghton Mifflin Company.

Schudson, M. (1981). Discovering the News: A Social History of American Newspapers. New York:
Basic Books.

Schwartz, A. (2013). Chicago’s Video Surveillance Cameras: A Pervasive and Poorly
Regulated Threat to Our Privacy. Northwestern Journal of Technology & Intellectual
Property, 11(2), 47-60.

Schwartz, M. (2008, August 3). Inside the World of Online Trolls, Who Use the Internet to
Harass, Humiliate and Torment Strangers: Malwebolence. New York Times
(Magazine).

Scott, G. (1995). Mind Your Own Business: The Battle for Personal Privacy. Plenum.

Scott, J. (2000). Social Network Analysis: A Handbook. London: SAGE Publications.

Sengupta, S. (2013, February 26). Staying Private on the New Facebook. Retrieved February
7, 2015, from
http://www.nytimes.com/2013/02/07/technology/personaltech/protecting-your-
privacy-on-the-new-facebook.html?pagewanted=all&_r=2&.

Sengupta, S. (2013, March 30). Letting Down Our Guard With Web Privacy. Retrieved
February 10, 2015, from http://www.nytimes.com/2013/03/31/technology/web-
privacy-and-how-consumers-let-down-their-guard.html?pagewanted=all&_r=0.

Sandeen, S. (2006). Relative Privacy: What Privacy Advocates Can Learn from Trade Secret
Law. Michigan State Law Review 2006, 667-707.

Shelley, G. et al. (1995). Who Knows Your HIV Status? What HIV+ Patients and Their
Network Members Know About Each Other. Social Networks, 17, 189-217.

Shils, E. (1966). Privacy: Its Constitution and Vicissitudes. Law and Contemporary Problems,
31(2), 281-283.

Simonite, T. (2012, June 13). What Facebook Knows. MIT Technology Review.

Simmel, A. (1971). Privacy Is Not An Isolated Freedom. In Pennock, J. R. & Chapman, J.
W. (Eds.), Nomos XIII: Privacy. New York: Atherton Press.

Simmel, G. (1906). The Sociology Of Secrecy And Of Secret Societies. American Journal of
Sociology, 11(4), 441-498.

Simon, M. (2000). Prosecutorial Discretion and Prosecution Guidelines: A Case Study in
Controlling Federalization. New York University Law Review, 75, 893-964.

Six, F., Nooteboom, B., & Hoogendoorn, A. (2010). Actions that Build Interpersonal Trust:
A Relational Signaling Perspective. Review of Social Economy, 68(3), 285-315.

Slobogin, C. (2002). Public Privacy: Camera Surveillance of Public Places and the Right to
Anonymity. Mississippi Law Journal, 72.

Smith, B. (2014, November 17). Uber Executive Suggests Digging Up Dirt On Journalists.
Retrieved January 5, 2015, from http://www.buzzfeed.com/bensmith/uber-
executive-suggests-digging-up-dirt-on-journalists.

Smith, R. (1990). Liberalism and American Constitutional Law. Cambridge, Mass.: Harvard
University Press.

Solove, D. (2001). Privacy and Power: Computer Databases and Metaphors for Information
Privacy. Stanford Law Review, 53, 1393-1462.

Solove, D. (2002a). Conceptualizing Privacy. California Law Review, 90(4), 1087-1154.

Solove, D. (2002b). Digital Dossiers and the Dissipation of Fourth Amendment Privacy.
Southern California Law Review, 75, 1083-1167.

Solove, D. (2004). The Digital Person: Technology and Privacy in the Information Age. New York:
New York University Press.

Solove, D. (2005). Fourth Amendment Codification and Professor Kerr’s Misguided Call for
Judicial Deference. Fordham Law Review, 74, 747-777.

Solove, D. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review, 154, 477-564.

Solove, D. (2007a). ‘I’ve Got Nothing to Hide’ And Other Misunderstandings of Privacy.
San Diego Law Review, 44, 745-772.

Solove, D. (2007b). The Future of Reputation: Gossip, Rumor, and Privacy on the Internet. New
Haven, CT: Yale University Press.

Solum, L. B. & Chung, M. (2004). The Layers Principle: Internet Architecture and the Law.
Notre Dame Law Review, 79, 815-948.

Strahilevitz, L. (2005). A Social Networks Theory of Privacy. University of Chicago Law Review,
72, 919-988.

Strub, P., & Priest, T. (1976). Two Patterns of Establishing Trust: The Marijuana User.
Sociological Focus, 9, 399-411.

Subramanian, S. (2002). Social Trust and Self-Rated Health in U.S. Communities: A
Multilevel Analysis. Journal of Urban Health, 79, S21-34.

Subramanian, S., Subramania, S., & Kawachi, I. (2001). Does the State You Live In Make a
Difference: Multi-Level Analysis of Self-Rated Health in the U.S. Social Science and
Medicine, 53(1), 9-19.

Sullivan, K. (1998). First Amendment Intermediaries in the Age of Cyberspace. University of
California, Los Angeles Law Review, 45, 1653-1681.

Sunstein, C. (1996). On the Expressive Function of Law. University of Pennsylvania Law Review
144, 2021-2053.

Sunstein, C. (2001). Republic.com. Princeton, NJ: Princeton University Press.

Szabo, C. (2014, November 1). Should Government Get a Master Key to Your
Smartphone? Retrieved February 7, 2015, from http://thehill.com/blogs/congress-
blog/technology/222484-should-governments-get-a-master-key-to-your-
smartphone.

Taylor, P. (Ed.) (2013, June 13). A Survey of LGBT Americans: Attitudes, Experiences and
Values in Changing Times. Retrieved November 11, 2014, from
http://www.pewsocialtrends.org/files/2013/06/SDT_LGBT-Americans_06-
2013.pdf.

Terhune, C. (2008, July 22). They Know What’s In Your Medicine Cabinet. Business Week.

Thomson, J. (1984). The Right to Privacy. In Schoeman, F. D. (Ed.), Philosophical Dimensions of
Privacy (pp. 272-289). New York: Cambridge University Press.

Tolbert, C., Lyson, T., & Irwin, M. (1998). Local Capitalism, Civic Engagement, and
Socioeconomic Well-Being. Social Forces, 77(2), 401-427.

Toulson, R. G. & Phipps, C. M. (1996). Confidentiality. London, UK: Sweet and Maxwell.

Towle, A. (2011, February 17). Facebook Adds ‘Civil Union’ and ‘Domestic Partnership’ to
User ‘Relationship Status’ Options. Retrieved February 5, 2015, from
http://www.towleroad.com/2011/02/facebook-adds-civil-union-and-domestic-
partnership-to-user-relationship-status-options.html.

Transcript of the New York City Taxi and Limousine Commission. (2014, October 16).
Retrieved January 5, 2015, from
http://www.nyc.gov/html/tlc/downloads/pdf/transcript_10_16_14.pdf.

Tully, J. (1993). An Approach to Political Philosophy: Locke in Contexts. Cambridge: Cambridge
University Press.

Tverdek, E. (2008). What Makes Information “Public”? Public Affairs Quarterly, 22(1), 63-77.

Uslaner, E. (1999). Democracy and trust (Mark E. Warren ed.). Cambridge: Cambridge
University Press.

Uslaner, E. (2000). Producing and Consuming Trust. Political Science Quarterly, 115, 569-590.

Uslaner, E. (2002). The Moral Foundations of Trust. New York: Cambridge University Press.

Uslaner, E. (2014). The Real Reason Why Millennials Don’t Trust Others. Washington Post.

Uslaner, E., & Conley, R. (2003). Civic Engagement and Particularized Trust: The Ties that
Bind People to Their Ethnic Communities. American Politics Research, 31, 331-360.

Van Den Haag, E. (1971). On Privacy. In Pennock, J. R. and Chapman, J. W. (Eds.), Nomos
XIII: Privacy. New York: Atherton Press.

Vaughan, D. (1990). Uncoupling: Turning Points in Intimate Relationships. New York, NY:
Vintage.

Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at
NASA. Chicago, IL: University of Chicago Press.

Veenstra, G. (2000). Social Capital, SES, and Health: An Individual-Level Analysis. Social
Science & Medicine, 50(5), 619-629.

Volokh, E. (1995). Cheap Speech and What It Will Do. Yale Law Journal, 104, 1805-1850.

Volokh, E. (2012). Unradical: “Freedom of the Press” as the Freedom of All to Use Mass
Communication Technology. Iowa Law Review, 97, 1275-1282.

Waldman, A. (2012). Tormented: Antigay Bullying in Schools. Temple Law Review, 84, 385-
442.

Waldman, A. (2013). Durkheim’s Internet: Social and Political Theory in Online Society.
New York University Journal of Law and Liberty, 7, 345-430.

Waldman, A. (2014, November 19). The Case for Uber Data Sharing. Retrieved January 5,
2015, from http://www.gothamgazette.com/index.php/opinions/5443-the-case-for-
uber-data-sharing-ari-waldman.

Waldman, A. (2015). Destabilizing Privacy: How Trust Helps Define Our Expectations
Under the Fourth Amendment. Manuscript submitted for publication.

Walker, S. (2012). Presidents and Civil Liberties from Wilson to Obama: A Story of Poor Custodians
(2nd ed.). New York: Cambridge University Press.

Walterscheid, E. (2002). The Nature of the Intellectual Property Clause. Buffalo, NY: W.S. Hein &
Co.

Warren, S., & Brandeis, L. (1890). The Right to Privacy. Harvard Law Review, 4(5), 193-220.

Weber, M. (1958). The Protestant Sects and the Spirit of Capitalism. In H. Gerth & C. W. Mills
(Eds.), From Max Weber: Essays in Sociology. New York: Oxford University Press,
Galaxy.

Weimann, G. (1983). The Strength of Weak Conversational Ties in the Flow of Information
and Influence. Social Networks, 5(3), 245-267.

Welch, M., Sikkink, D., & Loveland, M. (2007). The Radius of Trust: Religion, Social
Embeddedness and Trust in Strangers. Social Forces, 86(1), 23-46.

Westin, A. (1965). Privacy in Western History: From the Age of Pericles to the American Republic.
Cambridge, MA: Harvard University Press.

Westin, A. (1967). Privacy and Freedom. New York, NY: Atheneum.

White, G. E. (2006). Tort Law in America: An Intellectual History. New York: Oxford University
Press.

White, H. (1951). The Right to Privacy. Social Research, 18(2), 171-202.

Whiteley, P. (1999). The Origins of Social Capital. In W. van Deth (Ed.), Social Capital and
European Democracy. London: Routledge.

Williams, M. (2001). In Whom We Trust. The Academy of Management Review, 26(3), 377-396.

Williams, R. (1985). Keywords: A Vocabulary of Culture and Society. New York: Oxford
University Press.

Winn, P. (2002). Confidentiality in Cyberspace: The HIPAA Privacy Rules and the Common
Law. Rutgers Law Journal, 33.

Wortham, J. (2012, December 18). Facebook Responds to Anger Over Proposed Instagram
Changes. Retrieved February 7, 2015, from
http://www.nytimes.com/2012/12/19/technology/facebook-responds-to-anger-
over-proposed-instagram-changes.html.

Yamagishi, T., & Yamagishi, M. (1994). Trust and commitment in the United States and
Japan. Motivation and Emotion, 18, 129-166.

Yamagishi, T., Jin, N., & Miller, A. (1998). In-group Bias and Culture of Collectivism. Asian
Journal of Social Psychology, 1(3), 315-328.

Yoo, C. (2010). Free Speech and the Myth of the Internet as an Unintermediated
Experience. George Washington Law Review, 78, 697-773.

Zarsky, T. (2004). Thinking Outside the Box: Considering Transparency, Anonymity, and
Pseudonymity as Overall Solutions to the Problems of Information Privacy in the
Internet Society. University of Miami Law Review, 58, 991-1044.

Zimmerman, D. L. (1983). Requiem for a Heavyweight: A Farewell to Warren and
Brandeis’s Privacy Tort. Cornell Law Review, 68, 291-365.
Zimmerman, R. & Whittaker, S. (2000). Good Faith in European Contract Law. New York:
Cambridge University Press.

Zittrain, J. (2000). What the Publisher Can Teach the Patient: Intellectual Property and
Privacy in an Era of Trusted Privication. Stanford Law Review, 52, 1201-1250.

Zittrain, J. (2006). The Generative Internet. Harvard Law Review, 119, 1974-2040.

APPENDIX
Sharing on Facebook, A Survey

This is a survey about what you share on Facebook. It is part of a doctoral
dissertation at Columbia University's Department of Sociology. The goal is to
understand some of the motivations behind sharing and to learn what it is that
Facebook users like to share. Answer honestly.

The survey proceeds in three stages. First, I want to know a little about you and your
internet use. None of your responses here will identify who you are. This is just basic
usage and demographic data. Second, I would like to learn a little about whether and
how much you tend to trust or mistrust others. Third, and finally, you will answer
questions about what kinds of things you share on Facebook.

Please note that the entire survey is anonymous. All responses are logged directly
into a Google spreadsheet when you click "Send Form." It is not possible to identify
respondents.

Thank you for your help!

Part I: Who Are You


First, tell me a little bit about who you are. Please note that all responses are anonymous and it is
impossible to discover your identity through your responses.

How old are you?

< 18

18-25

26-35

36-45

46-55

> 55

Gender: Do you identify as ...

Male

Female

Transgender

Education: What is the highest level of education you have achieved?

Less than high school

High school or GED

Some college

College

Master's degree

Doctoral degree

Professional degree

Sexual Orientation: Do you identify as ...

Heterosexual

Gay or Lesbian

Bisexual

Queer

Other

On which of the following social networks do you maintain an active profile? Please choose all that
apply. “Active” means that you update or visit regularly.

Facebook

Twitter

Instagram

LinkedIn

Pinterest

OK Cupid

Match.com

Google+

Flickr

MySpace

Other

On average, how long do you spend on the internet per day?

< 1 hour

1-3 hours

> 3 hours

Part II: Trust
I’d like to know your opinions about trust and whether you are likely to trust certain groups of people.
Don’t worry too much about a particular definition of the word “trust.” Use the definition you feel fits
best in the following sentences: “I trust that Lucy will keep my secret” or “I trust you to do the right
thing.”

Generally speaking, would you say that most people can be trusted or that you can’t be too careful
in dealing with strangers?

Most people can be trusted

You can’t be too careful in dealing with strangers

On a scale of 1 to 10, with 1 being “Not at all” and 10 being “Absolutely,” please rate how much you
trust Facebook.

1 2 3 4 5 6 7 8 9 10

I DO NOT AT ALL trust Facebook          I ABSOLUTELY trust Facebook

You would trust Facebook more if ___________. Click the FOUR that are most important to your trust
of Facebook. NOTE: Facebook has already taken some of these steps, but not all.

all your close friends also used Facebook

it never shared any of your data/information with third parties

it allowed people to use screen names different from their real names

it better identified “People You May Know”

it informed you when and with whom it would share your information

its privacy policy was clearly stated in plain English

it tried to sell you fewer things

it worked more closely with police to stop the proliferation of hate and harassment

it asked for your consent every time it shared your personal information

it allowed you to hide certain pictures and posts from certain people

it allowed fewer commercial/sponsored posts on your Timeline

Other:
Part III: What Do You/Would You Share on Facebook
The next series of questions asks whether you share or make available for others to see the given
information on Facebook. If you do share the given information or make it available to others, answer
YES. If not, answer NO.

Do you share ... professional accomplishments?

Yes

No

N/A

Do you share ... where and when you went to college or high school?

Yes

No

N/A

Do you share ... your personal email address?

Yes

No

N/A

Do you share ... your location, either via "check-ins" or picture geolocation tagging?

Yes

No

N/A

Do you share ... your telephone number?

Yes

No

N/A

Do you share ... general or specific details about your dating or romantic life?

Yes

No

N/A

Do you share ... intimate or suggestive pictures of yourself?

Yes

No

N/A

Do you share ... other "selfies"?

Yes

No

N/A

Do you share ... your sexual orientation?

Yes

No

N/A

Do you share ... your relationship status?

Yes

No

N/A

Do you share ... the name of your significant other?

Yes

No

N/A

Do you share ... your place of employment?

Yes

No

N/A

Do you share ... critical comments about others?

Yes

No

N/A

Do you share ... pictures of or information about your children?

Yes

No

N/A

Do you share ... pictures of yourself doing something illegal or stigmatized?

Yes

No

N/A

Do you share ... pictures of you kissing another person?

Yes

No

N/A

Do you share ... pictures of yourself drunk or drinking alcohol in excess?

Yes

No

N/A

Do you share ... news items?

Yes

No

N/A

Do you share ... personal opinions about the news?

Yes

No

N/A

Do you share ... jokes or funny videos?

Yes

No
N/A

Do you share ... the names of your family members?

Yes

No

N/A

Do you share ... your home address?

Yes

No

N/A

Do you share ... that you're feeling sad or depressed?

Yes

No

N/A

Do you share ... your birth date and year?

Yes

No

N/A

Do you share ... medications you take or medical conditions you have?

Yes

No

N/A

Part IV: Sharing With Strangers

This final part of the survey asks about your willingness to accept Facebook “friend requests” from
strangers, or persons you have never met offline, and what factors make it more or less likely that you
would accept such a request.

Have you ever or would you accept a "friend request" from a stranger, i.e., someone you do not know
or have never met offline?

Yes

No

If you have or would potentially accept “friend requests” from strangers, would knowing the following
pieces of information make it more likely, less likely, or have no impact on your decision to accept a
given “friend request”?

Please select your answer on the following scale: (1) Much less likely (2) Somewhat less likely (3)
Neither more nor less likely, i.e., no impact (4) Somewhat more likely (5) Much more likely

Evidence of active participation on Facebook.


1 2 3 4 5

Much less likely Much more likely

The stranger would be a good professional contact.


1 2 3 4 5

Much less likely Much more likely

Large number of mutual friends.


1 2 3 4 5

Much less likely Much more likely

Physical attractiveness.
1 2 3 4 5

Much less likely Much more likely

You both attended the same college or university.


1 2 3 4 5

Much less likely Much more likely

Same first or last name, though no familial relation.


1 2 3 4 5

Much less likely Much more likely

Same gender.
1 2 3 4 5

Much less likely Much more likely

Same hometown.
1 2 3 4 5

Much less likely Much more likely

Same location.
1 2 3 4 5

Much less likely Much more likely

Same profession.
1 2 3 4 5

Much less likely Much more likely

Same racial or ethnic background.


1 2 3 4 5

Much less likely Much more likely

The stranger is friends with your close friends.


1 2 3 4 5

Much less likely Much more likely

Similar hobbies, interests, and likes.


1 2 3 4 5

Much less likely Much more likely

Same sexual orientation.


1 2 3 4 5

Much less likely Much more likely

Same or similar political views.


1 2 3 4 5

Much less likely Much more likely

You assume you will never meet the stranger.


1 2 3 4 5

Much less likely Much more likely

