
Chapter Title: PRIVACY AS AN INFORMATION PRODUCT

Book Title: Antitrust Law in the New Economy


Book Author(s): MARK R. PATTERSON
Published by: Harvard University Press

Stable URL: http://www.jstor.com/stable/j.ctvc2rkm6.12


This content downloaded from 13.234.96.8 on Thu, 04 Jun 2020 14:22:41 UTC. All use subject to https://about.jstor.org/terms
7
Privacy as an Information Product

In a recent speech that garnered much publicity, Tim Cook, the CEO of Apple, addressed the business practices of other Silicon Valley firms.1 Their business models, he said, involve “gobbling up everything they can learn about you and trying to monetize it.” He said
that Apple takes a different approach: “We believe the customer should
be in control of their [sic] own information. You might like these so-
called free services, but we don’t think they’re worth having your email,
your search history and now even your family photos data mined and
sold off for god knows what advertising purpose. And we think some
day, customers will see this for what it is.”2
However, many commentators argue that consumers are willing to
give up their information in exchange for free, and better, services.
Advertisers pay various online service providers for access to users.3
More to the point, the advertisers are willing to pay more if the service
provider is able to deliver their advertising to users who have been iden-
tified as more likely to respond to it. Personal information makes that
targeting possible, and thus makes online service providers profitable.
And the more money the service provider makes from advertising, the better the services it can provide its users at no (monetary) charge.4 None of this necessarily implies that the users have intentionally
made a bargain to give up their information for better free services, but
some evidence indeed suggests that they are willing to do so.5
On the other hand, a recent report from the Annenberg School
for Communication at the University of Pennsylvania suggests that customers agree with Cook regarding what the report calls the “tradeoff
fallacy.”6 The authors of the report asked Americans a variety of ques-
tions about the data-for-discounts exchange and found that the respon-
dents do not necessarily accept the tradeoff willingly. For example, when
asked, “If companies give me a discount, it is a fair exchange for them to
collect information about me without my knowing it,” 91 percent re-
sponded “no.” Of course, consumers might in fact know about such ex-
changes, so the authors also asked, “Let’s say [the respondent’s usual]
supermarket says it will give you discounts in exchange for its collecting
information about all your grocery purchases. Would you accept the
offer or not?” To that question, 52 percent responded “no,” but the au-
thors also found that many more said they would reject the deal when
asked about specific assumptions that the supermarket might make
about them in response to the data gathered. The authors’ conclusion
was that “the survey reveals most Americans do not believe that ‘data for
discounts’ is a square deal.”
The references to whether giving up data is “worth” what one re-
ceives or whether the exchange is a “square deal” suggest a commercial
transaction. Indeed, Margrethe Vestager, the European Commissioner
for competition, has said that “it’s clear that these are business transac-
tions, not free giveaways.”7 There are human-rights implications of pri-
vacy as well, of course, and much of the legal commentary regarding
privacy focuses primarily on those issues. However, there is increasing
interest and focus on privacy from an antitrust point of view. Not sur-
prisingly, given the novelty of the issues, some argue that the acquisition
of private data by sellers presents a serious antitrust problem, others ar-
gue that it does not, and others take a middle ground. For present pur-
poses, the most interesting perspective, perhaps, is the one that considers
privacy, and “personal information,”8 as a commercial good in itself.
At least three competition issues can arise in the context of “private”
consumer information. One is the failure of competition, either with re-
spect to firm conduct or as the product of mergers, to cause firms to re-
spond to consumers’ privacy concerns, which is a lessening of the quality
provided to consumers. The problem here is not the use of the informa-
tion per se, but the failure of firms to provide consumers with adequate
means of protecting it.9 The second problem is that of price discrimina-
tion, in which a seller’s detailed knowledge of consumers’ personal infor-
mation and preferences allows the seller to charge a price calculated to
each consumer’s willingness to pay. As a result, much or all of the value of the transaction is transferred from consumers to sellers. The third issue is the difficulty for sellers of competing in a market in which a dominant firm (Google, for example) has detailed knowledge of the consumers for which firms are competing.

Privacy and Personal Information as a Good


The tradeoff perspective treats information about consumers as a com-
mercial good. From this perspective, consumers exchange information
about themselves for services, like search services. But like other infor-
mation that is the subject of this book, the personal information of con-
sumers both constitutes a market in itself and is information that affects
other markets. As Maureen Ohlhausen and Alexander Okuliar write,
“[i]n the online commercial world, consumer data is both an input for
other online services and a commodity asset for advertisers.”10 Unlike
other information in this book, though, the problem with personal information is not so much its accuracy (though, of course, that is important in other contexts) but its availability.
Information about consumers is in fact widely sold in market transac-
tions. So-called “data brokers” buy data from a variety of sources and
resell it for marketing and other purposes. In 2014, the FTC issued a re-
port entitled Data Brokers: A Call for Transparency and Accountability.11
The report describes data brokers as “companies whose primary business
is collecting personal information about consumers from a variety of
sources and aggregating, analyzing, and sharing that information, or in-
formation derived from it, for purposes such as marketing products, veri-
fying an individual’s identity, or detecting fraud.”12
Consumers, however, generally provide their data for free, or at least
in exchange for services without an exchange of cash. Generally speak-
ing, data brokers do not obtain information directly from consumers.
Instead, as the FTC report describes, they rely on other sources, such as
federal and state governments; publicly available sources, such as social
media sites; and commercial sources. From the latter sources, as the re-
port outlines, the brokers can obtain information including “the types of
purchases (e.g., high-end shoes, natural food, toothpaste, items related
to disabilities or orthopedic conditions), the dollar amount of the pur-
chase, the date of the purchase, and the type of payment used.”13 They
also obtain information from other data brokers, and the result is to al-
low them to form fairly detailed pictures of individual consumers.


All of this information can then be used to commercial effect. A primary use is to predict consumer behavior. “The data brokers can identify a group of consumers that has already bought the product in which
the data broker wants to predict an interest, analyze the characteristics
the consumers share, and use the shared characteristic data to create a
predictive model to apply to other consumers.”14 Or the broker, which
typically uses data to divide consumers into segments of particular types,
can use those segments predictively, perhaps in cooperation with a re-
tailer: “The retailer gives the data broker its customer list and the data
broker compares its stock segments, such as ‘Persons Interested in High-
End Clothing’ or ‘Sophisticated Shoppers,’ to the retailer’s existing list of
customers to predict which of the retailer’s customers will be interested
in the new fashion line.”15
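The segment-matching practice the FTC report describes can be sketched as a toy model. The segment names come from the report, but the consumer identifiers and the overlap test are hypothetical illustrations, not any actual broker's method:

```python
# Toy sketch of the broker-retailer list comparison described in the FTC report.
# Consumer IDs and the matching rule are hypothetical.

broker_segments = {
    "Persons Interested in High-End Clothing": {"c1", "c3", "c7"},
    "Sophisticated Shoppers": {"c2", "c3", "c8"},
}

retailer_customers = {"c1", "c2", "c3", "c4"}

def likely_interested(segments, customers):
    """Flag the retailer's customers who appear in any predictive segment."""
    flagged = set()
    for members in segments.values():
        flagged |= members & customers  # intersect each segment with the list
    return flagged

print(sorted(likely_interested(broker_segments, retailer_customers)))
# → ['c1', 'c2', 'c3']  (c4 appears in no segment, so it is not flagged)
```

In practice, of course, brokers match on far richer data than set membership, but the basic operation — intersecting stock segments with a client's customer list — is the one the report describes.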
These practices can be of value to consumers, of course. Many con-
sumers prefer to see advertisements that are tailored to their particular
interests. But whether consumers value tailored advertising more than they value their privacy is simply not clear. And there are other costs to
consumers resulting from sellers’ possession of detailed information
about their preferences as well. Some of those, such as the risks of data
breach, will not be considered here, because the focus of this book is on
competition, but those costs should be significant parts of the broader
privacy analysis.16

Privacy Per Se
One perspective on privacy as a good is that it is simply a component of
the service provided by, say, a search engine.17 In this respect, privacy
would be just an element of nonprice competition, as is quality or dura-
bility. In fact, privacy could be seen as an element of quality.18 If consum-
ers want a great deal of privacy, and they receive it from a service pro-
vider, then they are receiving a high-quality service. If instead they
receive less privacy than they would prefer, they are receiving a low-quality good. Because it is generally recognized that nonprice factors like
quality are as important as price, a loss of privacy should be a consumer
harm that antitrust law deems important.19
Geoffrey Manne and Ben Sperry have resisted this equation of less
privacy with consumer harm that presents an antitrust issue. One of
their points is that even if some consumers want more privacy, others are
willing to exchange it for value received through better targeting from sellers. That is surely correct, but it is no justification for a hands-off approach by antitrust, as discussed below. The more surprising argument
they make is that less privacy is not a problem, even for consumers who
want more, unless the loss causes some other, presumably tangible,
harm:
[C]laims that concentration will lead to a “less-privacy-protective
structure” for online activity are analytically empty. One must
make out a case, at minimum, that a move to this sort of structure
would reward the monopolist in some way, either by reducing its
costs or by increasing revenue from some other source.20
There are at least two flaws in this passage. The first is that a loss of
privacy is in fact a transfer from consumers to producers. If producers
take valuable consumer information without payment, or even without
payment at a competitive price, then producers gain at the expense of
consumers. And if it is market concentration—a “less-privacy-protective
structure”—that is the cause of this exploitation, then it is a competition
issue, not (only) a consumer protection one. 21 What appears to be going
on in this passage is a rejection of the view that consumers are entitled
simply to want privacy; the authors want consumers to justify their desire for it.22 But Joseph Farrell has pointed out that this is not the usual
approach in antitrust, which simply takes consumer demand as given:
Thus it’s jarring for an (or this) economist to hear the notion that
economics pushes public policy on privacy towards focusing on
quantifiable, tangible, and verifiable specific harms from the loss
of privacy, a notion that is also reflected in some court cases. Eco-
nomics sometimes views intermediate goods that way, but for fi-
nal goods, it normally takes tastes as given and asks how well a
market or an economic system satisfies those tastes.23
The second flaw is that there is no requirement that harms to consumers
also provide benefits to producers to raise an antitrust issue. In the
merger context, even the possibility of unilateral harm from a firm large
enough to exploit consumers is of concern, and the U.S. agencies’ Merger
Guidelines refer specifically to nonprice harms.24 And even liability for
single-firm conduct turns only on exclusionary conduct and the lack of
a legitimate business purpose, not on increased profits. It has long been
accepted that “[t]he best of all monopoly profits is a quiet life.”25
But Manne and Sperry are surely correct that privacy harms can cause other benefits. As they say, the relevant question is whether, on balance, the seller’s conduct is anticompetitive:

At the same time, alleged harms arising from increased sharing of data with third parties (typically advertisers) [are] necessarily
ambiguous, at best. While some consumers may view an increase
in data sharing as a degradation of quality, the same or other con-
sumers may also see the better-targeted advertising such sharing
facilitates as a quality improvement, and in some cases “degraded”
privacy may substitute for a (pro-competitive) price increase that
would be far less attractive.26

The upshot of this point, though, is simply that antitrust law must bal-
ance harms and benefits, not look only at one side of the ledger. That is
an uncontroversial claim, but hardly one that eliminates privacy harms
from antitrust consideration.
In fact, however, the U.S. agencies have avoided informational qual-
ity issues even where they are more straightforward than privacy. This
was particularly significant in the wave of radio mergers in the United
States in the late 1990s. The Department of Justice focused almost ex-
clusively on the price effects of the mergers on advertisers, rather than on
the quality effects on listeners.27 And it did so despite the fact that listen-
ers care about radio programming more obviously than they care about
privacy protections.
The U.S. antitrust agencies’ relative neglect of information issues con-
tinued in the FTC’s recent review of the Google-DoubleClick merger.
The Commission was rather cryptic about its consideration of privacy
issues in that case:

Although such issues may present important policy questions for


the Nation, the sole purpose of federal antitrust review of mergers
and acquisitions is to identify and remedy transactions that harm
competition. Not only does the Commission lack legal authority
to require conditions to this merger that do not relate to antitrust,
regulating the privacy requirements of just one company could
itself pose a serious detriment to competition in this vast and rap-
idly evolving industry. That said, we investigated the possibility
that this transaction could adversely affect non-price attributes of
competition, such as consumer privacy. We have concluded that
the evidence does not support a conclusion that it would do so. We have therefore concluded that privacy considerations, as such, do not provide a basis to challenge this transaction.28

The FTC’s decision not to focus on privacy issues was especially striking
given the call for such a focus in the dissenting statement of Commissioner
Pamela Jones Harbour:

Traditional competition analysis of Google’s acquisition of Dou-


bleClick fails to capture the interests of all the relevant parties.
Google and DoubleClick’s customers are web-based publishers
and advertisers who will profit from better-targeted advertis-
ing. From the perspective of these customers, the more data the
combined firm is able to gather and mine, the better (assuming,
as the majority presumably does, that the financial benefits of
highly-targeted advertising outweigh any harm caused by reduced
competition). But this analysis does not reflect the values of the
consumers whose data will be gathered and analyzed.29

The approach in Europe has been similar, though European Commissioner Vestager has signaled greater attention to the issue.30 In
the Commission’s 2014 review of the acquisition by Facebook of
WhatsApp, it specifically declined antitrust consideration of privacy
issues:

For the purposes of this decision, the Commission has analysed potential data concentration only to the extent that it is likely to
strengthen Facebook’s position in the online advertising market or
in any sub-segments thereof. Any privacy-related concerns flow-
ing from the increased concentration of data within the control
of Facebook as a result of the Transaction do not fall within the
scope of the EU competition law rules but within the scope of the
EU data protection rules.31

To be sure, quantifying harm from privacy losses is a challenging task. But the existence of firms like Datacoup, which pay for data, suggests that the task is possible. Datacoup offers to pay individuals for their information, with individuals able to receive up to $10 per month.32
Bing’s rewards program offers similar possibilities.33 Although these
forms of compensation are probably better measures of the value of in-
formation to search engines than they are of the value to users of keeping
the information private, they seem at least to indicate that when users

169

This content downloaded from


13.234.96.8 on Thu, 04 Jun 2020 14:22:41 UTC
All use subject to https://about.jstor.org/terms

3rd Pass Pages


I n f o r m at i o n p r o b l e m s a n d a n t i t r u s t

are giving up their data without payment, they are receiving less from
the search engine than effective competition would provide them.

Privacy and Consumer Protection


Of course, for the market for personal information to work, the value of
information would have to be determinable by consumers, not just anti-
trust courts. Somewhat remarkably, Manne and Sperry express no con-
cerns regarding this requirement: “Consumers, with the assistance of
consumer protection agencies like the FTC itself, are generally able to
assess the risks of disclosure or other misuse of their information, and to
assess the costs to themselves.” Katherine Strandburg states, more plau-
sibly, that “Internet users do not know the ‘prices’ they are paying for
products and services supported by behavioral advertising because they
cannot reasonably estimate the marginal disutility that particular in-
stances of data collection impose on them.”34
Surely Strandburg is correct. As will be discussed below, such information can be used to price-discriminate among buyers, charging higher-demand buyers higher prices. Therefore, to determine what they are
“paying” by giving up their personal information, consumers would have
to determine what future purchases they will make, what the overcharges
will be, etc. To do this with any accuracy seems impossible, as Strandburg
explains:35

Significantly, while a consumer who barters away the harvest of her vegetable garden to a neighbor must estimate the expected
value of the uses she herself might make of the vegetables in order
to decide whether to make the trade, she knows what vegetables
she has traded away and does not much care what the neighbor
does with them. The information needed to assess the expected
disutility from a “payment” in data is of a different order. An In-
ternet user’s potential disutility from data collection is almost en-
tirely due to future uses or misuses of the data to which the actions
of the data recipient, in combination with the actions of unknown
others, might expose the user. Internet users do not know, and of-
ten cannot know, the likelihood or magnitude of various potential
disutilities that might result from a particular stream of data col-
lection. They do not know, and generally cannot know, sufficient
detail about what data is collected by the companies with which they interact, how it will be secured, and what uses eventually will
be made of it. If user data is a “payment” for online services, one
might call it a “credence payment,” since users cannot determine
the price they are paying either before or after they have paid it.36

Strandburg therefore advocates a ban on behavioral advertising. She states that “[t]he most straightforward approach would be to ban data collection
and processing for behaviorally targeting advertising,”37 but she also con-
siders other options, such as a “Do Not Track” option through which
consumers could individually opt out of the collection of their data.
Frank Pasquale offers a somewhat different approach to consumer pro-
tection in the privacy market. He too thinks it unlikely that individual
exchanges of data for services will function well as a market, because
“[w]hen a service collects information about a user, the situation is so far
from the usual arm’s-length market transaction that transactional ap-
proaches can only be misleading.”38 He therefore proposes a regime that
would depend on disclosure and monitoring of the uses of personal infor-
mation, in an approach that echoes the disclosure remedy for information
asymmetries discussed in the previous chapter. But the required disclo-
sures would not be related to individual transactions with consumers but
to disclosures of data. Modeling disclosures on those in the health-care
industry, Pasquale argues that “it would not be unreasonable to expect big
data firms to make ‘accountings of disclosures’ of the data they hold in the
same way that entities covered under the Health Insurance Portability and
Accountability Act (‘HIPAA’) are required to.”39
This approach is echoed in the FTC report on data brokers. The
Commission’s recommendations regarding the use of data for marketing
are to require data brokers “to give consumers (1) access to their data
and (2) the ability to opt out of having it shared for marketing pur-
poses.”40 Although, in itself, this approach does not seem clearly one of
either antitrust or consumer protection, the FTC goes further to recom-
mend that “legislation could also require the creation of a centralized
mechanism, such as an Internet portal, where data brokers can identify
themselves, describe their information collection and use practices, and
provide links to access tools and opt outs.”41 These sorts of disclosure
requirements seem to reflect an approach based on consumer protection
rather than antitrust.
A concern that straddles the antitrust–consumer protection line has
been raised by the late J. Thomas Rosch, a former FTC Commissioner. When the FTC announced its decision to close its investigation into possible Google search bias,42 Commissioner Rosch issued a separate statement.43 Among the concerns he expressed was that Google might use
“‘half-truths’—for example, that its gathering of information about the
characteristics of a consumer is done solely for the consumer’s benefit.”44
Commissioner Rosch’s concern, though, was not the usual consumer-protection one of gaining an advantage in a particular transaction, but
rather that Google might be gathering the information “to maintain a
monopoly or near-monopoly position.”45 Although he did not articulate
this theory more fully, it points to a relationship between control over
personal information and antitrust.

Privacy and Antitrust


As Ohlhausen and Okuliar say, “privacy protection has emerged as a
small, but rapidly expanding, dimension of competition among digital
platforms.”46 Nevertheless, they express doubts about using antitrust as
a legal remedy for privacy violations. They say that “using the modern
antitrust laws, which are empirically-focused on economic efficiency, to
remedy harms relating to normative concerns about informational pri-
vacy contradicts the specialized nature of these laws and risks distorting
them in ways that would leave both the law and consumers worse off.”47
Although they do not explain why “normative” concerns about privacy
are not just “non-price” components of straightforward business trans-
actions, they conclude that consumer protection, not antitrust, is the
proper source for privacy protection.
This focus on consumer-protection remedies, rather than antitrust ones, perhaps derives from the limited nature of the goals of antitrust. In the merger
context of Google/DoubleClick, the potential antitrust harm at issue would
be a less competitive market. The fundamental privacy problem, however, is
not primarily one that would be worsened by the merger of two such firms.
The lead author of the Annenberg Center study described the problem in
this way: “But what is really going on is a sense of resignation. Americans
feel that they have no control over what companies do with their informa-
tion or how they collect it.”48 That, presumably, is a harm Americans are
already suffering, and it is not clear that it would be worsened by the merger
of two large firms. Users have no control over their information now, and
they will have no control over it after the merger. As Scott McNealy fa-
mously put it, “You have zero privacy anyway. Get over it.”49


But as Ohlhausen and Okuliar say, firms are competing on privacy, so to the extent that firms’ actions lessen privacy protections, there seems
no reason it would not be an antitrust matter. Search engines like
DuckDuckGo, for example, use the fact that they do not gather con-
sumer information as a marketing point.50 If DuckDuckGo were ac-
quired by Bing or Google, and it were folded into the acquirer’s own
search engines or its no-data policy were changed, that would presum-
ably be the same sort of harm that is suffered whenever any “maverick”
firm is acquired and its disruptive effect in the market eliminated.51
There are other sorts of conduct that also threaten what could be
called privacy competition. There are various applications that allow
users to protect their privacy.52 Some allow users to monitor how their
data is being used.53 More to the point, from a competition point of
view, some of these apps have been blocked by Google at its Play Store,54
and this has resulted in a competition-law complaint to the European
Commission.55 The reason offered for the blocking is that the apps inter-
fere with other apps, which is necessary if the blocking apps are to
achieve their goals of protecting privacy. Some contend, however, that
Google blocks the apps because they, to some extent, put at risk its busi-
ness model of selling ads. If that were true, one could view the blocking,
or, actually, both versions of blocking—the blocking of data gathering,
and Google’s blocking of the apps—as competition with respect to over-
all business models.
That is not to say, though, that business conduct could not also cause
more specific harms. In this sense, Manne and Sperry are correct in that
there is value in articulating the specific harms caused by a lessening of
privacy. In that respect, it is important to consider the specific harms
that a loss of privacy is said to cause, not just the general loss in quality
that consumers suffer by being provided lower-quality services. There
are two such harms to which commentators usually point. First is sellers’
use of consumers’ personal information to discriminate among buyers,
charging buyers with higher demand higher prices and those with lower
demand lower prices. Second is the consolidation of such information in
few hands, which may make it difficult for new entrants to compete.

Price Discrimination
Consumers have reason to care about privacy not just in the abstract, as
a good of which they want more, but also because of potential price effects, the traditional focus of antitrust. Price discrimination is the use of information about individual consumers to charge them approximately the amount at which they value the products they are purchasing.
Antitrust law prohibits some anticompetitive price discrimination in the
United States, and more is condemned in Europe, but economics does
not have a clear position on whether price discrimination is desirable or
undesirable. It is generally accepted that imperfect price discrimination
can have undesirable effects, but the effects of perfect price discrimina-
tion are more ambiguous. What seems clear is that the quantity of infor-
mation collected by online information providers makes the potential of
price discrimination much more significant.
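The contrast between uniform pricing and perfect price discrimination can be made concrete with a toy numeric model. The valuations and cost below are hypothetical, chosen only to illustrate the mechanics discussed in this section:

```python
# Toy model (hypothetical numbers): four buyers, unit cost 3.
valuations = [10, 8, 6, 4]
cost = 3

def uniform_profit(price):
    """Profit when every buyer faces the same price."""
    sales = sum(1 for v in valuations if v >= price)
    return (price - cost) * sales

# A uniform-price monopolist picks the single most profitable price.
best_price = max(valuations, key=uniform_profit)
served = [v for v in valuations if v >= best_price]
uniform = {
    "price": best_price,
    "profit": uniform_profit(best_price),
    "consumer_surplus": sum(v - best_price for v in served),
    # value lost on buyers priced out despite valuing the good above cost
    "deadweight_loss": sum(v - cost for v in valuations if cost < v < best_price),
}

# Perfect discrimination: each buyer is charged exactly their valuation.
discrim = {
    "profit": sum(v - cost for v in valuations if v > cost),
    "consumer_surplus": 0,
    "deadweight_loss": 0,
}

print(uniform)   # price 8: profit 10, consumer surplus 2, deadweight loss 4
print(discrim)   # profit 16: no deadweight loss, but no consumer surplus either
```

The sketch captures both halves of the ambiguity noted above: perfect discrimination eliminates the deadweight loss (output expands to every buyer valuing the good above cost), but it also transfers the entire surplus from consumers to the seller.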
The competitive danger at issue in the previous chapter was informa-
tion asymmetry; in the context of price discrimination, it is, in a sense,
information symmetry. Traditionally, sellers had available to them much
less information about their consumers than is now available. Therefore,
consumers were able to benefit from not revealing their personal infor-
mation, which forced sellers, at least in some instances, to set terms that
were more favorable than needed to attract buyers. Now, to the extent
that sellers have access to more personal information of consumers, this
advantage of consumers could disappear.56 As discussed in Chapter 2,
sellers, and particularly online sellers, have engaged in price discrimina-
tion on various bases.
And, of course, the information that is potentially available to sellers
is much more extensive and finer-grained than the simple identification
of which computer a consumer is using or in what geographic area they
live, examples that were discussed in Chapter 2. “Some marketing com-
panies, for instance, segment individuals into clusters like ‘low-income
elders’ or ‘small town, shallow pockets’ or categorize them by waistband
size.”57 These sorts of categories can be used to improve dramatically
sellers’ information regarding how much consumers are willing to pay
for products. Because, however, price discrimination’s economic effects
are ambiguous, the implications of this information are dismissed, or at least discounted, by some authors.58
However, the markets at issue here are different in other ways from
the markets that are typically considered in assessing price discrimina-
tion. The markets on which these discussions typically focus are those in
which sellers have market power, or monopoly power. In that context, a
seller that must charge every buyer the same price will charge a price
above cost to maximize its profits. Although it could profit by charging
only slightly above its cost, it profits more by charging a higher price.
That is so because the lost profits from sales that do not occur because
of the higher price are more than compensated for by the greater profits
on sales that do occur. So the profits of the seller are greater, but the
benefits to society are less. That is so because the increased profits on the
sales that do occur are merely transfers from buyers—the buyers lose
while the seller gains—making those exchanges neutral from a societal
point of view, but the sales that do not occur create a loss to those cus-
tomers who would have gained from their purchases, but no compensat-
ing benefit to the sellers. This is the so-called “deadweight loss” to soci-
ety that is the harm of higher prices. Price discrimination can eliminate
this loss because it can permit the seller to charge the value of the good
to each buyer. In that way, the seller gains from the higher-priced sales
while still making sales at lower prices. The key point here is that the
price discrimination enables the seller to raise prices to some consumers
while lowering the price to others. But that happens only if the seller was
charging higher than its cost before the price discrimination.
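The deadweight-loss reasoning above can be made concrete with a small numeric sketch. The buyer valuations and the unit cost below are invented for illustration; they do not come from the text.

```python
# Illustrative only: five hypothetical buyers who value a good at
# different amounts; the seller's unit cost is 10.
values = [50, 38, 30, 20, 12]   # each buyer's willingness to pay
cost = 10

def profit_at(price):
    """Profit if every buyer must be charged the same price."""
    sales = [v for v in values if v >= price]
    return len(sales) * (price - cost)

# Uniform pricing: the seller picks the single price maximizing profit.
best_price = max(values, key=profit_at)
uniform_profit = profit_at(best_price)

# Sales lost at that price create the deadweight loss: buyers who value
# the good above cost but below the chosen price go unserved, and no one
# captures the surplus those forgone sales would have generated.
deadweight = sum(v - cost for v in values if cost <= v < best_price)

# Perfect discrimination: each buyer is charged exactly their valuation,
# so every above-cost sale occurs and no deadweight loss remains.
perfect_profit = sum(v - cost for v in values if v >= cost)

print(best_price, uniform_profit, deadweight, perfect_profit)
# prints: 30 60 12 100
```

Note that the perfectly discriminating seller's profit (100) exceeds the uniform-price profit (60) by more than the recovered deadweight loss (12): the remainder is surplus transferred from the buyers who would have paid less than their valuations under a single price.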
The price discrimination that could be made possible in online mar-
kets by consumers’ personal information would not necessarily involve
sellers that have preexisting power. For example, in the Orbitz case de-
scribed in Chapter 2, where Orbitz showed higher-price hotels to users
of Apple computers, hotels would not generally be thought to have mar-
ket power. They presumably operate in more-or-less competitive mar-
kets in which they face effective competition from other hotels, at least
for consumers about whom they do not have detailed information. In
those markets, having more personal information of consumers would
not likely lead the hotels to lower their prices to any consumers, to whom
they presumably would already be charging a competitive price. Instead,
it would only raise prices to those consumers about whom they have
personal information. That is, the hotels would present higher prices to
those consumers—perhaps Mac users—that they believe would be will-
ing to pay those higher prices.
If sellers were able to price-discriminate perfectly, this would all mat-
ter less. In that case, money would be transferred from buyers to sellers,
but society as a whole would not lose. But perfect price discrimination is
not possible, or even likely. Not all Mac users want to, or are willing to,
pay more for hotels. Yet if they are shown higher-priced hotels, and it is
difficult for them to find lower-priced hotels, they might not choose the
hotel they prefer or, worse, they might not travel at all. The same would
be true for typical online buyers of consumer goods who are shown
prices that are near to, but perhaps slightly above, the amounts they are
willing to pay. The losses these buyers, or potential buyers, suffer could
then be losses to society.
In cases where a price-discriminating seller has market power, these
losses from imperfect price discrimination might be balanced by gains to
buyers who are able to make purchases at lower prices. But if the seller
is operating in a competitive market and has no ability to make large-
scale reductions in prices to some buyers, any imperfections in its price
discrimination will create lost sales. And the seller may not be sensitive
to those lost sales because it will be making greater profits from its other
sales. That is, its overall profit picture could improve so dramatically
that its loss of some sales that it could have made—indeed, could still
make—will not necessarily be noticed.
Given the unhappiness of consumers when it is revealed that they
have been subject to price discrimination—the most commonly used ex-
ample is payment of different amounts for comparable airplane seats59 —
it seems at least plausible that the widespread use of such practices, if
known, would prompt calls for relief. The U.S. Supreme Court recently
stated quite forcefully that price discrimination is not an entitlement of
sellers, at least for copyrighted goods,60 which suggests that consider-
ation of this price-increasing effect of the availability of personal infor-
mation may be an appropriate subject for antitrust law.

Personal Information as a Competitive Advantage


A final potential competitive harm is that large accumulations of per-
sonal data could lessen competition among firms for which data is an
important input. This can happen in two ways. First, in some cases, data
is simply important to providing high-quality service. In search, for ex-
ample, consumer data enables search engines to provide results tailored
to the characteristics of their users.61 It is difficult for a firm, particularly
a new entrant in a market, to compete when its competitor, particularly
an incumbent, has much more complete information about the consum-
ers for whom both are competing.
Manne and Sperry argue that “the incumbent almost certainly had to
go through the same process and overcome the same barriers.” But that
is not the case. Google is surely a more formidable competitor for new
search engines than Google itself faced when it was entering the market.
The D.C. Circuit Court of Appeals made a similar point about the ad-
vantages Microsoft possessed once it became established in the market:
“When Microsoft entered the operating system market with MS-DOS
and the first version of Windows, it did not confront a dominant rival
operating system with as massive an installed base and as vast an exist-
ing array of applications as the Windows operating systems have since
enjoyed.”62
Determining whether Google or Facebook63 possesses similarly for-
midable advantages now would require considerable research.64 It is
clear, though, that Google similarly has a vast “installed base” (of users) and “array of applications” (such as Gmail). It is therefore inappropriate to claim, without further evidence, that a new entrant in the search or social-network market would face no greater obstacles overcoming Google or Facebook than they faced overcoming AltaVista or MySpace. Surely the
vast amounts of personal information possessed by Google and Facebook
are part of the competitive landscape that newcomers would face.
Moreover, it was to a large extent Google that made data the important contributor to search that it is now. “Early search engines, like
Yahoo! and AltaVista, found results based only on key words.
Personalized search, as pioneered by Google, has become far more com-
plex with the goal to ‘understand exactly what you mean and give you
exactly what you want.’”65 Thus, if indeed a large collection of data is a
competitive advantage, then new entrants now would face Google when
Google has that advantage, while Google would have entered the market
at a time when the established firms did not have such an advantage.
It is true, though, as Manne and Sperry point out, that there may not
be particular conduct by the dominant firms in these data-intensive mar-
kets that excludes competitors. That is, the firms might indeed possess
more data than their competitors, and might have power as a result of
the data, but they might not be acting anticompetitively. There is no
reason to think that these firms have gathered information about cus-
tomers in order to exclude competitors (though it is possible that it aids
in that goal, as discussed below). On the contrary, there is every reason
to think that they gather that information to improve their own services.
To bring antitrust into play, at least outside the merger context, it would
be necessary to develop a theory under which they could use this power
to anticompetitive effect.

Using Data to Exclude Competition


One possible exclusionary technique would be to use data to enable the
delivery of higher-quality services to customers that are at risk of leaving
for a competitor. For example, if personal information is used by a shop-
ping site to deliver higher prices to high-demand consumers, the site
might instead deliver lower prices to consumers who, the site’s informa-
tion suggests, might switch to a competing site. So long as the lower
prices are not below cost, this would not constitute predatory pricing,
but it is the sort of price discrimination with anticompetitive effects that
can be an antitrust violation.
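A stylized sketch of this retention-discount strategy follows; the churn scores, prices, and threshold are all invented for illustration, not drawn from any actual case.

```python
COST = 60.0          # seller's unit cost: discounts never go below this
LIST_PRICE = 100.0

def targeted_price(churn_risk):
    """Price offered to a customer given an estimated probability
    (0.0 to 1.0) that they will defect to a competitor, as inferred
    from accumulated personal data.

    Customers judged loyal see the list price; at-risk customers get
    a deepening discount, floored at cost so that the pricing is
    selective but not predatory (never below cost).
    """
    if churn_risk < 0.5:
        return LIST_PRICE
    discount = LIST_PRICE * 0.5 * churn_risk
    return max(LIST_PRICE - discount, COST)

print(targeted_price(0.1))  # 100.0: likely to stay, pays full price
print(targeted_price(0.9))  # 60.0: likely to switch, priced at cost
```

The floor at cost is what keeps the conduct outside predatory-pricing doctrine; the competitive concern described above arises instead from the selectivity that the personal data makes possible.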
Moreover, the importance of such competitively significant informa-
tion about customers has been recognized by competition law in other
contexts. A number of vertical-merger cases have allowed the mergers to
be consummated only when the merging firms committed that neither
would be permitted to use information obtained by the other. For exam-
ple, when Google acquired ITA, which provided pricing and shopping
systems to online travel sites, like Kayak and Orbitz, the Department of
Justice and Google entered into a consent decree preventing Google from
using data about ITA customers for its own benefit.66 The market rela-
tionship is similar, even in the consumer context, in that a firm in pos-
session of information about consumers can use that information to gain
or preserve an advantage over its competitors.
Even if a dominant firm like Google did not obtain data through
merger (though the data that Google obtained through its acquisition of
DoubleClick could have been useful in this way), the competitive harm
that it could produce with it is similar to that with which the agency was concerned. Moreover, this is presumably the sort of harm that Commissioner
Rosch had in mind in his statement regarding Google search bias. As
described above, he expressed concern in that statement regarding
Google’s “gathering of information about the characteristics of a con-
sumer” in part “to maintain a monopoly or near-monopoly position.”67
This sort of conduct has also been of concern in Europe. Both article
101 TFEU, which governs concerted practices, and article 102, which
governs abuses of dominance, condemn practices that apply “dissimilar
conditions to equivalent transactions with other trading parties, thereby
placing them at a competitive disadvantage.” Although this description
appears to be aimed at so-called secondary-line price discrimination—
price discrimination that injures competition among the discriminating firm's customers—it has been applied by the European Commission and courts to primary-line discrimination—discrimination that injures competition between the discriminating firm and its own competitors.68
if a firm used information about consumers to price-discriminate in such
a way that it injured its competitors, for example, by using low prices
only where it faced threats from the competitors, that could be seen as a
competition problem. But the EU cases, like analogous U.S. ones, have
generally involved discrimination among business customers, not con-
sumers. It is not clear whether this is a result of the terms of the provi-
sions at issue or the relative infrequency, so far, of price discrimination
among consumers. If it is the latter, the increase in the amount of infor-
mation sellers possess could bring competition law into play.

Collective Action on Privacy


Privacy issues on the Internet are further complicated by the source of
specifications for communication protocols, which, to some extent, de-
termine the ease with which information can be protected. Much of this
work is done by private organizations such as the World Wide Web
Consortium (W3C) or the Internet Engineering Task Force (IETF).
These organizations may or may not provide technical approaches that
serve consumers’ interests, and firms operating on the Internet may or
may not comply with the approaches that they provide. The dynamic
here is a complicated one involving Internet firms, organizations like the
W3C and the IETF, and governments, which could—but often do not—
adopt legal privacy protections. Thus, antitrust may have only a limited
role to play in what is a larger systemic problem, but the competitive
implications here are significant.
Two examples come from the specifications for HTTP (Hypertext Transfer Protocol), the communications protocol for the World Wide
Web. HTTP provides several means for a website to obtain information
about a user’s past browsing history. The website then can use this infor-
mation to tailor the user’s experience based on that history. One means
is Referer,69 an HTTP-specified field that provides information to a web-
site about the previous site the user visited. With that information, web-
sites can pay each other for referrals through links from one site to an-
other. David Post outlined the use of Referer in his book In Search of
Jefferson’s Moose, where he aptly characterized Referer as a primary
determinant of Internet commerce.70 Post stated, for example, that
Google “makes most of its money from the Referrer field.”71 Although
other techniques can be used for the same purpose, they generally rely on
the technical infrastructure that enables this sort of tracking.72
The commercial role played by Referer and related techniques seems
innocuous, or at least reasonable, but there are problematic possibilities.
For example, a seller could offer different prices to buyers depending on
what site they came from. A recent study showed that consumers are
unaware of this possibility,73 yet the author of that study noted that “[a]
retail photography Web site, for example, charged different prices for
the same digital cameras and related equipment, depending on whether
shoppers had previously visited popular price-comparison sites.”74 And
although some web browsers allow users to turn off Referer, at least
with add-ons,75 it seems unlikely that many users take advantage (or
disadvantage76) of this feature.
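A minimal sketch of how a site could act on this header (hypothetical code, not drawn from the study cited above) can be written with only Python's standard library. The comparison-site domains and the markup are assumptions for illustration.

```python
from urllib.parse import urlparse

# Hypothetical price-comparison sites; a real deployment would
# maintain its own list.
COMPARISON_SITES = {"pricegrabber.example", "shopcompare.example"}

def quote_price(headers, base_price):
    """Return a price adjusted by where the visitor came from.

    `headers` is a mapping of HTTP request headers. The "Referer"
    field (misspelled in the HTTP specification itself) names the
    page the user followed a link from.
    """
    referer = headers.get("Referer", "")
    host = urlparse(referer).hostname or ""
    # Visitors arriving from a comparison site are presumably
    # price-sensitive, so show them the base price; everyone else
    # sees a 10 percent markup.
    if host in COMPARISON_SITES:
        return base_price
    return round(base_price * 1.10, 2)

print(quote_price({"Referer": "https://shopcompare.example/cameras"}, 100.0))  # 100.0
print(quote_price({}, 100.0))  # 110.0
```

The few lines of logic involved are the point: nothing in the protocol prevents this, and the user has no indication that the quoted price depended on the Referer field.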
Cookies are a better-known method of tracking browsing history. A
cookie is text a web server sends and stores on a user’s computer by way
of the user’s web browser. The cookie can then be retrieved later by the
server. But some websites deliver information from multiple servers, in-
cluding those providing advertising banners. When those banners are
delivered by the same firm to multiple sites that a user visits, the firm can
gather information from visits to all the sites and accumulate consider-
able information about users’ browsing histories. This is a key technique
advertisers and others use to track users online. Because cookies are
much better known than Referer, though, most, if not all, browsers pro-
vide the ability to deny cookies, even if doing so can cause problems with
websites that depend on them.
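The cross-site accumulation this paragraph describes can be sketched in a few lines. The following is an illustrative simulation of a third-party ad server, not the code of any real tracker; the site names are invented.

```python
import uuid

class AdServer:
    """Toy third-party ad server whose banners are embedded on many
    publisher sites.

    Because the browser returns this server's cookie with every
    banner request, regardless of which publisher embeds the banner,
    the server can link visits to otherwise unrelated sites into a
    single browsing history.
    """
    def __init__(self):
        self.histories = {}  # tracking id -> list of sites visited

    def serve_banner(self, embedding_site, cookie=None):
        """Handle one banner request; return (cookie, banner HTML)."""
        if cookie is None:
            cookie = str(uuid.uuid4())  # first visit: issue a new id
        self.histories.setdefault(cookie, []).append(embedding_site)
        return cookie, f"<img src='ad-for-{embedding_site}'>"

tracker = AdServer()
# One browser visits three different publishers, each embedding a
# banner from the tracker; the browser replays the cookie each time.
cid = None
for site in ["news.example", "shoes.example", "travel.example"]:
    cid, _ = tracker.serve_banner(site, cid)

print(tracker.histories[cid])
# prints: ['news.example', 'shoes.example', 'travel.example']
```

Each publisher sees only its own traffic; it is the shared third-party cookie that lets the ad server assemble the cross-site history.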
The original HTTP specification for cookies was adopted in 1997,77
and current approaches are outlined in a proposed standard from 2011.78
The proposed standard describes the use of cookies, but makes a distinc-
tion between cookies sent by the server the user visited and so-called
third-party cookies, which are those sent by other servers (for example,
those delivering advertising on the site the user visited).79 An earlier version of the cookie specification forbade third-party cookies,80 but the new
version allows them, despite calling them “[p]articularly worrisome.”81
It is true that even under the superseded specification that prohibited
third-party cookies, many website designers ignored the prohibition,82
but it is nevertheless troubling that the IETF has abandoned its prohibition, despite its expressed concern. As one website notes, some websites
provide cookies for more than one hundred third-party domains, gener-
ally without the awareness of users.83
Generally speaking, the IETF, which sets technical standards in this
area, gets considerable deference, as David Post describes.84 The IETF is
an organization open to all, but like many standard-setting organiza-
tions, much of its work is done by representatives of organizations active
in the industry because they support the work financially. Post and oth-
ers85 are enthusiastic about the IETF’s role, but its composition and the
history here are problematic. Like other standard-setting organizations
discussed in Chapter 4, the IETF determines the nature of products pro-
vided to users. Unlike most other standard-setting organizations, its
products—HTTP specifications, among others—define the fundamen-
tal nature of the online environment.86 Furthermore, that environment
has greater public-policy implications than do many standards (though
perhaps some medical standards could be seen as comparable). And, of
course, most users are entirely unaware of the choices that the IETF is
making for them.
“The mission of the IETF is to make the Internet work better by pro-
ducing high quality, relevant technical documents that influence the way
people design, use, and manage the Internet.”87 As with standards gen-
erally, though, what it is to “work better” can be a highly contested
matter and can evolve over time. To be sure, the controversial aspects of
the IETF’s work have been recognized within the IETF, which has even
issued an RFC that “identifies specific uses of Hypertext Transfer
Protocol (HTTP) State Management protocol [like cookies] which are
either (a) not recommended by the IETF, or (b) believed to be harmful,
and discouraged.”88 And participation in the organization’s activities is
open to all. Nevertheless, its work has significant competitive implica-
tions, particularly with regard to the control of user information, and
thus could be subject to the antitrust laws.
