Measurement, Modeling, and Analysis of a Peer-to-Peer File-Sharing Workload
[...] Web pages. Video-on-demand (VoD) systems also distribute multimedia files; we contrast our Kazaa measurements and analysis with related work on VoD systems in Section 3.

This paper presents an in-depth analysis of a modern P2P multimedia file-sharing workload, considering the "peer-to-peer" and the "multimedia" aspects of the workload independently. Our goals are:

1. to understand the fundamental properties of multimedia file-sharing systems, independent of the design of the delivery system,

2. to explore the forces driving P2P file-sharing workloads so we can anticipate the potential impacts of future change, and

3. to demonstrate that significant opportunity exists to optimize performance in current file-sharing systems by exploiting untapped locality in the workload.

To meet these goals, we employ several approaches. First, we analyze a 200-day trace of Kazaa [19] traffic that we collected at the University of Washington between May and December of 2002. We traced over 20 terabytes of traffic during that period, from which we distill several key lessons about the Kazaa workload. Second, we derive a model of this multimedia traffic based on our analysis. The model helps to explain the root causes of many of the trends shown by Kazaa and to predict how the trends may change as the workload evolves. Third, we use trace-driven simulation to quantify the significant potential that exists to improve the performance of multimedia file-sharing in our environment.

Our analysis reveals that the Kazaa workload is driven by considerably different forces than the Web. Kazaa objects are immutable, and as a result, the vast majority of its objects are fetched at most once per client; in contrast, Web pages (e.g., Google or CNN) can be fetched thousands of times per client. Our measurements show that the popularity distribution of Kazaa objects deviates substantially from the Zipf curves we commonly see for the Web, and our model confirms that the "fetch-at-most-once" behavior of Kazaa clients is the cause. Our model demonstrates another consequence of object immutability: unlike the Web, whose workload is driven by document change, the primary forces in Kazaa are the creation of new objects and the addition of new users. Without these forces, fetch-at-most-once behavior would drive the system to stagnation.

The structure of this paper follows the multi-tiered approach cited above. Section 2 describes our trace methodology and presents our trace-based analysis of Kazaa. In Section 3, we analyze the popularity distribution of Kazaa requests and the forces that shape it. Section 4 uses our observations and analysis to develop an analytical model; we then use the model to explore the processes that drive Kazaa's behavior in greater depth. Section 5 considers the performance potential of bandwidth-saving techniques suggested by our modeling and analysis. We describe research related to our study in Section 6, while Section 7 summarizes our results and presents conclusions.

2. THE MEASURED PROPERTIES OF P2P FILE-SHARING WORKLOADS

This section uses our trace data to identify key properties of the Kazaa multimedia file-sharing system. Recent studies have described high-level characteristics of P2P workloads [6, 29, 30, 32]. Our goals in this section are to: (1) dig beneath these high-level studies to uncover the processes that drive the workloads, and (2) demonstrate ways in which these processes fundamentally differ from those of the Web.

2.1 Trace Methodology

The data in this section are based on a 200-day trace of Kazaa peer-to-peer file-sharing traffic collected at the University of Washington between May 28th and December 17th, 2002. The University of Washington (UW) is a large campus with over 60,000 faculty, students, and staff. Table 1 describes the trace, which saw over 20 terabytes of incoming data resulting from 1.6 million requests. Our trace was long enough for us to observe seasonal traffic variations, including the end of spring quarter in June, the summer months, and the start and end of the fall quarter. We also observed the impact of bandwidth rate-limiting instituted by the university's networking organization midway through the trace in an attempt to control the cost of Kazaa traffic. [Footnote 1: The imposed rate limits bounded upload traffic out of the university's dormitory population and had little effect on download traffic (to the dorms or to the university as a whole), which is the focus of our research.]

    trace length                     203 days, 5 hours, 6 minutes
    # of requests                    1,640,912
    # of transactions                98,997,622
    # of unsuccessful transactions   65,505,165 (66.2%)
    average transaction size         252KB (all transactions); 752KB (successful transactions only)
    # of users                       24,578
    # of unique objects              633,106 (totaling 8.85TB)
    bytes transferred                22.72TB
    content demanded                 43.87TB

Table 1: Kazaa trace summary statistics, 5/28/02 to 12/17/02. A transaction refers to a single Kazaa HTTP transfer; a request refers to the set of transactions a client issues while downloading an object, potentially in pieces from many servers. Clients are identified by Kazaa username. We only present statistics for downloads made by university-internal clients for data on university-external peers.

We collected our trace using hardware and software installed at the network border between the university and the Internet. Our hardware was a 2.0 GHz Pentium-III workstation that monitored all traffic flowing between the university and the Internet. The workstation had sufficient CPU and network capacity to ensure that no packets were dropped, even during peak load. An adjacent workstation acted as a one-terabyte file store for archiving trace data. Our software used a kernel packet filter [24] to deliver TCP packets to a user-level process, which identified HTTP requests within the TCP flows. Throughout our trace, the packet filter reported packet drop rates of less than 0.0001%. We made all sensitive information anonymous (including IP addresses, URLs, usernames, and object names) before compressing and storing the trace. Overall, our tracing and analysis software consists of over 30,000 lines of code.

Our hardware monitored all incoming and outgoing traffic. However, the data presented in this paper (including Table 1) are for one direction only: requests made by university-internal peers to download data stored on university-external peers.
This unidirectional trace captures all requests issued by a stable, complete user population over a period of time. Kazaa control traffic, which consists primarily of queries and their responses, is encrypted and was not captured as part of our trace.

Throughout this paper, the term "user" refers to a person and "client" refers to the application instance running on behalf of a user. We assume there is largely a one-to-one correspondence between users and specific application instances in our environment (although this may not always be true); therefore, we draw conclusions about users based on observations of clients in our trace. Note, however, that client-side caches may absorb some requests from users, meaning that the client request rate, which we observe in our trace, may be lower than the true user request rate, which we cannot directly observe.

Kazaa clients supply Kazaa-specific "usernames" as an HTTP header in each transaction. We use these usernames (rather than IP addresses) to distinguish between different users in our trace. Unfortunately, an unofficial version of Kazaa, called "KazaaLite," became popular during our tracing period; it is compiled with a predefined username embedded in the application itself. [Footnote 2: Many more unofficial versions of Kazaa that use generic usernames have appeared since our trace period finished; precisely distinguishing between peer-to-peer users will become very difficult, given that neither IP addresses nor application-specific usernames are unique.] We "special-case" requests from KazaaLite, falling back to IP addresses to distinguish between KazaaLite users. Although DHCP is used in portions of our campus, and identifying users by IP address is known to have issues when DHCP is present [6], only 5.7% of transactions in our trace were from KazaaLite clients. Furthermore, KazaaLite clients did not appear within the first 59 days of our 203-day trace.

Kazaa file-transfer traffic consists of unencrypted HTTP transfers; all transfers include Kazaa-specific HTTP headers (e.g., "X-Kazaa-IP"). These headers make it simple to distinguish Kazaa activity from other HTTP activity. They also provide enough information for us to identify precisely which object is being transferred in a given transaction. When a client attempts to download an object, that object may be downloaded in pieces (often called "chunks") from several sources over a long period of time. We define a "transaction" to be a single HTTP transfer between a client and a server, and a "request" to be the set of transactions a client participates in to download an entire object. A failed transaction occurs when a client successfully contacts a remote peer, but that remote peer returns an HTTP 500 error code instead of data.
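As an illustration of these definitions, the following Python sketch groups individual transactions into per-object requests. It is illustrative only, not our actual analysis software; the transaction fields shown are hypothetical stand-ins for what the tracing process records.

    from collections import defaultdict

    def group_into_requests(transactions):
        # Each transaction is assumed to carry: 'user', 'object',
        # 'start', 'end', 'status', and 'bytes' (hypothetical fields).
        # All transactions a client issues for one object form a request.
        by_request = defaultdict(list)
        for txn in transactions:
            by_request[(txn['user'], txn['object'])].append(txn)

        summaries = []
        for (user, obj), txns in by_request.items():
            summaries.append({
                'user': user,
                'object': obj,
                # transfer time: first transaction start to last transaction end
                'transfer_time': max(t['end'] for t in txns)
                                 - min(t['start'] for t in txns),
                # failed transactions: peer contacted but returned HTTP 500
                'failed': sum(1 for t in txns if t['status'] == 500),
                'bytes': sum(t['bytes'] for t in txns),
            })
        return summaries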
A single request may span many minutes, hours, or even days, since the Kazaa client software will continue to attempt to download the object long after the user has asked for it. Occasionally, a client may download only a subset of the entire object (either because the user gives up or because the object becomes permanently inaccessible in the middle of a request). We call this a "partial request."

The Kazaa application has an auto-update feature, meaning that a running instance of Kazaa will periodically check for updated versions of itself. If it finds one, it downloads the new executable over the Kazaa network. We chose to filter out these auto-update transactions from our logs, as they are not representative of multimedia requests from users. Such filtering removed 0.4% of total transactions (0.3% of bytes) from our trace.

Figure 1: Users are patient. A CDF of object transfer times in Kazaa for small (<10 MB) and large (100 MB+) objects. The X-axis is on a log scale.

2.2 User Characteristics

Our first slice through the trace data focuses on the properties of Kazaa users. Previous studies [2, 29] have shown that peer-to-peer users in general are "greedy" (i.e., most users consume data but provide little in return) and have poor availability [30]. We confirm some of these characteristics, but we also explore others, such as user activity.

2.2.1 Kazaa Users Are Patient

As any Web-based enterprise knows, users are very sensitive to page-fetch latency. Beyond a certain small threshold (measured in seconds), they will abandon a site and move to another, possibly competing, site. For this reason, many online businesses engage services such as Keynote [20] to tell them quickly if their servers are not sufficiently responsive. In the world of the Web, users expect instant gratification and are unforgiving if they do not receive it.

In this context, the behavior of Kazaa users is surprising. Figure 1 shows the distribution of transfer times in Kazaa; transfer time is defined as the difference between the start time of the first transaction and the end time of the last transaction of a given user request. We filtered out partial requests (i.e., we only counted transfers for which the user eventually obtained the entire object). To deal with edge effects, we ignored requests for which at least one transaction occurred during the first month of the trace; note that this will tend to result in an underestimate of user patience.

We present our results in terms of requests for "small" objects (less than 10MB, typically audio files) and requests for "large" objects (more than 100MB, typically video files). As we will show in Section 2.3.1, this is a natural and representative way to decompose the overall workload.

The results show incredible patience on the part of Kazaa users. Even for small objects, 30% of the requests take over an hour, and 10% take nearly a day. For large requests, less than 10% complete in an hour, 50% take more than a day, and nearly 20% of users are willing to wait a week for their downloads to complete! From this graph, it is clear that the dynamics of multimedia downloads differ totally from Web usage. The Web is an interactive system where users want an immediate response. Kazaa is a batch-mode delivery system, where downloads are made in the background and the content is examined later. Users do not wait for their content to arrive. They go about their business and eventually return to review the content after it has been received.
Figure 2: Bytes requested by the population and attrition as a function of age. "Older" clients request a smaller fraction of bytes than newer clients. There are fewer old clients than young clients, but attrition occurs at a more gradual rate than the slowdown in bytes requested.

Figure 3: Older clients have slower request rates. On average, clients use the system equally often as they age (having approximately a 50% chance of using the system any given week), but they request less data per session as they age. Note that the point corresponding to new clients (week 1) is artificially high, since by definition every new client requests an object immediately.

2.2.2 Users Slow Down As They Age

An interesting question we explored is how user interest in Kazaa varies over time. Do users become hungrier for content as they gain experience with Kazaa? Are their request rates relatively constant? Do they lose interest over time? The answer to such questions significantly affects the growth of the system.

To understand user behavior over time, we first calculated the average number of bytes consumed by clients as a function of age. The methodology for this measurement is complex: our trace has finite length, so we must avoid end effects that would overcount short-lived or undercount long-lived users. We compensate, first, by counting transferred bytes only from clients whose "births" we could observe. Because there are no detectable birth events in our trace, we used the heuristic of treating the first observed download from a client as a birth event if at least a full month had elapsed in our trace before seeing that first download. To compensate at the end of the trace, we counted bytes only from clients born prior to the last 11 weeks of the trace. Because of this "end threshold," we could draw definitive conclusions about clients' behavior only during the first 11 weeks of their lifetimes.
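A minimal sketch of this birth-detection heuristic and end threshold, assuming per-client first-download timestamps (names are hypothetical; the real analysis may differ in detail):

    from datetime import timedelta

    BIRTH_GAP = timedelta(days=30)       # a full month must precede the first download
    END_THRESHOLD = timedelta(weeks=11)  # clients born later are excluded

    def observable_clients(first_download, trace_start, trace_end):
        # first_download maps client -> time of first observed download.
        # Treat that download as a birth only if a month of trace preceded
        # it, and keep only clients born before the end threshold, so that
        # their first 11 weeks fall entirely within the trace.
        return {
            client: t for client, t in first_download.items()
            if t - trace_start >= BIRTH_GAP and t <= trace_end - END_THRESHOLD
        }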
Figure 2 shows the total number of bytes requested by the population as a function of its age. From this graph, we can see that older clients consume fewer bytes than newer clients. There are two reasons for this effect: (1) attrition reduces the number of older clients, since clients may "die" (i.e., leave the system forever) over time, and (2) some clients may continue to issue requests but do so at a slower rate as they age. We explore each of these in turn.

Attrition. To understand attrition in the system, we analyzed the number of clients that remain alive as a function of age (also shown in Figure 2). Population size declines at a more gradual rate than bytes requested, at least over the first 11 weeks of clients' lifetimes. Attrition therefore only partially explains why older clients demand less in aggregate from the system. To fully explain this phenomenon, clients must also slow down their request rates as they age.

Slowing down over time. Older clients may have slower request rates for two reasons: (1) they may use the system less often, or (2) they may ask for less when they use the system. Figure 3 shows that clients are equally likely to use the system regardless of age: on average, clients have about a 50% chance of making a request in any given week. Older clients slow down because they ask for less each time they use the system, not because they use the system less often.

In summary, new clients generate most of the load in Kazaa, and older clients consume fewer bytes as they age. In part, this is because of attrition: clients leave the system permanently as they grow older. Also, older clients tend to interact with the system at a constant rate, but ask for less during each interaction.

2.2.3 Client Activity

Quantifying the availability of clients in a peer-to-peer system is notoriously difficult [6]: no one metric can accurately capture availability in this environment, since any individual client might exist only for a fraction of the traced time period. Given our passive tracing methodology, we faced an additional methodological problem: we can detect that users are participating in the system only when their clients transfer data (either by downloading or uploading files). If a client is on-line but not active, we could not observe it. Because of this, we report statistics about client activity only, which is a lower bound on availability. [Footnote 3: P2P software is often designed to make it difficult to close the program once it starts, "fooling" users into making their clients more available than intended. Accordingly, we suggest that client activity is a more universally comparable and stable indicator of "availability" than other metrics.]

We use two specific metrics to quantify the amount of client transfer activity: (1) activity fraction, which measures the fraction of time a client is transferring content over the client's lifetime or over the duration of the entire trace, and (2) average session length, in which a session is defined as an unbroken period of time during which a client has one or more active transactions. Average session length measures the typical duration of the periods during which a client is receiving or transmitting data. Our measurements indicate that the distributions of average session length and activity fraction over the measured population are heavy-tailed.
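As an illustration, both metrics can be computed by coalescing each client's transaction intervals into sessions. The sketch below is illustrative, not the actual analysis code; times are assumed to be numeric (e.g., seconds from the trace start).

    def coalesce_sessions(intervals):
        # intervals: (start, end) pairs for one client's transactions.
        # A session is an unbroken period with one or more active
        # transactions, so overlapping or touching intervals are merged.
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1]:
                merged[-1][1] = max(merged[-1][1], end)
            else:
                merged.append([start, end])
        return merged

    def activity_metrics(intervals, lifetime):
        sessions = coalesce_sessions(intervals)
        active = sum(end - start for start, end in sessions)
        avg_session_length = active / len(sessions)
        activity_fraction = active / lifetime  # or divide by trace length
        return avg_session_length, activity_fraction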
Figure 4: Bandwidth consumed vs. object size. (a) CDFs of the total bandwidth consumed and the number of requests generated, as a function of object size, and (b) bandwidth consumed and requests generated as a function of object size, grouped into three regions (<=10MB, 10-100MB, >100MB).
                                           small objects          large objects
                                           (primarily audio)      (primarily video)
                                           top 10    top 100      top 10    top 100
    overlap in the most popular objects
    between first and last 30 days         0 of 10   5 of 100     1 of 10   44 of 100
    # of newly popular objects that
    are recently born                      6 of 10   73 of 95     2 of 9    47 of 56

Table 3: Object popularity dynamics in Kazaa. There is significant turnover in the set of most popular objects between the first 30 days and the last 30 days of the trace. The newly popular objects (those in the set of most popular objects over the last 30 days but not in the set over the first 30 days) tend to be recently born.

2.3.2 Kazaa Object Dynamics

A simple but crucial difference between multimedia and Web workloads is that multimedia objects are immutable, while Web pages are not. Though obvious, this fact and its implications have not been discussed in the research literature. A video clip of "Bambi Meets Godzilla" will be the same clip tomorrow or the next day: it never changes. On the other hand, the Web page CNN.com may change every hour, or upon every access if the page is personalized for clients. Web workloads are thus strongly driven by dynamic content creation. It has been shown that the rate of document change is a key factor in Internet behavior and has enormous implications for caching, performance, and content delivery in general [11, 37]. We now show how immutability affects object dynamics.

Kazaa clients fetch objects at most once. Because objects are immutable and take non-trivial time to download, we believe that users typically download a Kazaa object only once. Our traces confirm that 94% of the time, a Kazaa client requests a given object at most once; 99% of the time, a Kazaa client requests a given object at most twice. In comparison, based on a Web trace we gathered during the first nine days of our Kazaa trace period, a Web client requests a given object at most once only 57% of the time.

The popularity of Kazaa objects is often short-lived. Object immutability also has an impact on object popularity dynamics. The set of most popular pages remains relatively stable for the Web, and these pages account for a significant fraction of overall accesses [26]. In contrast, many of the most popular audio/video objects are routinely replaced by newly released objects, often in only a few weeks.

To illustrate this change in popular Kazaa objects, we compared the first 30 days of the trace to the last 30 days of the trace. For each of these 30-day (month-long) segments, we identified the top-10 and the top-100 most popular objects (Table 3). For small objects, there was no overlap between the top-10 most popular objects: the most popular small objects had changed completely in the space of only six months. For large objects, there was only one object in common in the top-10 across these segments. The top-100 lists show only 5 small objects in common across the segments, and 44 large objects in common. Popularity is fleeting in Kazaa, and popular audio files tend to lose popularity faster than popular video files.

The most popular Kazaa objects tend to be recently born objects. Given that there is significant turnover in popularity within Kazaa, we wanted to understand whether objects that become popular are old objects that have grown in popularity over time or recently born objects that enjoy sudden popularity. Using the same month-long segments as before, we calculated the fraction of objects that were newly popular (i.e., in the top-10 or top-100 in the last month of the trace but not the first month of the trace) but did not receive any requests at all during the first month of the trace (i.e., they were likely "born" after the first month of the trace). Table 3 shows the results: newly popular objects tend to be recently born in Kazaa, although this is more true for audio objects than for video objects.

Most requests are for old objects. The previous experiments confirmed that the most popular objects tend to decay in popularity, and that the newly popular objects that replace them tend to be newly born. A related, but different, question is whether most requests go to old or new objects. We categorize an object as "old" if at least a month has passed since we observed the first request for that object. We categorize it as "new" if it has been less than a month since it was first requested. Note that we can be sure that an object is old, but we can never be sure that an object is new, since we may have missed requests for the object before our trace began. To deal with edge effects, we do not include the first month of requests in our statistics, but we do use that month to help distinguish between old and new objects in subsequent months.
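The sketch below illustrates this old/new labeling, including the edge-effect handling. It is illustrative only and assumes time-sorted (time, object) pairs as input.

    from datetime import timedelta

    ONE_MONTH = timedelta(days=30)

    def label_old_new(requests, trace_start):
        # requests: (time, object_id) pairs sorted by time. The first
        # month is used only to establish first-request times; requests
        # in that month are excluded from the statistics themselves.
        first_seen = {}
        labeled = []
        for t, obj in requests:
            first_seen.setdefault(obj, t)
            if t - trace_start < ONE_MONTH:
                continue  # edge effect: skip the trace's first month
            age = t - first_seen[obj]
            labeled.append((t, obj, 'old' if age >= ONE_MONTH else 'new'))
        return labeled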
Using this methodology, 72% of requests for large objects go to old objects, while 28% go to new objects. For small objects, 52% of requests go to old objects, and 48% go to new objects. This shows that a substantial fraction of requests are for old objects. The large objects requested tend to be older than the small objects, reinforcing our assertion that Kazaa is really a mixture of workloads: the pace of life is slower for large objects than for small objects.

From the above discussion, it is clear that the forces driving the Kazaa workload differ in many ways from those driving the Web. Web users may download the same pages many times; Kazaa users tend to download objects at most once. The arrival of new objects plays an important role in P2P file-sharing systems, while changes to existing pages are a more important dynamic in the Web. We discuss implications of these differences in Sections 3 and 4.

2.3.3 Kazaa Is Not Zipf

Much has been written about the Zipf-like qualities of the WWW [7]. In fact, researchers commonly quote the Zipf parameter of the popularity distributions seen in their traces [14, 27], in part to demonstrate that their results are "correct." This Zipf property of Web access patterns is thought to be a basic fact of nature: a small number of objects are extremely popular, but there is a long tail of unpopular requests. Zipf's law states that the popularity of the ith-most popular object is proportional to i^(-α), where α is the "Zipf coefficient" or "Zipf parameter." Zipf distributions look linear when plotted on a log-log scale.
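For illustration, a Zipf popularity distribution and a draw from it can be written in a few lines. This is a generic sketch using only the Python standard library; the helper names are invented.

    import random

    def zipf_probabilities(n, alpha):
        # P(i) proportional to i^(-alpha) for ranks i = 1..n, normalized.
        weights = [i ** -alpha for i in range(1, n + 1)]
        total = sum(weights)
        return [w / total for w in weights]

    def sample_rank(probs):
        # Draw one object rank (1-based) according to the distribution.
        return random.choices(range(1, len(probs) + 1), weights=probs, k=1)[0]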
Figure 5 shows the Kazaa object popularity distribution on a log-log scale for large (>100MB) objects, along with the best-fit Zipf curve; a qualitatively similar curve exists for small (<10MB) objects. This figure also shows the popularity distribution of Web objects, drawn from our Web trace. Unlike the WWW, the Kazaa object request distribution deviates substantially from a Zipf curve.
Figure 5: Kazaa is not Zipf. The popularity distribution for large objects is much flatter than Zipf would predict, with the most popular object being requested 100x less than expected. Similarly shaped distributions exist for small objects and the aggregate Kazaa workload. For comparison, we show the popularity distribution of Web requests during a subset of the Kazaa trace period: the Web is well described by Zipf.

[...] servers and the Web. Following this, we present a model of multimedia workloads in Section 4, and we use this model to explore implications of non-Zipf behavior.

3. ZIPF'S LAW AND MULTIMEDIA WORKLOADS

Previous studies of multimedia workloads examined object popularity and found conflicting results, noting both Zipf and non-Zipf behavior [4, 7, 8, 12, 33]. This section examines previous work in the context of our own, with the goal of explaining the similarities, differences, and causes of the behavior observed both by us and others. We begin by presenting a hypothesis that explains the non-Zipf behavior in Kazaa. Next, we discuss previous studies that have observed or modeled non-Zipf workloads and contrast our hypothesis with previous explanations. Finally, we attempt to show the generality of our claim by revealing non-Zipf behavior in previously studied workloads. Section 4 then supports our hypothesis through the use of a generative workload model whose output closely matches our observations.
[...] show up many times (potentially as many times as there are users) in a fetch-at-most-once workload such as Kazaa, even with individual client caches. Both result in non-Zipf [...]
Figure 7: Video rental and box office sales popularity. (a) The popularity distribution from a 1992 video rental data set used to justify Zipf's law in many video-on-demand papers, along with a Zipf curve fit with α = 0.9, and (b) the same data set and curve fit plotted on a log-log scale. Contrary to the assumption of many papers, video rental data does not appear to follow Zipf's law. (c) The distribution of 2002 U.S. box office ticket sales on a linear scale, along with a Zipf fit with α = 2.28, and (d) on a log-log scale. This data set also appears to be non-Zipf.
[...] system with 1,000 clients rather than the roughly 7,000 large-object clients in our trace. We verified that the predictions of our model were not affected by this difference. To simplify our model, we also assumed that all objects in the system were of equal size.

Our model captures key aspects of our P2P file-sharing workload, in particular, the differences between file-sharing and Web workloads. In a Web workload, clients select objects from a Zipf distribution, P(x), in an independent and identically distributed fashion. In contrast, the object selection process in a file-sharing system depends on three factors: (1) the Zipf distribution, P(x), (2) the way in which new objects are inserted into that distribution, A(x), and (3) the clients' fetch-at-most-once behavior.

Our model generates requests as follows. On average, a client requests two objects per day, choosing which object to fetch from a Zipf probability distribution with parameter 1.0 ("Zipf(1)"). [Footnote 4: Attempting to best-fit a Zipf curve to our measured non-Zipf distribution resulted in a Zipf parameter of 0.98.] We hypothesize that the underlying popularity of objects in a fetch-at-most-once file-sharing system is still driven by Zipf's law, even though the observed workload becomes non-Zipf because of fetch-at-most-once clients. In our model, subsequent requests from the same client obey distributions obtained by removing already-fetched objects from the candidate object set and re-scaling so the total probability is 1.0. Given two previously unrequested objects, the ratio of the probabilities that the client will request these objects next is identical to their ratio in the original Zipf distribution. For fetch-repeatedly systems, each request is made according to the original Zipf distribution.

When modeling fetch-at-most-once systems, we use an object arrival rate λO > 0. When an object is born in a fetch-at-most-once system, its popularity rank is determined by selecting randomly from the Zipf(1) distribution. Pre-existing objects of equal or lesser popularity are "pushed down" one Zipf position, and the resulting distribution is re-normalized so the total probability is again 1.0. In fetch-repeatedly systems, we set the object arrival rate to 0. Objects may be updated, but for simplicity we ignore the second-order effect of completely new objects on request behavior.
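The sketch below is a minimal, illustrative implementation of this request-generation process; the class and its parameter names are invented for the example, and it is not the simulator used for the results here. It uses rejection sampling, which is equivalent to removing fetched objects and re-scaling: among the objects a client has not yet fetched, relative probabilities keep their original Zipf ratios.

    import random

    class FetchAtMostOnceModel:
        def __init__(self, n_objects, n_clients, alpha=1.0):
            # Objects are integer IDs ordered by popularity rank; the
            # weight of the object at rank i (1-based) is i^(-alpha).
            self.ranked = list(range(n_objects))
            self.alpha = alpha
            self.next_id = n_objects
            self.fetched = [set() for _ in range(n_clients)]

        def _weights(self):
            return [i ** -self.alpha for i in range(1, len(self.ranked) + 1)]

        def next_request(self, client):
            # Redraw until the client picks an object it has not fetched;
            # this implements removal plus re-scaling implicitly.
            done = self.fetched[client]
            weights = self._weights()
            while True:
                obj = random.choices(self.ranked, weights=weights, k=1)[0]
                if obj not in done:
                    done.add(obj)
                    return obj

        def insert_object(self):
            # Object birth: the newcomer takes a rank drawn from the Zipf
            # distribution; objects of equal or lesser popularity are
            # pushed down one rank position.
            weights = self._weights()
            rank = random.choices(range(len(self.ranked)), weights=weights, k=1)[0]
            self.ranked.insert(rank, self.next_id)
            self.next_id += 1

A driver loop would call next_request() twice per client-day, on average, and insert_object() at rate λO, matching the parameters described above.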
While our trace shows all requested objects, it cannot observe the total object population, since many available objects were never accessed. However, total object population is a key parameter of our model, as it influences the amount of overlap that will likely occur in requests from different clients. We therefore estimated a base value for total object population by back-inference: how many large objects are most likely to have existed in total, given that we saw about 18,000 distinct large objects requested in the trace? We find that a total population of about 40,000 large-media objects is consistent with the trace data; therefore, we use this number as the base value. This number is also comparable to statistics that describe commercial movie releases: the Internet Movie Database reports between 50,000 and 60,000 movie releases world-wide over the past 20 years [34].

To quantify file-sharing effectiveness, we use the hit rate that the aggregate workload experiences against a 100% available shared cache with LRU replacement, whose size we vary in each experiment. Selected experiments using optimal replacement showed no qualitative differences from LRU results, and quantitative differences varied by only a few percent. For Web (fetch-repeatedly) scenarios, we make the optimistic assumption that all objects are cachable and are never updated.
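For concreteness, the hit rate of a request stream against such a shared LRU cache can be computed as follows. This is an illustrative sketch assuming equal-sized objects, per the simplification above.

    from collections import OrderedDict

    def lru_hit_rate(request_stream, cache_size):
        # request_stream: iterable of object IDs; cache_size in objects.
        cache = OrderedDict()
        hits = total = 0
        for obj in request_stream:
            total += 1
            if obj in cache:
                hits += 1
                cache.move_to_end(obj)         # mark as most recently used
            else:
                cache[obj] = True
                if len(cache) > cache_size:
                    cache.popitem(last=False)  # evict least recently used
        return hits / total if total else 0.0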
Figure 8: File-sharing effectiveness diminishes with client age. After an initial cache warm-up period, the request stream generated by fetch-at-most-once client behavior experiences rapidly decreasing cache performance, while the stream generated by fetch-repeatedly clients remains stable. This effect is shown for various cache sizes (1000, 2000, and 4000 objects).

4.2 File-Sharing Effectiveness Diminishes with Client Age

Imagine an organization experiencing current demand for external bandwidth due to fetches of P2P file-sharing objects. How should this organization expect bandwidth demand to change over time, given a shared proxy cache? We address this question using the "static" case in which no new clients or objects arrive. This static analysis allows us to focus on one factor at a time (fetch-at-most-once behavior, in this case). We relax these assumptions in later subsections to show the impact of other factors, such as new object and client arrivals.

Figure 8 shows hit rate against time for various shared cache sizes, assuming that at time zero no clients have fetched any objects. After a brief cache warm-up period, hit rate decreases as clients age, even for a cache that can hold 10% of all file-sharing objects. This is because fetch-at-most-once clients consume the most popular objects early. Later requests are selected with nearly uniform likelihood from an increasingly large number of objects. That is, the system evolves towards one in which there is no locality, and objects are chosen at random from a very large space.

This behavior suggests that if client request rates remain constant over time, the external bandwidth load they present increases, since more of their requests are directed to objects that are only available externally. Conversely, if we hope to have stable bandwidth demand over time, clients must behave in a way that reduces the intensity of their requests in a manner consistent with the shape of the hit-rate decreases shown in Figure 8.

The decrease in hit rate over time is a strong property of fetch-at-most-once behavior. The underlying popularity distribution need not be heavy-tailed for it to occur. We have performed experiments using initial object popularity distributions that have higher locality (i.e., Zipf parameters larger than 1.0). In a fetch-repeatedly context, the larger skew of these distributions towards the most popular objects makes file sharing easier and hit rates rise. For fetch-at-most-once systems, hit rates start out much higher, but as clients age they fall off even more sharply than in Figure 8. Thus, even if the file-sharing system evolves in a way that increases the popularity of their most requested objects, the hit rate of existing clients on existing objects will still decrease towards zero as the clients age.
Figure 9: Object arrivals improve performance. Cache hit rates improve with new object arrival rates for fetch-at-most-once clients because they replenish the supply of popular objects. This improvement is shown across varying cache sizes (500 to 8000 objects). The X-axis shows the global object arrival rate relative to the average client's request rate. Since the average client makes 2 requests per day, the point x = 1 implies that 2 new objects arrive globally per day.

4.3 New Object Arrivals Improve Performance

The decay of hit rate with client age explains another surprising characteristic of P2P file-sharing systems: while Web performance suffers due to object updates, object arrivals are actually beneficial in a file-sharing system. This is because arrivals replenish the supply of popular objects that are the source of file-sharing hits.

Figure 9 shows this effect. We repeated the previous simulation, but this time introduced a non-zero object arrival rate. Over any realistic range of arrival rates [Footnote 5: In the limit, when the object arrival rate is high enough, cache hit rate goes back to zero: objects arrive so fast that they are displaced by even newer objects before two client requests can be made.], hit rates increase, approaching at maximum the hit rate of an equivalent fetch-repeatedly system. For parameters set to the base values of our model (in which clients average two requests per day), an object arrival rate as small as twelve new objects introduced worldwide per day compensates for nearly all the loss due to client aging.

The arrival of new objects in a P2P file-sharing system is therefore an important rejuvenating force that counterbalances fetch-at-most-once behavior. Without new popular objects to choose from, existing clients quickly exhaust the set of popular objects, after which they are forced to choose from the remaining heavy tail of unpopular objects. Without the infusion of new objects, the workload in a fetch-at-most-once system loses its locality over time.

4.4 New Clients Cannot Stabilize Performance

Because new clients have higher hit rates than old clients, it might be possible for new clients joining a P2P file-sharing system to compensate for the performance loss due to the aging of existing clients. The infusion of new clients may therefore be an equivalent rejuvenating force to the infusion of new objects. Unfortunately, in any practical sense, this turns out not to be the case.

Figure 10: Client arrivals cannot stabilize performance. With a constant client arrival rate, hit rate decreases with time. To maintain a constant hit rate, the client arrival rate must increase exponentially with time.

Figure 10 shows two different results. First, we examine hit rate over time when new clients are introduced at a constant rate. Initially, new clients bring up the hit rate, but eventually the constant arrival rate cannot compensate for the increasing numbers of old clients. Second, Figure 10 shows an estimate of the arrival rate needed to keep the hit rate constant as the system ages. An arrival rate above this line improves the average hit rate, while one below it allows the hit rate to decrease. This "break-even arrival rate" rises steeply over time, too steeply to be realized in practice over more than a short period. That, plus the fact that the overall bandwidth requirement increases in proportion to population size even when hit rate is stable, leads us to conclude that the introduction of new clients cannot compensate for the hit-rate penalty of clients aging in a P2P file-sharing system.

4.5 Model Validation

A primary goal of our model was to capture the peculiar characteristics of fetch-at-most-once systems and the importance of new object and client arrivals. While we are confident about many aspects of our model, some of our assumptions cannot be validated against trace data. For example, we assume that client requests are governed by a Zipf distribution and that object arrivals also obey a variant of Zipf's law in terms of where they are placed in the overall popularity distribution. Even though we cannot directly verify these assumptions, we can verify that the observed behavior in our trace is consistent with our model.

We validated our model by using it to replicate the object popularity distribution measured in our trace. To do this, we parameterized the model from the trace data to the extent possible and then compared the popularity distributions generated by the model (using simulation) with those observed in the trace. We emphasize that we are not driving the simulation with the detailed trace and simply getting out what we put in. The simulation is driven synthetically from our model, with rate and size parameters set from the average values measured in the trace. We set distributional parameters that cannot be obtained from any trace to the values shown in Table 4.

It is not possible to set λO (the arrival rate of new objects) with any confidence from our trace data, since our trace cannot measure the worldwide introduction of new objects. For that reason, we leave the object arrival rate as a free parameter, adjusting it to obtain as tight a correspondence between the model and the measured data as possible.
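Schematically, this fitting amounts to a one-dimensional search over candidate arrival rates. The sketch below is illustrative only; simulate() stands in for a run of a model like the one sketched earlier.

    import math

    def fit_arrival_rate(measured, candidate_rates, simulate):
        # measured: per-rank request counts from the trace;
        # simulate(rate): per-rank request counts produced by the model.
        # Choose the rate minimizing squared error in log space, where
        # Zipf-like curves are easiest to compare.
        def distance(sim):
            n = min(len(sim), len(measured))
            return sum((math.log1p(sim[i]) - math.log1p(measured[i])) ** 2
                       for i in range(n))
        return min(candidate_rates, key=lambda r: distance(simulate(r)))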
Figure 11: Predicted versus measured object popularity. The popularity curves from our model and the actual trace data match remarkably well. This supports our conjecture that the measured popularity distribution is non-Zipf because of fetch-at-most-once behavior.

Figure 11 shows the results. With λO set to 5,475 new objects per year, the popularity distribution predicted by the model is remarkably close to what we observed. It also clearly deviates from Zipf because of the influence of fetch-at-most-once behavior and object arrivals. The value λO = 5,475 is reasonable; in comparison, the Internet Movie Database tracked approximately 10,606 new objects worldwide during 2002.

4.6 Summary

This section developed a model of P2P file-sharing behavior, with client requests based on an underlying Zipf distribution. Based on this model, our analysis shows that:

1. Fetch-at-most-once client behavior, caused by the immutability of objects in P2P file-sharing systems, leads to a significant deviation from Zipf popularity distributions.

2. As a result, without the introduction of new objects and clients, P2P file-sharing system performance decreases over time (bandwidth demands rise), because client requests "slide down the Zipf curve."

3. The introduction of new objects in P2P file-sharing systems acts as a rejuvenating force that counter-balances the impact of fetch-at-most-once client behavior.

4. Introducing new clients does not have a similar effect, because they cannot counteract the hit-rate penalty of client aging, which occurs at the same rate.

The next section examines a scheme for reducing the external bandwidth consumption predicted by our model and shown by our measurements.
5. EXPLORING LOCALITY-AWARE REQUEST ROUTING

Previous studies have shown that a significant fraction of Internet bandwidth is consumed by Kazaa file-sharing traffic [29]. As a result, many organizations now curb P2P file-sharing bandwidth consumption through shaping or filtering tools. This section explores an alternative strategy, namely, the exploitation of locality in the file-sharing workload. By locality exploitation, we mean the more effective use of content available within an organization to substantially decrease external bandwidth usage. We begin by using a cache simulation to show the potential for locality exploitation and then explore the benefits of locality-aware P2P file-sharing request routing within an organization such as a university.

5.1 Measuring Locality in the Workload

The most common technology for capturing locality in an Internet workload is a proxy cache placed at an organizational border. A proxy guarantees that every object is downloaded into the organization at most once, on the cold miss. Additional requests for a previously downloaded object are then satisfied from the proxy without consuming external bandwidth. Simulating an ideal cache (i.e., one with infinite capacity and bandwidth) therefore gives us an upper bound on the bandwidth savings of any locality-aware mechanism, because the cache captures and serves all content transferred into the organization.

Figure 12: Bandwidth savings with an ideal proxy cache. This graph shows the byte hit rate for a simulated ideal cache (i.e., infinite capacity and bandwidth).

Figure 12 graphs the byte hit rate for an ideal, centralized proxy cache, given our trace as its workload. Over all objects, a proxy cache would result in an external bandwidth savings of 86%. In our UW environment, this implies that 86% of the downloaded bytes already existed on other UW-local clients at the time they were downloaded from UW-external clients. It is therefore very clear that substantial untapped locality exists in the Kazaa workload that Kazaa does not exploit. If the university deployed an internal proxy cache for P2P file-sharing content, it would save substantial bandwidth, and therefore money.
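In simulation, this upper bound is simple to compute: the first download of each object is the one cold miss, and every later request for it counts as a hit. The sketch below is illustrative; the input format is an assumption.

    def ideal_cache_byte_hit_rate(requests):
        # requests: (object_id, size_bytes) pairs in trace order.
        seen = set()
        hit_bytes = total_bytes = 0
        for obj, size in requests:
            total_bytes += size
            if obj in seen:
                hit_bytes += size  # already inside the organization
            else:
                seen.add(obj)      # cold miss: first download enters the cache
        return hit_bytes / total_bytes if total_bytes else 0.0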
In practice, IT departments may not wish to support a cache that stores P2P file-sharing content, given the current legal and political problems this could present. For this reason, we explore an alternative to the deployment of a centralized proxy cache: the use of organization-based locality-aware mechanisms for reducing external downloads. These schemes favor organization-internal peers whenever possible to serve data, effectively creating a distributed cache from local peers. There are many potential implementations of such a locality-aware architecture, including:

1. Centralized request redirection: instead of deploying a cache, an organization could deploy a redirector at its boundary. The redirector would index the locations of objects on peers within the organization and route internal clients' requests to other internal peers whenever possible. The redirector should be transparent to the P2P file-sharing protocols.

2. Decentralized request redirection: today's P2P file-sharing systems often employ supernodes, distinguished peers that index content on other peers. Current architectures such as Kazaa are locality-unaware, as our data shows. Through the use of topological distance estimation techniques such as GNP [25], IDMaps [13], or King [17], it may be possible to infuse supernodes with locality awareness, resulting in a fully distributed redirection architecture.

The following sections use trace-based simulation to assess the potential benefits of these locality-aware mechanisms.

5.2 Methodology

We use trace-based simulation to evaluate a locality-aware scheme in which all requests from clients in the University of Washington are redirected (when possible) to other university peers. Our simulated locality-aware mechanism is ideal, in that it has perfect knowledge about which peers are currently up and which objects each peer is willing to serve. We assume that: (1) all peers have infinite storage capacity, (2) once a peer downloads an object, it makes that object available to other peers whenever it is up, and (3) each peer can serve at most 12 concurrent downloads, a number chosen to approximate the behavior of many P2P file-sharing systems, including Kazaa and Gnutella. In our model, each peer has a finite upload bandwidth of 500 Kb/s that is shared across [...]

Figure 13: Bandwidth savings with ideal locality-aware request redirection. This graph accounts for all bytes transferred to peers when using an ideal locality-aware scheme. A request hits if an available local host can serve it. Otherwise, it misses and downloads from an external host. Misses may be cold misses, busy misses (the object exists locally but all available hosts with it are busy), or unavailable misses (the object exists locally but no hosts with it are available).
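The hit and miss categories above can be made concrete with a small classification routine. This is an illustrative sketch of the ideal redirector's decision; the function names and signatures are invented.

    MAX_CONCURRENT = 12  # concurrent uploads per peer, as assumed above

    def classify_request(obj, holders, is_up, active_uploads):
        # holders: local peers that previously downloaded obj;
        # is_up(peer): whether a peer is currently available;
        # active_uploads: peer -> number of in-progress uploads.
        if not holders:
            return 'cold miss'         # no local copy exists yet
        up = [p for p in holders if is_up(p)]
        if not up:
            return 'unavailable miss'  # copies exist, but no holder is online
        free = [p for p in up if active_uploads.get(p, 0) < MAX_CONCURRENT]
        if not free:
            return 'busy miss'         # all online holders are saturated
        # hit: serve from the least-loaded available local peer
        return min(free, key=lambda p: active_uploads.get(p, 0))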
Figure 15: Highly available peers carry the load. A CDF of bytes served and bytes consumed by the peers under a locality-aware mechanism, with peers sorted by availability, for large objects. Most of the bytes served and consumed come from highly available peers.

[...] served by the highly available peers, while few bytes were served by the less available peers. We also show a CDF of bytes consumed by peers on the same graph. As expected, the highly available peers tend to consume most of the bytes in the system, as well as serving them. Highly available peers have more objects than less available peers, which is another reason why they may end up serving more bytes.

It is intuitive that more available peers will serve more bytes. However, it is conceivable that the set of less available peers would also be able to provide adequate object availability. To evaluate this, we re-ran our simulation, attempting to "spread" the load to different subsets of the peer population. First, we concentrated the load on the most available peers: the "including head, excluding tail" line in Figure 16 shows the redirector hit rate as a function of the number of peers that we permitted to serve bytes, selecting highly available peers for inclusion in the group. Next, we concentrated the load on the least available peers; the "excluding head, including tail" line shows the hit rate as a function of the number of peers that we excluded from serving bytes, excluding highly available peers first.

Figure 16: Spreading the load around. The "including head, excluding tail" line shows locality-aware performance with load concentrated on the most available peers; the "excluding head, including tail" line shows performance if the most available peers were excluded from serving content. All data are for large objects only.

Our results indicate that the highly available peers are both necessary and sufficient to obtain high hit rates. If only the top 1000 most available peers served bytes, we would still obtain a hit rate of 64%. However, if we excluded the top 1000 available peers and relied only on the 6153 least available peers, our hit rate would drop to 41%.

5.5 Benefits of Increased Availability

To explore the impact that our conservative estimates of availability had on our results, we re-ran our simulations, artificially augmenting the availability of the peers. To do this, we added a constant number of "hours" of availability to the population to increase the overall average peer availability, but we explored spreading this extra availability across the population in different ways.

First, we added the availability to the most available peers in the population, preferentially adding to the most available peer until that peer was 100% available, then adding to the next-most available peer, and so on; the results of this are shown as the "improve head" line in Figure 17. Next, we added availability to the least available peers, using a similar methodology; this is the "improve tail" line. Finally, we spread the availability uniformly across the population; this is the "improve uniformly" line.

Figure 17: Hit rate vs. availability. The effect of increased average peer availability on byte hit rate. The "improve head" line shows the effect of increasing the availability of the most available peers, the "improve tail" line shows the effect of increasing the availability of the least available peers, and the "improve uniformly" line shows the effect of increasing availability uniformly across peers. All data are for large objects only.

The results show that the impact on hit rate depends on which hosts are made more available. Adding an extra hour of availability to the most available host pays a higher hit-rate dividend than adding that hour to the least available host. We believe this is because the most available hosts also have more files available, as shown in Figure 15.

5.6 Summary

This section demonstrates that there is a tremendous amount of untapped locality in the Kazaa workload. As a result, a large percentage (86%) of externally downloaded bytes in our workload could be avoided by using an organizational proxy. As an alternative to proxy caching, we used trace-driven simulation to explore locality-aware mechanisms that reduce external bandwidth consumption by maximizing the use of content stored at local peers.
mechanisms that reduce external bandwidth consumption ness of request redirection at a different scale (within organi-
by maximizing the use of content stored at local peers. Our zations) and to explore different goals (cache performance).
results show that even with very conservative availability as-
sumptions, a locality-aware P2P file-sharing protocol or an
organizational redirector can achieve significant bandwidth 7. CONCLUSIONS
reductions in our environment. Peer-to-peer file sharing now dominates all other sources
of traffic on the Internet, yet the basic forces that drive this
6. RELATED WORK workload are still poorly understood. In this paper, we an-
alyzed a 200-day trace of Kazaa P2P file-sharing traffic col-
Several measurement studies have characterized the ba- lected at the University of Washington in order to dig deeper
sic properties of peer-to-peer file-sharing systems. Saroiu et into the nature of file-sharing workloads. Our results show
al. [30] analyzed the behavior of peers inside the Gnutella that P2P file-sharing workloads are driven by considerably
and Napster file-sharing systems, showing that there is sig- different processes than the Web. Kazaa is a batch-mode
nificant heterogeneity in peers’ bandwidth, availability, and system with extremely patient users who often wait days, or
transfer rates. A study of AT&T’s backbone traffic [32] even weeks, for their objects to fully download. As Kazaa
confirmed these results, and revealed significant skew in the clients age, they demand less from the system, partially be-
distribution of traffic across IP addresses, subnets, and au- cause of attrition. The objects that Kazaa users exchange
tonomous systems. Bhagwan et al. [6] measured the avail- are large, immutable video and audio objects, causing the
ability of hosts in the Overnet file-sharing system to under- typical client to fetch any given object at most once. The
stand the impact of IP aliasing on measurement methodol- popularity of Kazaa objects changes over time: popularity
ogy. Leibowitz et al. [22] performed a cache study based on tends to be short-lived, and popular objects tend to be re-
measurements of FastTrack-based P2P systems, including cently born.
Kazaa. Several studies [21, 23] have explored how host dy- Based on these results, we conclude that client births and
namics within peer-to-peer networks affect performance and object births are the fundamental processes driving P2P file-
reliability. Our Kazaa trace is over a substantially longer sharing workloads; in contrast, the Web is largely driven by
time period than most other peer-to-peer file sharing stud- changes to objects. We demonstrated that the “fetch-at-
ies, which allows us to draw conclusions about long-term most-once” behavior of clients causes the aggregate popular-
behavior. ity distribution of objects in Kazaa to deviate substantially
Because distributed systems and networks have complex from Zipf curves we typically see for the Web.
behavior, many researchers have sought to find high-level We also demonstrated that there is significant locality in
trends or summary statistics that capture essential proper- the Kazaa workload, and therefore substantial opportunity
ties of their workloads. Breslau et al. [7] explore the impact for caching to reduce wide-area bandwidth consumption.
of Zipf’s law with respect to Web caching, showing that Zipf- We evaluated the impact of topological proximity aware-
like popularity distributions cause cache hit rates to grow ness on Kazaa by simulating an ideal version of the system
logarithmically with population size, as well as other effects. in which nearby clients act as a distributed cache of Kazaa
In this paper, we perform a similar analysis, demonstrating objects for each other. Even with extremely conservative
that Kazaa traffic does not exhibit Zipf-like behavior, and trace-driven estimates of client availability, our simulation
Crovella and Bestavros [9] argue that several factors converge to cause self-similarity in Web traffic, including document size distributions, caching, and user "think time." In a similar spirit, we show how fetch-at-most-once behavior leads to the flattening of the Zipf curve.
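To illustrate the flattening effect, the following self-contained Python sketch (our illustration with arbitrary parameter values, not the paper's simulator) draws client requests from a Zipf(1) distribution, with and without the constraint that a client never re-fetches an object it already holds:

import random
from collections import Counter
from itertools import accumulate

# Illustrative sizes only; these are not parameters taken from the trace.
NUM_OBJECTS = 10_000
NUM_CLIENTS = 1_000
REQUESTS_PER_CLIENT = 100

# Zipf(1) popularity: object i is drawn with probability proportional to 1/(i+1).
CUM_WEIGHTS = list(accumulate(1.0 / (i + 1) for i in range(NUM_OBJECTS)))

def draw_object() -> int:
    # Precomputed cumulative weights let random.choices sample in O(log n).
    return random.choices(range(NUM_OBJECTS), cum_weights=CUM_WEIGHTS)[0]

def aggregate_popularity(fetch_at_most_once: bool) -> Counter:
    """Total requests per object, summed over all clients."""
    counts = Counter()
    for _ in range(NUM_CLIENTS):
        already_fetched = set()
        for _ in range(REQUESTS_PER_CLIENT):
            obj = draw_object()
            if fetch_at_most_once:
                while obj in already_fetched:  # re-draw: the client fetches a new object instead
                    obj = draw_object()
                already_fetched.add(obj)
            counts[obj] += 1
    return counts

for famo in (False, True):
    ranked = sorted(aggregate_popularity(famo).values(), reverse=True)
    head_share = sum(ranked[:10]) / sum(ranked)
    print(f"fetch-at-most-once={famo}: top 10 objects draw {head_share:.1%} of requests")

Without the constraint, the most popular objects dominate the request stream; with it, each client contributes at most one request per object, so the aggregate share of the head collapses, qualitatively matching the flattened popularity curve observed in the trace.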
Many researchers have proposed models of Web and file-sharing systems. Barford and Crovella [5] proposed a generative model of Web traffic, based on ON-OFF behavior of Web clients. Ge et al. [15] proposed an analytical model of P2P file-sharing networks and used it to explore the impact of freeloaders on system performance; however, their model focuses on query characteristics only and is not trace-driven. Wolman et al. [38] derived an analytical model of Web systems to explore how Web caching performance scales with population size, and to demonstrate the limits of cooperative Web caching. In our work, we proposed a model of P2P file-sharing traffic based on fetch-at-most-once client behavior and the rate at which objects and clients join the system. While Wolman's study shows that Web caching is ultimately limited by the rate of change of documents, our study shows that file-sharing performance is ultimately limited by the birth rates of objects and clients.
Request redirection has been explored in the context of content-distribution networks, most recently by Wang et al. [36], who show how redirection strategies affect load balancing, locality, and proximity. We consider the effectiveness of locality-aware request redirection in the context of peer-to-peer file sharing.

7. CONCLUSIONS
In this paper, we analyzed a 200-day trace of Kazaa traffic collected at the University of Washington in order to dig deeper into the nature of file-sharing workloads. Our results show that P2P file-sharing workloads are driven by considerably different processes than the Web. Kazaa is a batch-mode system with extremely patient users who often wait days, or even weeks, for their objects to fully download. As Kazaa clients age, they demand less from the system, partially because of attrition. The objects that Kazaa users exchange are large, immutable video and audio objects, causing the typical client to fetch any given object at most once. The popularity of Kazaa objects changes over time: popularity tends to be short-lived, and popular objects tend to be recently born.

Based on these results, we conclude that client births and object births are the fundamental processes driving P2P file-sharing workloads; in contrast, the Web is largely driven by changes to objects. We demonstrated that the "fetch-at-most-once" behavior of clients causes the aggregate popularity distribution of objects in Kazaa to deviate substantially from the Zipf curves we typically see for the Web.

We also demonstrated that there is significant locality in the Kazaa workload, and therefore substantial opportunity for caching to reduce wide-area bandwidth consumption. We evaluated the impact of topological proximity awareness on Kazaa by simulating an ideal version of the system in which nearby clients act as a distributed cache of Kazaa objects for each other. Even with extremely conservative trace-driven estimates of client availability, our simulation results in a 63% cache hit rate over the population. If deployed in an environment such as the university we traced, a distributed cache would achieve substantial traffic savings.
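For intuition, the following minimal Python sketch (ours, not the paper's simulator; it ignores client availability, object sizes, and partial downloads) conveys the core of such a trace-driven estimate: replaying (client, object) requests and counting a hit whenever another internal client has already downloaded the object.

from collections import defaultdict
from typing import Iterable, Tuple

def distributed_cache_hit_rate(trace: Iterable[Tuple[str, str]]) -> float:
    """Replay (client, object) requests in order; a request is a hit if some
    other client in the organization has previously fetched the object.
    Assumes all prior downloaders remain reachable (idealized peers)."""
    holders = defaultdict(set)  # object -> set of clients holding a copy
    hits = requests = 0
    for client, obj in trace:
        requests += 1
        if holders[obj] - {client}:  # another local client already has it
            hits += 1
        holders[obj].add(client)
    return hits / requests if requests else 0.0

# Hypothetical toy trace: clients A and B fetch overlapping objects.
toy_trace = [("A", "x"), ("B", "x"), ("A", "y"), ("B", "y"), ("B", "z")]
print(f"hit rate: {distributed_cache_hit_rate(toy_trace):.0%}")  # prints 40%

This sketch treats every previous downloader as always on; the simulation described above additionally discounts for trace-driven client availability, and the 63% figure holds even under conservative availability assumptions.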
8. ACKNOWLEDGEMENTS
We wish to thank Brian Youngstrom, who helped us with our tracing infrastructure, and David Richardson, Art Dong, and the other members of the Computing and Communications organization at the University of Washington for their continued support. The guidance of our shepherd, John Wilkes, and our anonymous reviewers was invaluable. We also gratefully acknowledge Tom Anderson, Brian Bershad, Azer Bestavros, Jeff Chase, Mark Crovella, Peter Druschel, Anna Karlin, Scott Shenker, and Andrew Whitaker, whose feedback and discussions sharpened both our research and the presentation of our results. This material is based upon work supported by the National Science Foundation under Grants ITR-0121341 and CCR-0085670 and by a gift from Intel Corporation.

9. REFERENCES
[1] S. Acharya, B. Smith, and P. Parnes. Characterizing user access to videos on the World Wide Web. In Proceedings of ACM/SPIE Multimedia Computing and Networking, January 2000.
[2] E. Adar and B. Huberman. Free riding on Gnutella. First Monday, 5(10), October 2000. http://www.firstmonday.dk/issues/issue5_10/adar/.
[3] J. Almeida, J. Krueger, D. Eager, and M. Vernon. Analysis of educational media server workloads. In Proceedings of the 11th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV '01), Port Jefferson, NY, June 2001.
[4] V. A. F. Almeida, M. G. Cesario, R. C. Fonseca, W. M. Jr., and C. D. Murta. Analyzing the behavior of a proxy server in light of regional and cultural issues. In Proceedings of the Third International WWW Caching Workshop, Manchester, England, June 1998. http://hermes.wwwcache.ja.net/events/workshop/.
[5] P. Barford and M. Crovella. Generating representative Web workloads for network and server performance evaluation. In Proceedings of ACM SIGMETRICS '98, Madison, WI, June 1998.
[6] R. Bhagwan, S. Savage, and G. Voelker. Understanding availability. In Proceedings of the 2nd International Workshop on Peer-to-Peer Systems, Berkeley, CA, December 2002.
[7] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker. Web caching and Zipf-like distributions: Evidence and implications. In Proceedings of IEEE INFOCOM 1999, March 1999.
[8] L. Cherkasova and G. Ciardo. Characterizing locality, evolution, and life span of accesses in enterprise media server workloads. In Proceedings of the 12th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV '02), Miami Beach, FL, May 2002.
[9] M. E. Crovella and A. Bestavros. Self-similarity in World Wide Web traffic: Evidence and possible causes. IEEE/ACM Transactions on Networking, 5(6):835–846, December 1997.
[10] A. Dan, D. Sitaram, and P. Shahabuddin. Scheduling policies for an on-demand video server with batching. In Proceedings of ACM Multimedia 1994, October 1994.
[11] F. Douglis, A. Feldmann, B. Krishnamurthy, and J. C. Mogul. Rate of change and other metrics: A live study of the World Wide Web. In Proceedings of the 1997 USENIX Symposium on Internet Technologies and Systems, December 1997.
[12] R. P. Doyle, J. S. Chase, S. Gadde, and A. M. Vahdat. The trickle-down effect: Web caching and server request distribution. In Proceedings of the Sixth International Workshop on Web Caching and Content Delivery, Boston, MA, June 2000.
[13] P. Francis, S. Jamin, C. Jin, Y. Jin, D. Raz, Y. Shavitt, and L. Zhang. IDMaps: A global Internet host distance estimation service. IEEE/ACM Transactions on Networking, October 2001.
[14] S. Gadde, J. Chase, and M. Rabinovich. Web caching and content distribution: A view from the interior. In Proceedings of the 5th International Web Caching and Content Delivery Workshop, May 2000.
[15] Z. Ge, D. R. Figueiredo, S. Jaiswal, J. Kurose, and D. Towsley. Modeling peer-peer file sharing systems. In Proceedings of IEEE INFOCOM 2003, Santa Fe, NM, October 2003.
[16] C. Griwodz, M. Bar, and L. C. Wolf. Long-term movie popularity models in video-on-demand systems. In Proceedings of ACM Multimedia 1997, Seattle, WA, November 1997.
[17] K. P. Gummadi, S. Saroiu, and S. D. Gribble. King: Estimating latency between arbitrary Internet end hosts. In Proceedings of the Second SIGCOMM Internet Measurement Workshop (IMW 2002), Marseille, France, November 2002.
[18] K. A. Hua and S. Sheu. Skyscraper broadcasting: A new broadcasting scheme for metropolitan video-on-demand systems. In Proceedings of ACM SIGCOMM 1997, Cannes, France, September 1997.
[19] Kazaa. Homepage at http://www.kazaa.com, July 2003.
[20] Keynote Systems Inc. Homepage at http://www.keynote.com, July 2003.
[21] J. Ledlie, J. Taylor, L. Serban, and M. Seltzer. Self-organization in peer-to-peer systems. In Proceedings of the 2002 SIGOPS European Workshop, St. Emilion, France, September 2002.
[22] N. Leibowitz, A. Bergman, R. Ben-Shaul, and A. Shavit. Are file swapping networks cacheable? Characterizing P2P traffic. In Proceedings of the 7th International WWW Caching Workshop, August 2002.
[23] D. Liben-Nowell, H. Balakrishnan, and D. Karger. Analysis of the evolution of peer-to-peer networks. In Proceedings of the 2002 ACM Conference on the Principles of Distributed Computing, Monterey, CA, July 2002.
[24] S. McCanne and V. Jacobson. The BSD packet filter: A new architecture for user-level packet capture. In Proceedings of the Winter USENIX Conference, pages 259–270, 1993.
[25] E. Ng and H. Zhang. Predicting Internet network distance with coordinates-based approaches. In Proceedings of IEEE INFOCOM 2002, New York, NY, June 2002.
[26] Nielsen Netratings, Inc., August 2003. http://www.nielsen-netratings.com.
[27] V. N. Padmanabhan and L. Qiu. The content and access dynamics of a busy Web site: Findings and implications. In Proceedings of ACM SIGCOMM 2000, August 2000.
[28] D. Plonka. University of Wisconsin-Madison, Napster traffic measurement, March 2000. Available at http://net.doit.wisc.edu/data/Napster.
[29] S. Saroiu, K. P. Gummadi, R. J. Dunn, S. D. Gribble, and H. M. Levy. An analysis of Internet content delivery systems. In Proceedings of the Fifth Symposium on Operating Systems Design and Implementation (OSDI 2002), Boston, MA, December 2002.
[30] S. Saroiu, P. K. Gummadi, and S. D. Gribble. A measurement study of peer-to-peer file sharing systems. In Proceedings of Multimedia Computing and Networking (MMCN) 2002, January 2002.
[31] J. Segarra and V. Cholvi. Distribution of video-on-demand in residential networks. Lecture Notes in Computer Science, 2158:50–61, 2001.
[32] S. Sen and J. Wang. Analyzing peer-to-peer traffic across large networks. In Proceedings of the Second SIGCOMM Internet Measurement Workshop (IMW 2002), Marseille, France, November 2002.
[33] W. Tang, Y. Fu, L. Cherkasova, and A. Vahdat. Long-term streaming media server workload analysis and modeling. Technical Report HPL-2003-23, HP Laboratories, January 2003.
[34] The Internet Movie Database, August 2003. http://www.imdb.com.
[35] Video Store Magazine, March 2000. Published by Avanstar Communications, http://www.videostoremag.com.
[36] L. Wang, V. Pai, and L. Peterson. The effectiveness of request redirection on CDN robustness. In Proceedings of the Fifth Symposium on Operating Systems Design and Implementation (OSDI 2002), Boston, MA, December 2002.
[37] A. Wolman, G. Voelker, N. Sharma, N. Cardwell, M. Brown, T. Landray, D. Pinnel, A. Karlin, and H. Levy. Organization-based analysis of Web-object sharing and caching. In Proceedings of the 2nd USENIX Symposium on Internet Technologies and Systems, October 1999.
[38] A. Wolman, G. Voelker, N. Sharma, N. Cardwell, A. Karlin, and H. Levy. The scale and performance of cooperative Web proxy caching. In Proceedings of the 17th ACM Symposium on Operating Systems Principles, December 1999.