
Study Group: 12
Working Party:
Question(s): 17
Source: LM Ericsson
Title: Capacity, Utilization and Available Bandwidth
Meeting, date: 3 Dec 2008 interim meeting
Intended type of document (R-C-TD): C

Contact: Andreas Johnsson, LM Ericsson, Sweden    Tel: +46 10 7142509    Email: andreas.a.johnsson@ericsson.com
Contact: Svante Ekelin, LM Ericsson, Sweden    Tel: +46 10 7146641    Email: svante.ekelin@ericsson.com
Contact: Christofer Flinta, LM Ericsson, Sweden    Tel: +46 10 7133140    Email: christofer.flinta@ericsson.com

Abstract
The intent of this contribution is to investigate the relations between terminology and interpretation concerning network throughput, bulk transfer capacity, available bandwidth and related performance parameters as defined by ITU-T, MEF, IETF and the academic literature. Further, in this contribution two different definitions of the parameters capacity, utilization and available bandwidth are compared to each other. The first definition has been widely used in the research community for several years, while the second is a recent contribution by IETF. The comparison indicates that the two definitions are in line with each other, but each has its advantages. Therefore, we recommend merging these definitions, that is, adopting the IETF definitions with the important addition of a tight link capacity definition. This way we provide a comprehensive set of performance parameters in this area. We also recommend communicating a liaison to e.g. IETF, MEF and 3GPP about this in order to align the standardization efforts.

Contents

1. Introduction
2. Performance parameters
   2.1. Intuitive concepts of capacity, utilization and available bandwidth
   2.2. ITU-T Y.1563 Ethernet Frame Rate
   2.3. MEF Throughput
   2.4. IETF RFC 1242 Throughput
   2.5. IETF RFC 3148 Bulk transfer capacity
3. Capacity, utilization and available bandwidth and their relation to performance parameters as defined by ITU-T, MEF and IETF
4. Definitions of capacity, utilization and available bandwidth
   4.1. Definition 1 from the paper "Real-Time Measurement of End-to-End Available Bandwidth using Kalman Filtering"
   4.2. Definition 2 from IETF RFC 5136
   4.3. Relation between definition 1 and 2
      4.3.1. Parameter 1
      4.3.2. Parameter 2a and 2b
      4.3.3. Parameter 3
      4.3.4. Parameter 4
      4.3.5. Parameter 5
      4.3.6. Parameter 6
5. Use cases for capacity and available bandwidth measurements
6. Discussion
7. References


1. Introduction
In packet-switched networks, performance parameters such as one-way delay, delay variation, throughput, bulk transfer capacity and packet loss are vital when studying the behavior of a network. In addition, performance indicators such as availability (the fraction of time a network service is available) or service response time are often used. In this contribution the set of performance parameters capacity, utilization and available bandwidth is discussed and compared to other performance parameters, such as throughput and bulk transfer capacity, as defined in ITU-T, MEF and IETF. At the ITU-T SG12 meeting in May 2008, Contribution 149, "BART: Available Bandwidth in Real Time", was discussed. This contribution is based on the conclusions from that meeting, which recommended further investigation by consulting IETF RFC 5136, "Defining Network Capacity", to compare the relevant performance metrics.

2. Performance parameters
In this section, the intent is to investigate existing terminology and interpretation concerning network throughput as defined in ITU-T, MEF and IETF and to compare those to intuitive concepts of capacity, utilization and available bandwidth. This comparison shows that e.g. available bandwidth and tight link capacity can provide important additional information about a network.

2.1. Intuitive concepts of capacity, utilization and available bandwidth


The capacity of a link is the maximum transfer rate at a given protocol layer (e.g. the IP layer) and the utilization is the fraction of the link capacity that is used. Available bandwidth refers to the unused capacity of a link during a certain time period. See Figure 1 for an illustration. The path available bandwidth (often referred to as the end-to-end or point-to-point available bandwidth) is then defined as the minimum value of available bandwidth of the links on the path between two network elements.


Figure 1: A sample network path to illustrate the concepts of capacity, utilization and available bandwidth. The network path consists of three consecutive links. In this example the leftmost link has the lowest link capacity and is called the narrow link. The rightmost link has the lowest available bandwidth and is called the tight link (terminology from [5]).

The term utilization can have two interpretations: either it is the traffic load measured in bits/second, or it is the dimensionless ratio of traffic load to link capacity. The available bandwidth of a network path can be understood as the maximum additional data transfer rate between a sender and a receiver that does not saturate/congest the network path in between.
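As a small illustration of these concepts (not part of any cited definition), the following Python sketch computes the per-link available bandwidth and identifies the narrow link, the tight link and the path available bandwidth for a hypothetical three-link path like the one in Figure 1; all capacity and load figures are invented example values.

# Minimal sketch of the Section 2.1 concepts (illustrative numbers only).
# Each link is described by its capacity and its traffic load at a given protocol layer.
links = [
    {"name": "link 1", "capacity_mbps": 10.0,  "utilized_mbps": 2.0},   # lowest capacity
    {"name": "link 2", "capacity_mbps": 100.0, "utilized_mbps": 40.0},
    {"name": "link 3", "capacity_mbps": 50.0,  "utilized_mbps": 49.0},  # lowest available bandwidth
]

for link in links:
    # Available bandwidth of a link = unused capacity during the observation period.
    link["available_mbps"] = link["capacity_mbps"] - link["utilized_mbps"]

narrow_link = min(links, key=lambda l: l["capacity_mbps"])   # smallest capacity
tight_link = min(links, key=lambda l: l["available_mbps"])   # smallest available bandwidth

# Path available bandwidth = minimum of the per-link available bandwidths.
path_available_mbps = tight_link["available_mbps"]

print("Narrow link:", narrow_link["name"])
print("Tight link:", tight_link["name"])
print("Path available bandwidth: %.1f Mbps" % path_available_mbps)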

2.2. ITU-T Y.1563 Ethernet Frame Rate


In ITU-T Y.1563 (note: it is under approval) the point-to-point octet-based Ethernet frame rate (FOR) is defined as follows: "For a given population of interest, the octet-based Ethernet frame rate at an egress MP is the total number of octets transmitted in Ethernet frame payloads and headers that result in an Ethernet frame transfer reference event at that egress MP during a specified time interval divided by the time interval duration, TPOI (equivalently, the number of octets in the Ethernet frames resulting in Ethernet frame reference events per service-second)." That is, the Ethernet frame rate is a performance parameter that supplies a numeric value for the number of octets passing the MP during a certain time interval, divided by that interval. This definition can intuitively be interpreted as the utilization in Figure 1, if the MP is associated with one of the links. Note, however, that the concept of utilization in Section 2.1 also includes frames and packets that are not correctly received. The available bandwidth, as outlined above, is rather the unused capacity at the MP.

2.3. MEF Throughput


MEF has, in a liaison (Liaison to ITU-T SG12/Q17, SG13/Q5 from the MEF Delivered Throughput project, 29 Oct 2008), informed ITU-T about its activities regarding delivered throughput. The goal is to provide a live-traffic measure that replaces the customers' need to conduct in-service throughput testing (i.e. active probing). According to MEF, already existing industry standards related to throughput include, for example:
- Ethernet nominal throughput, which defines the link capacity at the physical layer (a theoretical value in bits/second).
- Ethernet effective throughput, which defines the link capacity using a specific protocol (Ethernet, IP, et cetera). When calculating this performance parameter, e.g. the maximum or minimum frame size can be used.

The delivered throughput is then defined based on counters of the number of good service frame bits entering and exiting a reference point. That is, the delivered throughput is a performance parameter that measures the number of good bits exiting the reference point in real time, while the offered throughput is a performance parameter that measures the number of good bits entering the reference point in real time.

Both the nominal and effective throughput are comparable to the link capacity in Figure 1.

The delivered and offered throughput relate to the utilization measured at the ingress or egress of a link (i.e. the number of bits passing during a certain time frame). Note, however, that the concept of utilization in Section 2.1 also includes frames and packets that are not correctly received.

2.4. IETF RFC 1242 Throughput


RFC 1242 [4] offers yet another throughput definition: "The maximum rate at which none of the offered frames are dropped by the device." In a complementary discussion the definition is further explained: the throughput figure allows vendors to report a single value which has proven to have use in the marketplace. Since even the loss of one frame in a data stream can cause significant delays while waiting for the higher-level protocols to time out, it is useful to know the actual maximum data rate that the device can support. Measurements should be taken over an assortment of frame sizes, with separate measurements for routed and bridged data in those devices that support both. If there is a checksum in the received frame, full checksum processing must be done. That is, the definition provides a numerical value describing the maximum bit rate (using some frame size) that the device and the link can support. This definition of throughput is similar to the effective throughput in MEF and the intuitive concept of link capacity in Section 2.1.

2.5. IETF RFC 3148 Bulk transfer capacity


In RFC 3148 [3] the bulk transfer capacity (BTC) is defined as the ability of a network to transfer data with a single congestion-aware transport connection (e.g. TCP). The data must be received correctly. The BTC is calculated as

BTC = D / T

where D is the amount of data sent (not including packet headers) and correctly received, and T is the elapsed time. RFC 3148 states that the intuitive definition of BTC is the expected long-term average data rate (bits per second) of a single ideal TCP implementation over the path in question. Since the definition of BTC is affected by higher-layer properties, and since BTC is calculated for one end-to-end or source-to-destination flow at a time, the definition of BTC differs from the throughput definitions in IETF, MEF and ITU-T. So, how is BTC related to e.g. available bandwidth? We elaborate on this issue in Section 3.
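As a minimal illustration of the formula above, the following Python sketch computes BTC = D / T for a single congestion-aware transfer; the payload size and duration are hypothetical example values, not taken from any measurement.

# Minimal sketch of BTC = D / T (illustrative numbers only).
payload_bits = 500 * 10**6 * 8   # D: correctly received payload, excluding packet headers (bits)
elapsed_seconds = 120.0          # T: elapsed time of the single TCP transfer (seconds)

btc_bps = payload_bits / elapsed_seconds
print("BTC: %.2f Mbit/s" % (btc_bps / 1e6))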

3. Capacity, utilization and available bandwidth and their relation to performance parameters as defined by ITU-T, MEF and IETF
This section summarizes the similarities and differences between, on the one hand, the concepts capacity, utilization and available bandwidth from Section 2.1 and, on the other hand, throughput and BTC as defined by ITU-T, MEF and IETF. Table 1 provides a mapping; for discussion, see the corresponding sections above.


#   Performance parameter               Corresponding concept in Section 2.1
1   ITU-T Y.1563 Ethernet frame rate    Utilization
2   MEF Nominal throughput              Capacity at the physical layer
3   MEF Effective throughput            Capacity at a given protocol layer
4   MEF Offered throughput              Utilization
5   MEF Delivered throughput            Utilization
6   IETF RFC 1242 throughput            Capacity at a given protocol layer
7   IETF RFC 3148 BTC                   -

Table 1: A summary of how various performance parameters relate to the concepts of capacity, utilization and available bandwidth as outlined in Section 2.1.

As discussed above (and as summarized in Table 1), the definitions of throughput in ITU-T, MEF and IETF are comparable to the link capacity and/or link utilization discussed in Section 2.1. However, the important performance parameters available bandwidth, narrow link capacity and tight link capacity seem to be missing from these definitions.

Figure 2: An example network used to illustrate the difference between available bandwidth and BTC.

The fundamental difference between the available bandwidth of a network path and the performance parameter BTC is best illustrated with an example; see Figure 2 above. Assume that node 1 performs a measurement of the BTC on the path between node 1 and node 2. Further, assume that several aggregated TCP flows on the path from router 1 to router 4 consume the whole capacity of the shared link L before the BTC measurement is started. That is, the available bandwidth on link L is zero according to Section 2.1, since the whole link capacity is utilized by the aggregated TCP flows. Starting the BTC measurement will force the congestion-aware TCP flows to back off in order to let the node 1 to node 2 flow get its share of link L. That is, the BTC value is larger than zero and hence differs from the available bandwidth, which is zero. A numeric sketch of this situation is given below.
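To make the contrast concrete, the following Python sketch uses hypothetical numbers (a 100 Mbit/s shared link fully occupied by four TCP flows) and an idealized equal-share model of TCP to show that the measured BTC can be well above zero even though the available bandwidth is zero; the link capacity, flow count and fair-share assumption are illustrative, not part of any cited definition.

# Hypothetical numbers for the Figure 2 scenario (idealized equal-share TCP model).
link_capacity_mbps = 100.0     # capacity of the shared link L
cross_tcp_flows = 4            # aggregated TCP flows from router 1 to router 4
cross_traffic_mbps = 100.0     # they consume the whole link before the BTC test starts

# Available bandwidth (Section 2.1): unused capacity on link L.
available_bw_mbps = link_capacity_mbps - cross_traffic_mbps
print("Available bandwidth on link L: %.1f Mbit/s" % available_bw_mbps)   # 0.0

# BTC measurement: the new node 1 -> node 2 TCP flow forces the existing
# congestion-aware flows to back off; assume an ideal equal share per flow.
btc_mbps = link_capacity_mbps / (cross_tcp_flows + 1)
print("Approximate BTC of the new flow: %.1f Mbit/s" % btc_mbps)          # 20.0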

The conclusion from this section is that the definitions of throughput and BTC provide standardized terminology for several, but not all, of the concepts discussed in Section 2.1. The missing performance parameters are the available bandwidth, the narrow link capacity and the tight link capacity.

4. Definitions of capacity, utilization and available bandwidth

This section discusses two definitions of available bandwidth/available capacity (the terminology differs between the two) and also related performance parameters. The first definition is taken from the paper "Real-Time Measurement of End-to-End Available Bandwidth using Kalman Filtering" [1] (which is in line with e.g. [5]) and the second is from IETF RFC 5136.

4.1. Definition 1 from the paper "Real-Time Measurement of End-to-End Available Bandwidth using Kalman Filtering"
In [1] the following definitions are given:

In the literature, the term available bandwidth has been used in different ways. To avoid any misinterpretations, we want to make clear what we denote by available bandwidth. For more details, see the review article [see references in the cited paper]. Each link j in a network path has a certain capacity, or nominal bandwidth, C_j, determined by the network interfaces in the nodes on each end of the link. This is simply the highest possible bit rate over the link. The nominal bandwidth is quasi-constant, i.e. it typically does not vary. What is varying on a short time-scale is the link load, or cross traffic, X_j = X_j(t, τ). Here, τ is the time resolution at which we are interested in describing traffic fluctuations. So, the cross traffic rate is defined by

X_j = A_j[t - τ, t] / τ

where A_j[t - τ, t] is the number of cross traffic bits transmitted over link j during a time interval of length τ.

The time-varying available bandwidth B_j = B_j(t, τ) of link j is defined as:

B_j = C_j - X_j

One of the links along the path has the smallest value of available bandwidth. This link is called the bottleneck link (or tight link, using terminology from [see references in the cited paper]), and it determines the available bandwidth of the path. In other words, for a network path from sender to receiver, the available bandwidth is defined as the minimum of the available link bandwidths along the path:

B = min_j (C_j - X_j)

This is in line with what is denoted by available bandwidth in [see references in the cited paper]. An interpretation of the available bandwidth B is: the smallest increase in traffic load from sender to receiver at time t which causes congestion at some hop on the network path. This interpretation is closely related to the method of measuring the available bandwidth by sending probe traffic at various rates, and determining the threshold rate at which the probe traffic, in conjunction with the cross traffic, transiently experiences congestion, i.e., measurement by self-induced congestion. In fact, one might argue that the rate thus measured can be seen as the definition of available bandwidth.
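The self-induced congestion idea can be sketched as follows: probe the path at increasing rates and report the highest rate that does not yet cause congestion. The Python sketch below simulates this against a simple fluid model of the tight link; the probe rates, the link parameters and the congestion test (probe rate plus cross traffic exceeding capacity) are illustrative assumptions and not the measurement method of [1].

# Sketch of measurement by self-induced congestion against a fluid model of the
# tight link (illustrative parameters; a real tool probes an actual network path).
tight_link_capacity_mbps = 50.0
cross_traffic_mbps = 30.0            # unknown to the prober in reality
true_available_bw_mbps = tight_link_capacity_mbps - cross_traffic_mbps

def congested(probe_rate_mbps):
    # Congestion occurs when probe traffic plus cross traffic exceeds the capacity.
    return probe_rate_mbps + cross_traffic_mbps > tight_link_capacity_mbps

def estimate_available_bw(max_rate_mbps=100.0, step_mbps=1.0):
    rate = step_mbps
    while rate <= max_rate_mbps:
        if congested(rate):
            # Threshold: the last rate before self-induced congestion appeared.
            return rate - step_mbps
        rate += step_mbps
    return max_rate_mbps

print("True available bandwidth: %.1f Mbit/s" % true_available_bw_mbps)
print("Estimated available bandwidth: %.1f Mbit/s" % estimate_available_bw())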

4.2. Definition 2 from IETF RFC 5136


In the informational IETF RFC 5136 [2], definitions of IP-type-P link capacity, path capacity, link usage, link utilization, available link capacity and available path capacity are provided. IP-type-P means that all definitions are made at the IP layer (i.e. layer 2 in the TCP/IP stack and layer 3 in the OSI stack) and that each definition is related to a specific type-P flow, which could be manifested by a diffserv class, a VLAN tag or other technologies and protocols. For more information about the definitions, see RFC 5136.

Definition: IP-type-P Link Capacity
We define the IP-layer link capacity, C(L,T,I), to be the maximum number of IP-layer bits that can be transmitted from the source S and correctly received by the destination D over the link L during the interval [T, T+I], divided by I.

Definition: IP-type-P Path Capacity
Using our definition for IP-layer link capacity, we can then extend this notion to an entire path, such that the IP-layer path capacity simply becomes that of the link with the smallest capacity along that path.
C(P,T,I) = min {1..n} {C(Ln,T,I)}

Definition: IP-type-P Link Usage
The average usage of a link L, Used(L,T,I), is the actual number of IP-layer bits from any source, correctly received over link L during the interval [T, T+I], divided by I.

Definition: IP-type-P Link Utilization
We express usage as a fraction of the overall IP-layer link capacity.
Util(L,T,I) = ( Used(L,T,I) / C(L,T,I) )

Definition: IP-type-P Available Link Capacity
We can now determine the amount of available capacity on a congested link by multiplying the IP-layer link capacity with the complement of the IP-layer link utilization. Thus, the IP-layer available link capacity becomes:
AvailCap(L,T,I) = C(L,T,I) * ( 1 - Util(L,T,I) )

Definition: IP-type-P Available Path Capacity
Using our definition for IP-layer available link capacity, we can then extend this notion to an entire path, such that the IP-layer available path capacity simply becomes that of the link with the smallest available capacity along that path.
AvailCap(P,T,I) = min {1..n} {AvailCap(Ln,T,I)}
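The RFC 5136 quantities above are straightforward to compute once per-link bit counts over an interval are known. The following Python sketch shows the arithmetic for a hypothetical two-link path; the capacities, counted bits and interval length are invented example inputs, and the function names are our own shorthand, not taken from the RFC.

# Sketch of the RFC 5136 arithmetic for a hypothetical two-link path.
# Inputs per link: IP-layer link capacity C(L,T,I) in bits/s and the number of
# IP-layer bits correctly received during the interval [T, T+I].
INTERVAL_S = 1.0  # I, the measurement interval length in seconds

links = [
    {"name": "L1", "capacity_bps": 100e6, "received_bits": 90e6},
    {"name": "L2", "capacity_bps": 50e6,  "received_bits": 30e6},
]

def used(link):                      # Used(L,T,I) = received bits / I
    return link["received_bits"] / INTERVAL_S

def util(link):                      # Util(L,T,I) = Used(L,T,I) / C(L,T,I)
    return used(link) / link["capacity_bps"]

def avail_cap(link):                 # AvailCap(L,T,I) = C(L,T,I) * (1 - Util(L,T,I))
    return link["capacity_bps"] * (1.0 - util(link))

path_capacity_bps = min(l["capacity_bps"] for l in links)   # C(P,T,I)
avail_path_cap_bps = min(avail_cap(l) for l in links)       # AvailCap(P,T,I)

print("Path capacity: %.1f Mbit/s" % (path_capacity_bps / 1e6))              # 50.0 (narrow link L2)
print("Available path capacity: %.1f Mbit/s" % (avail_path_cap_bps / 1e6))   # 10.0 (tight link L1)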

4.3. Relation between definition 1 and 2


In Table 2 below a mapping between definition 1 and 2 is provided.

Parameter #   Definition 1                             Definition 2
1             Link capacity                            IP-type-P Link capacity
2a            Tight link capacity                      -
2b            -                                        IP-type-P Path capacity
3             Link load (a.k.a. cross traffic)         IP-type-P Link usage
4             Link load divided by link capacity       IP-type-P Link utilization
5             Link available bandwidth                 IP-type-P Available link capacity
6             Path available bandwidth (end-to-end)    IP-type-P Available path capacity

Table 2: Mapping between two definitions of bandwidth-related performance parameters.

Both definitions 1 and 2 take time into account when defining parameters 4 to 6. This is important since the utilization (and thus also the available bandwidth/capacity) may change over time. Definition 2 from RFC 5136 goes one step further by also defining the link capacity as a function of time. This is crucial in scenarios where shared-media access links constitute parts of, or the whole, network path (such as in WiFi 802.11, HSPA or switching technology networks) [6][7]. In wireless networks the capacity may vary over time due to radio quality variations, the distance between sender and receiver, usage of the medium by other nodes, et cetera. Below, the parameters are discussed in more detail.

4.3.1. Parameter 1
Both definitions state that the link capacity/IP-type-P link capacity is defined as the highest possible bit rate, although using different wording. Definition 1 states that this parameter is typically constant, while definition 2 introduces a time variable; definition 2 is thus more general, as discussed above. This parameter is similar to the IETF RFC 1242 throughput and the MEF nominal/effective throughput.

4.3.2. Parameter 2a and 2b


Parameters 2a and 2b actually describe two different capacity parameters of a network path. Parameter 2a in definition 1 corresponds to the capacity of the tight link, while parameter 2b in definition 2 corresponds to the capacity of the narrow link (the path capacity). As an illustration, in Figure 1 the tight link is the rightmost link and the narrow link the leftmost link. Both parameters are important when characterising a network path.

4.3.3. Parameter 3

Definition 1 describes the link load/cross-traffic rate X_j as the number of bits transferred over link j during a time interval of length τ, divided by the interval length. Definition 2 has a corresponding definition, IP-type-P Used(L,T,I), although it makes clearer that the calculations should be done at the IP layer. This parameter is similar to the ITU-T Ethernet frame rate and the MEF offered/delivered throughput.

4.3.4. Parameter 4

Parameter 4 is essentially a normalization of the used capacity, obtained by dividing it by the link capacity. This gives a number between zero and one.

4.3.5. Parameter 5

Definition 1 describes the available bandwidth B_j as (parameter 1 - parameter 3). Definition 2 uses the equation [parameter 1 * (1 - parameter 4)] to define AvailCap(L,T,I). The two definitions of parameter 5 are equivalent, as the short calculation below shows.
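The equivalence can be verified by substituting the definitions. Using the RFC 5136 notation for a link L and interval [T, T+I], and assuming (as parameter 3 states) that the link usage Used(L,T,I) plays the role of the cross-traffic rate X_j in definition 1:

AvailCap(L,T,I) = C(L,T,I) * (1 - Util(L,T,I))
                = C(L,T,I) * (1 - Used(L,T,I) / C(L,T,I))
                = C(L,T,I) - Used(L,T,I)

which has the same form as B_j = C_j - X_j in definition 1.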

4.3.6. Parameter 6
Both definitions define parameter 6 through the link in a network path with the minimum time-varying available bandwidth/IP-type-P available link capacity. That is, the minimum over the instances of parameter 5 along a network path defines parameter 6.

5. Use cases for capacity and available bandwidth measurements


The capability of measuring e.g. the available bandwidth between two nodes is useful in several contexts, including network monitoring, call-admission control and server selection. For example, measuring available bandwidth in real time enables congestion control and streaming of audio and video to adapt directly to the available bandwidth, rather than to first-order measures such as loss or delay.

In network monitoring, both the available bandwidth and the tight link capacity are important. One use case for the tight link capacity performance parameter is to further characterize that link: with knowledge of both the available bandwidth and the capacity of the tight link, the utilization of this link can be calculated, as sketched at the end of this section. These parameters combined can help identify common bottlenecks.

One additional important application is to use the available bandwidth for service level agreement (SLA) verification. Operators offering, for example, mobile broadband using HSPA technology traditionally state a theoretical maximum limit of the offered bandwidth; in Sweden that limit is often 7.2 Mbps. Observe that the offered bandwidth relates to the IP-layer bandwidth (not to any radio-related performance parameters). In some countries there are discussions about also introducing a minimum limit; however, this is far from being incorporated in operator offers to their customers. The point-to-point available bandwidth between the mobile device and a test server located on e.g. the Internet describes one parameter of the SLA offered by the operator. Having means to measure it gives customers a way to verify the figures quoted by their operator (which usually only states the maximum theoretical capacity limit of the connection). When testing HSPA connections, the available bandwidth may change depending on where the user is located in relation to base stations, the weather conditions, as well as the number of users in the cell.

One example is a Swedish web-based tool for estimating the up- and downlink speeds of a broadband connection at home or at any other computer connected to the Internet. The service is located at www.bredbandskollen.se and is a Swedish regulator initiative with the intent to let end users test their bandwidth in order to check whether the operator supplies the promised broadband-connection speed. This is a very popular service; however, it is of great interest that such services and tools perform the measurements in a standardized way to allow fair comparisons. Note that the BTC performance parameter can also be used for SLA verification, but it then has to be defined what the operator actually offers when selling a 7.2 Mbps service: is it related to the BTC, the available bandwidth, or something else?
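As a small illustration of the monitoring use case mentioned above, the following Python sketch derives the tight link utilization from a measured path available bandwidth and a measured tight link capacity; the two input values are hypothetical measurement results.

# Hypothetical measurement results for the tight link of a path.
tight_link_capacity_mbps = 50.0      # measured tight link capacity
path_available_bw_mbps = 5.0         # measured path available bandwidth
                                     # (equals the tight link's available bandwidth)

# Utilization of the tight link = 1 - available bandwidth / capacity.
tight_link_utilization = 1.0 - path_available_bw_mbps / tight_link_capacity_mbps
print("Tight link utilization: %.0f%%" % (100 * tight_link_utilization))   # 90%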

6. Discussion
In this contribution we have investigated the performance parameters capacity, utilization and available bandwidth. We have compared these parameters to throughput and bulk transfer capacity as defined by ITU-T, MEF and IETF. This investigation included a comparison of two similar definitions of capacity, utilization and available bandwidth. Definition 1 is from [1], also discussed in ITU-T SG12 Contribution 149, and definition 2 is from IETF RFC 5136. We propose to merge and standardize the definitions in [1] and in IETF RFC 5136 to provide a comprehensive set of performance parameters in this area. That is, we propose adopting the IETF RFC 5136 definitions with the important addition of a tight link capacity definition. We also recommend communicating a liaison to e.g. IETF, MEF and 3GPP about this in order to align the standardization efforts. Our arguments for this are summarized in the following:

1. There are several important use cases for the available bandwidth and related performance parameters, as discussed in Section 5.
2. The available bandwidth and tight link capacity performance parameters are missing in ITU-T and MEF.
3. RFC 5136 provides a generic framework for the definition of available bandwidth and several related performance parameters that can be applied at an arbitrary network layer. Further, the definitions allow for time-dependent performance parameters, and they can be related to an arbitrary timescale.
4. Tight link capacity is missing in IETF RFC 5136. Merging the definitions in [1] and RFC 5136 gives a more comprehensive framework.

7. References
1. Svante Ekelin, Martin Nilsson, Erik Hartikainen, Andreas Johnsson, Jan-Erik Mångs, Bob Melander, Mats Björkman, "Real-Time Measurement of End-to-End Available Bandwidth using Kalman Filtering", in Proceedings of the IEEE NOMS Conference, Vancouver, Canada, 2006.
2. P. Chimento and J. Ishac, "Defining Network Capacity", IETF RFC 5136, http://www.ietf.org/rfc/rfc5136.txt
3. M. Mathis and M. Allman, "A Framework for Defining Empirical Bulk Transfer Capacity Metrics", IETF RFC 3148, http://www.ietf.org/rfc/rfc3148.txt
4. S. Bradner, "Benchmarking Terminology for Network Interconnection Devices", IETF RFC 1242, http://www.ietf.org/rfc/rfc1242.txt
5. R. Prasad, M. Murray, C. Dovrolis, and K. Claffy, "Bandwidth estimation: metrics, measurement techniques, and tools", IEEE Network, November/December 2003.
6. Andreas Johnsson and Mats Björkman, "On Measuring the Available Bandwidth in Wireless Networks", in Proceedings of the Local Computer Networks Conference, Montreal, Canada, Oct 2008.

7. Erik Bergfeldt, Svante Ekelin and Johan M. Karlsson, "Available-Bandwidth Measurements over Mobile Broadband Connections", submitted.
