
Bahir Dar Institute of Technology

Faculty of Electrical and Computer Engineering,

Department of Computer Engineering

Advanced Computer Networks

A High Speed TCP Study: Characteristics and Deployment Issues

Authors: Evandro de Souza and Deb Agarwal

Lawrence Berkeley National Lab, Berkeley, CA, USA

Reviewed By :

Name ID No

1. Erkihun Mulu----------------------------------------------------BDU1501434

Submitted to: Henock Mulugeta (PhD)


Submission Date: 6/17/2023

Bahir Dar, Ethiopia


Abstract
The current congestion control mechanism used in TCP has difficulty reaching full utilization on
high-speed links, particularly on wide-area connections. For example, the packet drop rate
needed to fill a Gigabit pipe using the present TCP protocol is below currently achievable
fiber optic error rates. High Speed TCP was recently proposed as a modification of TCP’s
congestion control mechanism to allow it to achieve reasonable performance on high-speed wide-area
links. In this paper, simulation results showing the performance of High Speed TCP and the
impact of its use on the present implementation of TCP are presented. Network conditions
including different degrees of congestion, different levels of loss rate, different degrees of bursty
traffic, and two distinct router queue management policies were simulated. The performance and
fairness of High Speed TCP were compared to the existing TCP and to solutions for bulk-data
transfer using parallel streams.

I. INTRODUCTION
Applications demanding high bandwidth, such as bulk-data transfer, multimedia web streaming,
and computational grids, are becoming increasingly prevalent in high performance computing.
These applications often operate over wide-area networks, so performance over the wide-area
network has become a critical issue. The packet drop rate needed to fill a Gigabit pipe using
the present TCP protocol is below currently achievable fiber optic error rates, and TCP's
congestion control responds too slowly at such scales. Without expert attention from network
engineers, most users are unlikely to achieve even 5 Mbps on a single-stream wide-area TCP
transfer, despite the fact that the underlying network infrastructure can support rates of 100 Mbps
or more.
HSTCP (High Speed TCP) is a recently proposed revision of the TCP congestion control
mechanism, specifically designed for use on high-speed wide-area links. Few studies have
examined the issues surrounding its use. In this paper, the performance of HSTCP and the impact of its
use on the present implementation of TCP are analyzed under different network conditions. These
conditions include different degrees of congestion, different levels of loss rate, different degrees
of bursty traffic, and two distinct router queue management policies. It is expected that these
different network conditions present a broad view of the strengths and weaknesses of HSTCP.

II. TCP PERFORMANCE PROBLEMS IN HIGH SPEED LINKS

TCP was first designed in the early 1970s. Much research, development, and standardization effort has
been devoted to TCP/IP technology since then. TCP is widely used in the current Internet and is the
de-facto standard transport-layer protocol.

Congestion management allows the protocol to react to and recover from congestion and operate
in a state of high throughput yet sharing the link fairly with other traffic. Van Jacobson proposed
the original TCP congestion management algorithms. The importance of congestion management
is now widely acknowledged. Many RFCs (Request For Comments) intended to enhance TCP’s
performance have been published within the IETF.
TCP’s congestion management is composed of two important algorithms. The slow-start and
congestion avoidance algorithms allow TCP to increase the data transmission rate without
overwhelming the network. They use a variable called CWND (Congestion Window). TCP’s
congestion window is the size of the sliding window used by the sender. TCP cannot inject more
than CWND segments of unacknowledged data into the network.
TCP’s algorithms are referred to as AIMD (Additive Increase, Multiplicative Decrease) and are
the basis for its steady-state congestion control. TCP increases the congestion window by one
packet per window of data acknowledged, and halves the window for every window of data
containing a packet drop.
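The AIMD behavior described above can be sketched as a simple per-window update. This is an illustrative sketch, not code from the paper; window sizes are counted in segments, whereas real stacks update per ACK in bytes.

```python
# Minimal sketch of TCP's AIMD congestion-avoidance update (per-window view).
# Window sizes are in segments; real TCP stacks update per ACK and in bytes.

def aimd_update(cwnd: float, loss: bool) -> float:
    """Return the new congestion window after one window of data."""
    if loss:
        # Multiplicative decrease: halve the window on a congestion event,
        # but never drop below one segment.
        return max(cwnd / 2.0, 1.0)
    # Additive increase: grow by one segment per window of ACKed data.
    return cwnd + 1.0

cwnd = 10.0
cwnd = aimd_update(cwnd, loss=False)   # 11.0 after a loss-free window
cwnd = aimd_update(cwnd, loss=True)    # 5.5 after a window with a drop
print(cwnd)
```

The `max(..., 1.0)` floor is an assumption for robustness; the halving and one-segment increase are the AIMD rules stated in the text.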
The Problem With the TCP Congestion Avoidance Algorithm
The introduction of high-speed network technologies has opened the opportunity for a dramatic
change in the achievable performance of TCP based applications. Unfortunately, this potential
has not generally been realized. The performance of a TCP connection depends on the
network bandwidth, round-trip time, and packet loss rate. At present, TCP implementations can
only reach the large congestion windows necessary to fill a pipe with a high bandwidth-delay
product when the packet loss rate is exceedingly low. Otherwise, random losses lead to a
significant throughput deterioration when the product of the loss probability and the square of the
bandwidth-delay product is larger than one.
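The loss-rate constraint can be made concrete with the well-known Mathis et al. steady-state model, under which TCP throughput is approximately (MSS/RTT)·sqrt(3/2)/sqrt(p). This is a rough illustration, not a calculation from the paper, and the link parameters below (1 Gbit/s, 1500-byte MSS, 100 ms RTT) are assumptions.

```python
import math

# Invert the Mathis et al. steady-state model, throughput ~ (MSS/RTT)*sqrt(3/2)/sqrt(p),
# to estimate the loss rate p needed to sustain a target sending rate.
# All link parameters here are illustrative assumptions, not values from the paper.

def required_loss_rate(target_bps: float, mss_bytes: int, rtt_s: float) -> float:
    mss_bits = mss_bytes * 8
    return (math.sqrt(1.5) * mss_bits / (rtt_s * target_bps)) ** 2

# A 1 Gbit/s pipe with a 1500-byte MSS and 100 ms RTT:
p = required_loss_rate(1e9, 1500, 0.1)
print(f"{p:.2e}")  # on the order of 1e-8, far stricter than typical link error rates
```

A loss rate this low is the sense in which the required drop rate falls below achievable fiber error rates.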
Proposed Solutions for TCP Congestion Avoidance Performance Problems in High Speed Links
Other solutions that have been proposed to overcome this limitation include:
a) XCP: The XCP (eXplicit Control Protocol) generalizes the ECN (Explicit Congestion
Notification) proposal. Instead of the one-bit congestion indication used by ECN, XCP-enabled
routers inform senders of the degree of congestion at the bottleneck.
b) FAST TCP: FAST TCP [19] proposes that, to maintain stability, sources should scale down
their responses by their individual RTT and links should scale down their responses by their
individual capacity, because it has been shown that the current algorithms can become unstable
as delay increases, and also as network capacity increases.

III. HIGHSPEED TCP FUNDAMENTALS

The High Speed TCP for Large Congestion Windows was introduced in [16] as a modification of
TCP’s congestion control mechanism to improve performance of TCP connections with large
congestion windows.
High Speed TCP is designed to have a different response in environments with a very low congestion
event rate, and to have the standard TCP (referred to in this work as Regular TCP or REGTCP)
response in environments with packet loss rates of 10^-3 or higher. Since it leaves TCP’s behavior
unchanged in environments with mild to heavy congestion, it does not increase the risk of
congestion collapse. In environments with very low packet loss rates (typically lower than 10^-3),
High Speed TCP presents a more aggressive response function.
The High Speed TCP response function is represented by new additive increase and
multiplicative decrease parameters. These parameters modify both the increase and decrease
behavior according to the current value of CWND. In the congestion avoidance phase, its behavior
can be expressed by the following equations:
ACK: CWND ← CWND + a(CWND) / CWND -------- (4)
DROP: CWND ← CWND − b(CWND) × CWND -------- (5)
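The parameters a(CWND) and b(CWND) in equations (4) and (5) can be sketched in code. The parameterization below follows the default values suggested in RFC 3649 (Low_Window = 38, High_Window = 83000, High_P = 10^-7, High_Decrease = 0.1); these defaults are assumptions from the RFC, not values taken from this paper.

```python
import math

# Hedged sketch of HighSpeed TCP's a(w) and b(w), following the default
# parameterization suggested in RFC 3649. All constants below are the RFC's
# suggested defaults, assumed here for illustration.

LOW_WINDOW, HIGH_WINDOW = 38, 83000
LOW_P = 1.5 / LOW_WINDOW ** 2        # loss rate at which standard TCP sustains w = 38
HIGH_P, HIGH_DECREASE = 1e-7, 0.1

def _frac(w: float) -> float:
    """Position of w between Low_Window and High_Window on a log scale."""
    return (math.log(w) - math.log(LOW_WINDOW)) / (math.log(HIGH_WINDOW) - math.log(LOW_WINDOW))

def b(w: float) -> float:
    """Multiplicative-decrease factor: 0.5 at Low_Window, 0.1 at High_Window."""
    if w <= LOW_WINDOW:
        return 0.5
    return 0.5 + _frac(w) * (HIGH_DECREASE - 0.5)

def p(w: float) -> float:
    """Loss rate of the HSTCP response function at window w (log-linear interpolation)."""
    if w <= LOW_WINDOW:
        return LOW_P
    return math.exp(math.log(LOW_P) + _frac(w) * (math.log(HIGH_P) - math.log(LOW_P)))

def a(w: float) -> float:
    """Additive-increase term, chosen so the response function holds in steady state."""
    if w <= LOW_WINDOW:
        return 1.0
    return w ** 2 * p(w) * 2.0 * b(w) / (2.0 - b(w))

def on_ack(cwnd: float) -> float:
    return cwnd + a(cwnd) / cwnd      # equation (4)

def on_drop(cwnd: float) -> float:
    return cwnd - b(cwnd) * cwnd      # equation (5)
```

Below Low_Window the functions reduce to a = 1 and b = 0.5, recovering Regular TCP's AIMD behavior, which is how HSTCP leaves mild-to-heavy congestion behavior unchanged.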

IV. INVESTIGATION
The general purpose of this work was to study the effectiveness of High Speed TCP on high-
speed long-distance links as a mechanism for bulk data transfer. Of particular concern was the
study of fairness with other types of TCP already in use. More details of this study can be found
in the original paper.
The main focus was on the behavior of High Speed TCP and Regular TCP in situations where
both were in steady state or near to steady-state. The TCP congestion avoidance phase was of
particular interest since it is where the AIMD algorithm works, and thus when the High Speed
TCP algorithm runs. Long-lived TCP flows traveling on high-speed and long distance links with
a large amount of data to transmit were the focus. This investigation was developed using a
simple topology scenario to minimize complexity and to reduce the number of variables to
collect and study.

V. METHODOLOGY

1) Network Topology:

The simulation network topology used was a dumbbell with a single bottleneck, as shown
in Figure 3. All traffic passed through the bottleneck link. The bottleneck link bandwidth
was 1 Gbit/s and the link delay was 50 ms. The simulations used two types of router queue
management, DT (DropTail) and RED (Random Early Detection); in the RED case,
ECN was also used. The queue size at each router was the bandwidth-delay product in
packets.
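The queue sizing above is quick arithmetic to reproduce. A 100 ms round-trip time is assumed here (twice the 50 ms one-way bottleneck delay, ignoring access-link delays), which is an assumption rather than a figure stated in the text.

```python
# Queue sized to the bandwidth-delay product in packets, as described above.
# The 100 ms RTT (twice the 50 ms one-way bottleneck delay) is an assumption.

LINK_BPS = 1_000_000_000   # 1 Gbit/s bottleneck bandwidth
RTT_S = 0.100              # assumed 100 ms round-trip time
PACKET_BYTES = 1500        # packet size used in the simulations

bdp_bits = LINK_BPS * RTT_S                      # bits in flight to fill the pipe
queue_packets = int(bdp_bits / (PACKET_BYTES * 8))
print(queue_packets)  # 8333 packets
```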

2) TCP Flows Setup: The TCP flows had the ECN bit set; the packet size was 1500 bytes;
the maximum window size was large enough to not impose limits; random times between
sends were set to avoid phase effects; the flows used a modified version of the Limited
Slow-Start algorithm for large congestion windows. For comparison, the HSTCP flows
were run against TCP SACK implementations. A set of web-like flows and a set of small
TCP flows were also used as background noise for all the simulations. Other TCP flows
were used to represent bursty traffic: short-lived flows that lasted for a few seconds.
3) Data Collection Configuration: The aggregated data was collected in two halves. Only the
second half was of interest, because this research focused on steady-state behavior.
Each experiment was run ten times, for three hundred seconds each. The values
reported are the median of these simulations.
Descriptions of Scenarios for the Experiments
The three sets of flows were each exposed to different network environments. In the first network
environment there were no other traffic sources and no interference beyond that generated
by the REGTCP and HSTCP flows; we refer to this as the Ideal Condition.
The second network environment represented situations with systemic losses (losses
not directly related to congestion); we call it the Lossy Link Condition. Some number of
packets were randomly dropped from the flows at a defined drop rate. The third network
environment explored the reaction of the flow sets to bursty traffic, so we refer to it as the Bursty
Traffic Condition. The bursty traffic was composed of short-lived standard TCP flows running for
a few seconds.

VI. RESULTS FROM THE EXPERIMENTS

This section presents the results of our experiments. All experiments were run with both RED and DT
queuing policy. In the cases where there is a significant difference in the results, results for each
individual queuing policy are presented.

This first experiment allowed us to observe the basic behavior of a single REGTCP or a single HSTCP flow
run in isolation. The experiment was run only once, without external interference.

The REGTCP flow shows slower growth than the HSTCP flow: it takes around 300 seconds to reach the
bandwidth limit in congestion avoidance, whereas the HSTCP flow reaches this point before 50 seconds
(RED case). The second observation is that HSTCP has an oscillatory behavior with a very short period.
The third important observation is the influence of router queue management on the behavior of both
flows.

A. Comparison with Regular TCP

As was established in the previous section, High Speed TCP performs better than Regular TCP
on high-speed long-distance links. The major drawback of Regular TCP is that it dramatically
reduces the size of the congestion window in response to a congestion event, while its ACK-
clocked congestion window grows in increments of one. This leads to slow recovery from a
congestion event when the congestion window is very large, and leaves the link at a low
level of utilization for significant periods of time. In comparison, High Speed TCP cuts the
window less and grows it faster, so its recovery takes less time. This characteristic increases its
average link utilization. In the presence of systemic losses, Regular TCP flows show poor link
utilization. For the conditions of our study, a link loss rate between 10^-5 and 10^-4 prevented
Regular TCP from making reasonable use of the available link bandwidth (less than 50% in our
case). In this range, the High Speed TCP flows were able to use almost double the bandwidth
used by Regular TCP.

VIII. Future Work

As a result of this work, I found that High Speed TCP indeed performs better than Standard TCP
on high-speed long-distance links. High Speed TCP increases its throughput faster and recovers
from a congestion event in less time. This characteristic increases its average link
utilization.
The pressure for more network bandwidth has produced technologies that have increased the
available bandwidth capacity several times over. New technologies, such as optical links, have opened
the opportunity to transfer several terabytes in a relatively short time. However, standard TCP
has difficulty reaching full utilization of optical links, particularly on wide-area
connections, so many network applications are unable to take full advantage of these new high-
speed networks and utilize the available bandwidth.
I saw that High Speed TCP only requires changes to the TCP stack of the sender. This
represents an advantage over other approaches to bulk data transfer such as parallel streams, where it is
necessary to change the applications and to know a priori the number of parallel flows to use. Fairness
is a concern for the deployment of High Speed TCP; however, our opinion is that High Speed
TCP presents better adaptability to an environment of variable congestion event rates.

IX. CONCLUSION

TCP has difficulty fully utilizing network links with a high bandwidth-delay product. High Speed
TCP outperforms TCP under this condition, and has an adaptability that makes an incremental
adoption approach easy. High Speed TCP is easy to deploy, avoiding changes to routers and
applications. High Speed TCP is appropriate for bulk data transfer applications because it is able to
maintain high throughput under different network conditions, and it is easy to deploy
compared with other solutions already in use. A point of concern is its fairness at low speeds,
mainly in networks with drop-tail routers. A better relationship with TCP may be achieved by
adjusting its three parameters, in particular Low Window. At high speeds it is not possible to
maintain a fair relationship, nor is it desirable, under the present concept of fairness. Pacing could
help with drop-tail routers and improve fairness. Further studies in this area should be carried out.
