Hub, Bridge, Switch, Router & Latency

A hub is typically the least expensive, least intelligent, and least complicated of the
three devices. Its job is very simple: anything that comes in one port is sent out to all the
others. That's it. Every computer connected to the hub "sees" everything that every other
computer on the hub sees. The hub itself is blissfully ignorant of the data being
transmitted. For years, simple hubs have been a quick and easy way to connect
computers in small networks.

A switch does essentially what a hub does but more efficiently. By paying attention to
the traffic that comes across it, it can "learn" where particular addresses are. For
example, if it sees traffic from machine A coming in on port 2, it now knows that
machine A is connected to that port and that traffic to machine A needs only to be sent
to that port and not any of the others. The net result of using a switch over a hub is
that most of the network traffic only goes where it needs to rather than to every port.
On busy networks this can make the network significantly faster.

"Varying degrees of magic happen inside the device, and therein


lies the difference."
A router is the smartest and most complicated of the bunch. Routers come in all
shapes and sizes from the small four-port broadband routers that are very popular right
now to the large industrial strength devices that drive the internet itself. A simple way
to think of a router is as a computer that can be programmed to understand, possibly
manipulate, and route the data it's being asked to handle. For example, broadband
routers include the ability to "hide" computers behind a type of firewall which involves
slightly modifying the packets of network traffic as they traverse the device. All routers
include some kind of user interface for configuring how the router will treat traffic. The
really large routers include the equivalent of a full-blown programming language to
describe how they should operate as well as the ability to communicate with other
routers to describe or determine the best way to get network traffic from point A to
point B.

A quick note on one other thing that you'll often see mentioned with these devices, and
that's network speed. Most devices now are capable of both 10 Mbps (10 megabits, or
million bits, per second) and 100 Mbps, and will automatically detect the speed. If the
device is labeled with only one speed, then it will only be able to communicate with
devices that also support that speed. 1000 Mbps or "gigabit" devices are slowly becoming
more common as well. Similarly, many devices now also include 802.11b or 802.11g
wireless transmitters that simply act like additional ports on the device.

Hubs operate at OSI layer 1 (the physical layer), switches operate at OSI layer 2 (the data
link layer), and routers operate at OSI layer 3 (the network layer).

HUB When Ethernet was originally designed it used a single fat coax cable called a backbone.
Individual hosts were physically connected to the backbone. This created a party line: each
host had to listen for the backbone to be idle before it started talking. It is possible for more
than one host to start talking at the same time; in that case the messages collide, making
them unintelligible. When this condition is detected, each transmitter stops talking and waits a
variable interval before attempting to talk again. The Ethernet network is called a collision
domain, since all devices must wait until the line is clear, and may inadvertently interfere
with one another.

When Ethernet was modified to run over Unshielded Twisted Pair (UTP) category-rated
wiring, the original coax backbone was shrunk into the hub, called a collapsed backbone.
Functionally, a hub operates exactly like the old coax backbone. The ports on the hub provide
a point-to-point connection to the Ethernet interface in each computer. With a hub, each
node must still wait for the network to be idle and detect collisions with other nodes.

SWITCH As Ethernet networks grew in speed and size the party line nature was recognized
as a performance limitation. Switches eliminate the collision domain and work much like the
telephone switching system. 

When an Ethernet packet arrives at the switch, the destination MAC address is examined and
the packet is switched to the proper port. Each Ethernet interface has a 48-bit Media Access
Control (MAC) address assigned by the hardware vendor. The switch remembers which MAC
addresses are connected to each port. If the switch does not know which port to use, it floods
the packet to all ports. When it gets a response, it updates its internal MAC address table.

This means port A can talk to port C at the same time port F is talking to port B. This greatly
increases overall performance even though it does not change the speed of individual
connections. Because the collision domain is eliminated, connections are able to use full
duplex: hosts can transmit and receive at the same time, improving performance even more.

ROUTER A router is used to interconnect multiple networks. The Internet is literally an
internetwork -- a network of networks. Internet routers work on IP addresses to determine
how best to connect the sender to the destination. Because routers work at the IP layer,
different physical networks can be interconnected: Ethernet, Token Ring, SONET, even the
RS-232 serial links used for dialup can carry IP packets.

Routers intended for home use include Network Address Translation (NAT). This allows a
single address assigned by the ISP to be shared by multiple hosts connected to the local
network.
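
The translation itself amounts to keeping a small table that maps private addresses and ports to the router's single public address. The sketch below (Python; the addresses and port numbers are made up for illustration) shows the idea, not any particular router's implementation:

public_ip = "203.0.113.7"   # example of the single address assigned by the ISP
nat_table = {}              # (private IP, private port) -> public source port
next_port = 40000

def translate_outbound(private_ip, private_port):
    """Rewrite the source of an outgoing packet so it appears to come
    from the router's public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return public_ip, nat_table[key]

print(translate_outbound("192.168.1.10", 52311))   # ('203.0.113.7', 40000)
print(translate_outbound("192.168.1.11", 52311))   # ('203.0.113.7', 40001)
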
Hub
A hub is the simplest of these devices. Any data packet coming from one
port is sent to all other ports. It is then up to the receiving computer to
decide if the packet is for it. Imagine packets going through a hub as
messages going into a mailing list. The mail is sent out to everyone and it is
up to the receiving party to decide if it is of interest.
The biggest problem with hubs is their simplicity. Since every packet is sent
out to every computer on the network, there is a lot of wasted transmission.
This means that the network can easily become bogged down.

Hubs are typically used on small networks where the amount of data going
across the network is never very high.

Bridge
A bridge goes one step up from a hub in that it looks at the destination of the
packet before sending. If the destination address is not on the other side of
the bridge it will not transmit the data.
A bridge only has one incoming and one outgoing port.

To build on the email analogy above, the bridge is allowed to decide if the
message should continue on. It reads the address bob@smith.com and
decides if there is a bob@smith.com on the other side. If there isn’t, the
message will not be transmitted.

Bridges are typically used to separate parts of a network that do not need to
communicate regularly, but still need to be connected.

Switch
A switch steps up from a bridge in that it has multiple ports. When a packet
comes through a switch it is read to determine which computer to send the
data to.
This leads to increased efficiency in that packets are not going to computers
that do not require them.

Now the email analogy has multiple people able to send email to multiple
users. The switch can decide where to send the mail based on the address.
Most large networks use switches rather than hubs to connect computers
within the same subnet.

Router
A router is similar to a switch in that it forwards packets based on an address.
But instead of the MAC address that a switch uses, a router uses the IP
address. This allows traffic to cross between different networks and link types.
The most common home use for routers is to share a broadband internet
connection. The router has a public IP address and that address is shared
with the network. When data comes through the router it is forwarded to the
correct computer.

This comparison to email gets a little off base. This would be similar to the
router being able to receive a packet as email and sending it to the user as a
fax.

Hub:

A hub is a small, simple, inexpensive device that joins multiple computers together. Its job is very
simple: anything that comes in one port is sent out to the others. That's it. This is a quick and easy
way to connect computers in small networks.

Hubs operate using a broadcast model.

Switch:

A switch is a small hardware device that joins multiple computers together within one local area
network (LAN). A switch generally contains more intelligence, and carries a slightly higher price,
than a hub. Switches are capable of inspecting data packets as they are received, determining the
source and destination device of each packet, and forwarding them appropriately. For example, if a
switch sees traffic from machine A coming in on port 2, it now knows that machine A is connected
to that port and that traffic to machine A needs only to be sent to that port and not any of the
others.

Switches operate using a virtual circuit model. Switching involves moving packets between devices
on the same network. Switches operate at layer 2 of the OSI Model.

A switch is able to determine where a packet should be sent by examining the MAC address within
the data link header of the packet (the MAC address is the hardware address of a network adapter).
A switch maintains a database of MAC addresses and what port they are connected to.

Router:

A router is a small hardware device that joins multiple networks together. These networks can
include wired or wireless home networks, and the Internet. A simple way to think of a router is as a
computer that can be programmed to understand, possibly manipulate, and route the data it's being
asked to handle.

Routing involves moving packets between different networks. Routers operate at layer 3 of the OSI
Model.

A router is able to determine where to send a packet using the network ID within the network layer
header. It then uses its routing table to determine the route to the destination host.
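
As a rough illustration of that lookup, the sketch below uses Python's standard ipaddress module to pick the most specific route whose network contains the destination (the routes themselves are invented):

import ipaddress

# Hypothetical routing table: network -> next hop
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "10.0.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "10.1.0.1",
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.1",   # default route
}

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    # Longest-prefix match: the most specific matching network wins.
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))   # 10.1.0.1  (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # 192.0.2.1 (default route)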

A Hub is, in its simplest form, just like a mains multiplug unit. There is no intelligence or circuitry in it.
More complex units may incorporate an amplifier or repeater. The network signal goes into one port
and out of all the others. This is a Layer 1 device.
A Switch has a small level of intelligence, in that it can open a frame, check the destination MAC
address, and direct the packets to the port on which the device with that address resides. It cannot
modify addresses or see addresses outside of the 'home' network. This is a Layer 2 device.
A Router can read IP addresses, and direct the messages to another network with IP addresses
different from those of the originating network. The router software can build up an address table,
so that it 'knows' where other devices are. This is a Layer 3 device.

Network Latency
What is network latency and how does it affect you?

Overview
This document discusses the following questions:

 What is packet latency and how is it measured?
 What does latency do to networks, client PCs, servers, and their associated traffic?
 What is an acceptable level of latency?
 How do you measure your latency?
 What do you do if your latency is too high?

What is Network Latency?


Definition of latency
For the purpose of this document, network latency is defined as the amount of
time it takes for a packet to cross the network from a device that created the
packet to the destination device. This is also known as end-to-end latency.
Delay could be considered a synonym for latency, but the word latency will be
used throughout this document.

The definition can be made more precise. For example, latency can be measured
starting with the first bit that leaves the transmitting host and ending when the
last bit of the packet enters the destination device. Fortunately, the amount of
time it takes for a device to read a packet from or write a packet to the
network doesn't contribute significantly to the overall latency.

Many things can contribute to the overall end-to-end latency. Also, there are
other network performance measurements that are related to latency. One
such measurement is jitter, or the change in end-to-end packet latencies over
time. This can have an effect on certain types of traffic.

What Causes Latency?


End-to-end latency is the cumulative effect of the individual latencies along the
end-to-end network path. Listed below are some of the typical components of
latency from a workstation to a server:

 workstation LAN
 WAN (if applicable)
 access router
 ISP link
 ISP network
 Path from ISP to Host
 Host internal network
Network routers are the devices that create the most latency of any device on
the end-to-end path. Routers can be found in each of the above network
segments. Packet queuing due to link congestion is most often the culprit for
large amounts of latency through a router. Some types of network technology
such as satellite communications add large amounts of latency because of the
time it takes for a packet to travel across the link. Since latency is cumulative,
the more links and router hops there are, the larger end-to-end latency will be.

How latency is measured


Here are the steps to measure latency:

 Note the time when the packet is transmitted – the transmitted time
 Note the time when the packet reaches the destination – the arrival time
 Subtract the transmitted time from the arrival time

Some difficulties with this process may be evident:

 What device is used for a universal clock to read the time for both the
device that sends the packet and the device that receives the packet?
 How do you track the time that a particular packet is sent and when it is received?
 How useful is measuring end-to-end latency with only one packet
through a complex network where there is ample opportunity for the
latency to change over time?
 What happens to latency under various amounts of traffic loads?

Round-trip latency

To solve the universal clock problem, some test equipment has the ability to
sync up with a GPS clock and place a timestamp inside the packet that is sent
to measure latency. The receiving device is a similar piece of equipment that
can also sync up with a GPS clock. This device then compares the time that
the packet is received with the timestamp in the received packet to obtain an
end-to-end latency measurement for that packet.

This option is very expensive. Fortunately, there is a cost-efficient method that
provides acceptable accuracy. If the path from the sender to the receiver is the same
as the path from the receiver back to the sender, the round-trip latency (latency
from the sender to the receiver and back) can be measured, and the end-to-end
latency can be assumed to be half of this result.
Measuring round-trip latency is easy and details of this will be covered later in
this document. Measuring round-trip latency means that all time comparisons
are made from the same device, which removes the need for devices to sync
to a common clock. It also solves the problem of keeping up with the send and
receive times for each packet since these times are all associated with one
packet in one device.
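
As a rough sketch of the round-trip approach, the Python snippet below times a TCP connection set-up to a host of your choice. It is not a true ICMP ping, and the host name here is only an example, but all timestamps come from the same clock on the measuring machine:

import socket
import time

def tcp_rtt(host, port=80, timeout=2.0):
    """Approximate round-trip latency (ms) by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

print(f"round-trip: {tcp_rtt('example.com'):.1f} ms")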

Statistical significance

Measuring the round-trip latency of one packet is not useful because latency
changes frequently. One good way to handle this variation in results is to
measure the round-trip latency for a number of packets and calculate an
average, maximum, and minimum for all these values.

Second-order latency values such as jitter (difference in latency values) or
standard deviation (difference from the average latency) can offer more
information as to the end-to-end network performance. However, these values
are considered insignificant compared with the average latency for the
purposes of this document.

The more packets that are measured to obtain an average, the better the
measurement. For practical reasons, measuring a minute or so of latency
values to obtain an average latency should be adequate for each end-to-end
latency measurement.
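
Once a batch of samples has been collected, the summary values are easy to compute. A small sketch using Python's statistics module (the sample values are invented):

import statistics

# Round-trip latency samples in milliseconds.
samples = [35.1, 2.9, 3.5, 7.4, 3.3, 9.1, 15.0, 4.5, 3.2, 10.6]

average = statistics.mean(samples)
spread = statistics.pstdev(samples)   # one way to express the variation between samples

print(f"avg={average:.2f} ms  min={min(samples):.2f} ms  "
      f"max={max(samples):.2f} ms  stddev={spread:.2f} ms")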

Latency and traffic load

Latency can change as the traffic load changes. As load increases, it is
possible that latency will increase, since buffers may begin to populate on the
path between the sender and receiver. Measuring latency while considering
network load can get complicated. To fully characterize the latency versus
load, measurements must be made at various network loads. Controlling the
network load during the measurement is difficult.

Fortunately, there are ways to simplify this process. Most enterprise networks
have a fairly predictable bandwidth usage pattern. This pattern will change as
new network applications are deployed and as more people use the network,
but from one day to the next there is little change in this pattern. Typically the
network bandwidth used is low from late afternoon until the beginning of the
business day. Network traffic will increase during the workday and there may
be a slight decline in utilization during lunch.
Knowing these patterns makes it possible to test network latency with known
levels of network utilization. To systematically measure latency, it is good to
sample latency throughout the day in regular intervals.

Effect of latency on networks


The overwhelming majority of network traffic falls into one of two types of
traffic – UDP (User Datagram Protocol) and TCP (Transmission Control
Protocol). The majority of this traffic tends to be TCP.

UDP latency effects


UDP is a protocol that defines how to form messages that are sent over IP. A
device that sends UDP packets assumes that they reach the destination, so
there is no mechanism to alert senders that the packet has arrived. UDP
traffic is typically used for streaming media applications where an occasional
lost packet does not matter.

Since the sender of UDP packets does not require any knowledge that the
destination received the packets, UDP is relatively immune to latency. The
only effect that latency has on a UDP stream is an increased delay of the
entire stream. Second-order effects such as jitter may affect some UDP
applications in a negative way, but these issues are outside the scope of this
document.

It is important to note that latency and throughput are completely independent
with UDP traffic. In other words, if latency goes up or down, UDP throughput
remains the same. This concept becomes more meaningful when considering the
effects of latency on TCP traffic.

Latency has no effect on the sending device with UDP traffic. With high amounts
of jitter, the receiving device may have to buffer the UDP packets longer to help
the application run smoothly.

TCP latency effects


TCP is more complicated than UDP. TCP is a guaranteed delivery protocol,
which means that the device that sends the packets is told that the packet did
or did not arrive at the destination. To make this work, a device that needs to
send packets to a destination must set up a session with the destination.
Once this session has been set up, the receiver tells the sender which
packets were received by sending an acknowledgement packet back to the
sender. If the sender does not receive an acknowledgement packet for some
packets after a length of time, the packets are resent.

In addition to providing guaranteed delivery of packets, TCP has the ability to
adjust to the network capacity by adjusting the 'window size'. The TCP window
is the amount of data a sender will transmit before waiting for an
acknowledgement. As acknowledgements arrive, the window size is increased.
As the window size increases, the sender may begin sending traffic at a rate
that the end-to-end path can't handle, resulting in packet loss. Once packet
loss is detected, the sender reacts by cutting the sending rate in half. Then the
process of increasing the window size begins again as more acknowledgements
are received.
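
A highly simplified sketch of that grow-and-halve behaviour is shown below (Python). Real TCP congestion control (slow start, congestion avoidance, fast recovery) is considerably more involved; this only illustrates the mechanism described above:

def adjust_window(window, acked):
    """Grow the window when an acknowledgement arrives; halve it on loss."""
    if acked:
        return window + 1          # open the window gradually
    return max(1, window // 2)     # loss detected: cut the rate in half

window = 1
for ack_received in [True, True, True, True, False, True, True]:
    window = adjust_window(window, ack_received)
print(window)   # window after a loss followed by two further acknowledgements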

As end-to-end latency increases, the sender may spend lots of time waiting on
acknowledgements instead of sending packets. In addition, the process of
adjusting the window size becomes slower since this process is dependent on
receiving acknowledgements.

Considering these inefficiencies, latency has a profound effect on TCP
bandwidth. Unlike UDP, TCP has a direct inverse relationship between
latency and throughput. As end-to-end latency increases, TCP throughput
decreases. The following table shows what happens to TCP throughput as
round-trip latency increases. This data was generated by using a latency
generator between two PCs connected via Fast Ethernet (full duplex). Note the
drastic reduction in TCP throughput as the latency increases.

Round-trip latency    TCP throughput
0 ms                  93.5 Mbps
30 ms                 16.2 Mbps
60 ms                 8.07 Mbps
90 ms                 5.32 Mbps

Table 1 - Effect of Latency on TCP Throughput
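
These figures roughly follow the simple bound throughput ≈ window size / round-trip latency. Assuming a typical 64-KB window (the window used in this test is not stated, so this is only an assumption), 64 KB is 524,288 bits, giving about 17.5 Mbps at 30 ms, 8.7 Mbps at 60 ms, and 5.8 Mbps at 90 ms, the same order of magnitude as the measured values above; the measured numbers are somewhat lower, likely because acknowledgement handling and window adjustment add further overhead.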

As latency increases, the sender may sit idle while waiting on
acknowledgements from the receiver. The receiver, however, must buffer
packets until all the packets can be assembled into a complete TCP message.
If the receiver is a server, this buffering effect can be complicated by the large
number of sessions that the server may be terminating. This increased use of
buffer memory can cause performance degradation in the server.
Packet loss compounds the problems that latency creates for TCP. Packet loss
causes the TCP window size to shrink, which may cause the sender to sit idle
longer while waiting for acknowledgements under high latency. Also,
acknowledgements may be lost, which causes the sender to wait until a timeout
occurs for the lost acknowledgement. If this happens, the associated packets will
be retransmitted even though they may have been delivered properly. The result
is that packet loss can further decrease TCP throughput.

The following table illustrates the effect of latency and packet loss on TCP
throughput. This data was generated by using a latency and packet loss
generator between two PCs connected via fast Ethernet (full duplex). The
packet loss rate was set to 2%, which means that 2% of packets were
discarded by the test equipment. Note that the TCP throughput values are
much lower in the presence of packet loss.

Round-trip latency    TCP throughput (no packet loss)    TCP throughput (2% packet loss)
0 ms                  93.50 Mbps                         3.72 Mbps
30 ms                 16.20 Mbps                         1.63 Mbps
60 ms                 8.07 Mbps                          1.33 Mbps
90 ms                 5.32 Mbps                          0.85 Mbps

Table 2 - Effect of Latency and 2% Packet Loss on TCP Throughput

Some packet loss is unavoidable. Even if your own network runs perfectly and
does not drop any packets, you cannot assume that the other networks along the
path operate as well.

Regardless of the situation, keep in mind that packet loss and latency have a
profoundly negative effect on TCP bandwidth and should be minimized as
much as possible.

Measuring latency with ping


Most operating systems include a network connectivity test tool called 'ping'.
Here is some example output from running ping on a Mac running OS X. Other
implementations of ping will produce similar output:

[macosxpowerbook] user% ping 152.1.1.1

PING 152.1.1.1 (152.1.1.1): 56 data bytes


64 bytes from 152.1.1.1: icmp_seq=0 ttl=254 time=35.068 ms

64 bytes from 152.1.1.1: icmp_seq=1 ttl=254 time=2.92 ms

64 bytes from 152.1.1.1: icmp_seq=2 ttl=254 time=3.45 ms

64 bytes from 152.1.1.1: icmp_seq=3 ttl=254 time=7.409 ms

64 bytes from 152.1.1.1: icmp_seq=4 ttl=254 time=3.319 ms

64 bytes from 152.1.1.1: icmp_seq=5 ttl=254 time=9.072 ms

64 bytes from 152.1.1.1: icmp_seq=6 ttl=254 time=14.982 ms

64 bytes from 152.1.1.1: icmp_seq=7 ttl=254 time=4.495 ms

64 bytes from 152.1.1.1: icmp_seq=8 ttl=254 time=3.193 ms

64 bytes from 152.1.1.1: icmp_seq=9 ttl=254 time=10.613 ms

^C

--- 152.1.1.1 ping statistics ---

10 packets transmitted, 10 packets received, 0% packet loss

round-trip min/avg/max = 2.92/9.452/35.068 ms

The following is a list of useful information that can be inferred from the
output:

 The average round trip latency is 9.452ms (summary info in last line).
 Ten 64 byte packets were sent.
 The latency values ranged from 2.92ms to 35.068ms (summary info in
last line).
 No packets were lost (0% packet loss in the next to last line).

Pinging can be run manually at any time to measure latency and is fairly non-
intrusive, so it should not affect any users. The tool is easily automated in
scripts since it can run from the command line.
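
A small sketch of such a script is shown below (Python calling the system ping and pulling the average from the summary line). The -c flag applies to macOS and Linux; Windows uses -n instead and formats its output differently, so treat this as a starting point rather than a portable tool:

import re
import subprocess

def average_rtt(host, count=10):
    """Run the system ping and return the average round-trip time in ms."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    # Summary line looks like: round-trip min/avg/max = 2.92/9.452/35.068 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise ValueError("could not parse ping output")
    return float(match.group(1))

print(average_rtt("152.1.1.1"))
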
Latency standards
Defining a level of latency that is deemed acceptable is difficult since it’s hard
to determine a threshold for user productivity based on application response
times. However, it is important to define typical end-to-end latency values so
you have a reasonable goal for latency.

Monitoring is important since it is helpful to know the typical latencies between
each remote site and the servers. An increase in end-to-end latency may be an
indicator of a network problem.

With these issues in mind, here are some “rules of thumb” for end-to-end
latency (between workstation and servers):

 A round trip end-to-end latency of 30ms or less for LEAs is healthy. This
measurement should be monitored to track any changes.
 Round trip latencies between 30ms and 50ms should be monitored. If
there are ways to lower end-to-end latency, they should be considered.
 Round trip latencies greater than 50ms require immediate attention to
determine the cause of the latency and possible remedies to lower the
end-to-end latency. Monitor this measurement to track the
improvements.

What to do if your latency is too high

Review the 'What Causes Latency?' section of this document. Note the components
of latency that can be controlled. These typically include the following items:

 workstation LAN
 WAN (if applicable)
 access router
 ISP link

Use ping to measure the latency through each of these components
independently. If one or more components appear to be contributing
significantly to latency, begin designing a strategy to reduce the latency in that
component. There are too many possibilities to list here.

If the high latency is outside of your control, report the problem to the parties
responsible for those network components. For example, report the issue to the
ISP to see if they can relieve the problem, and/or look into other ISP options.

Most network communications - including downloading files, browsing the Web, and reading
e-mail - use the TCP transport protocol. TCP is considered a reliable network protocol because
the recipient must confirm receipt of all transmissions. If a transmission isn't confirmed, it's
considered lost and will be retransmitted.

However, confirming transmissions can prevent TCP transfers from using all available
bandwidth. This happens because TCP breaks blocks of data into small pieces before
transmitting them, and recipients must confirm receipt of each piece of the data. The
number of pieces that can be sent before waiting to receive confirmation receipts is called
the TCP receive window size.

When TCP was designed, network bandwidth was very low by today's standards, and
communications were relatively unreliable. Therefore, waiting for confirmation for small
pieces of a data block did not have a significant impact on performance. However, now that
bandwidth is measured in Mbps instead of Kbps, a small TCP receive window size can slow
communication significantly while the computer sending a data block waits for the receiving
computer to send confirmation receipts.


TCP works well and does indeed provide reliable transfers over a variety of networks.
However, waiting to confirm each portion of a data block causes a slight delay each time a
confirmation is required. Just how much delay depends on two factors:

 Network latency. Network latency is the delay, typically measured in milliseconds, for a
packet to be sent to a computer and for a response to be returned. Latency is also
called the round-trip time (RTT). If latency is so high that the sending computer is
waiting to receive confirmation receipts, latency has a direct impact on transfer
speed, because nothing is being transmitted while the sending computer waits for the
confirmations.
 How much of the file can be transferred before waiting for confirmation
(the TCP receive window size). The smaller the TCP receive window size, the more
frequently the sending computer might have to wait for confirmation. Therefore,
smaller TCP receive window sizes can cause slower network performance because
the sender has to wait for confirmations to be received. Larger TCP receive window
sizes can improve performance by reducing the time spent waiting for confirmations.

As you can see, high network latency can hurt performance, especially when combined with
small TCP receive window sizes. Computers can reduce the negative impact of high-latency
networks by increasing the TCP receive window size. However, versions of Windows prior to
Windows Vista used a static, small, 64-KB receive window. This setting was fine for low-
bandwidth and low-latency links, but it offered limited performance on high-bandwidth,
high-latency links: the maximum throughput of a TCP connection with the default receive
window of 64 KB can be as low as 5 Mbps even within a single continent and can go all the
way down to 1 Mbps on a satellite link.
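
This limitation follows from the bandwidth-delay product: a connection can have at most one receive window of data in flight per round trip, so a fixed window caps throughput at roughly window size / RTT. A quick back-of-the-envelope check (Python; the RTT figures are illustrative):

WINDOW_BYTES = 64 * 1024   # the static pre-Vista receive window

def max_throughput_mbps(rtt_ms):
    """Rough upper bound on single-connection TCP throughput with a fixed window."""
    return WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1_000_000

print(f"100 ms RTT (cross-continent): about {max_throughput_mbps(100):.1f} Mbps")
print(f"500 ms RTT (satellite):       about {max_throughput_mbps(500):.1f} Mbps")
# Roughly 5 Mbps and 1 Mbps, in line with the figures quoted above.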

Windows Vista and Windows 7 include an auto-tuning capability for TCP receive window size
that is enabled by default. Every TCP connection can benefit in terms of increased
throughput and decreased transfer times, but high-bandwidth, high-latency connections will
benefit the most. Therefore, Receive Window Auto-Tuning can benefit network performance
significantly across both satellite and WAN links. However, performance on high-speed LANs
where latency is very low will benefit less.

Receive Window Auto-Tuning continually determines the optimal receive window size on a
per-connection basis by measuring the bandwidth-delay product (the bandwidth multiplied
by the latency of the connection) and the application retrieve rate, and it automatically
adjusts the maximum receive window size on an ongoing basis. For auto-tuning to
dramatically improve the throughput on a connection, all of the following conditions must be
true:

 High-latency connection. For example, RTTs greater than 100 ms.
 High-bandwidth connection. For example, greater than 5 Mbps.
 The application does not specify a receive buffer size. Some applications may
explicitly specify a receive buffer size, which overrides the Windows default
behavior. This can offer similar benefits on older versions of Windows, but changing
the receive buffer size is uncommon.
 The application consumes data quickly after receiving it. If an application does
not immediately retrieve the received data, Receive Window Auto-Tuning may not
increase overall performance. For example, if the application retrieves received data
from TCP only periodically rather than continually, overall performance might not
increase.

When TCP considers increasing the receive window size, it pays attention to the connection's
past history and characteristics. TCP won't advertise more than the remote host's fair share
of network bandwidth. This keeps the advertised receive window in line with the remote
host's congestion window, discouraging network congestion while encouraging maximum
utilization of the available bandwidth.
The Windows Vista and Windows 7 TCP/IP stacks support the following RFCs to optimize
throughput in high-loss environments:

 RFC 2582: The NewReno Modification to TCP's Fast Recovery Algorithm. The
NewReno algorithm provides faster throughput by changing the way that a sender
can increase the sending rate when multiple segments in a window of data are lost
and the sender receives a partial acknowledgment (an acknowledgment for only part
of the data that is successfully received). You can find this RFC at
http://www.ietf.org/rfc/rfc2582.txt.
 RFC 2883: An Extension to the Selective Acknowledgment (SACK) Option for
TCP. SACK, defined in RFC 2018, allows a receiver to indicate up to four
noncontiguous blocks of received data. RFC 2883 defines an additional use of the
fields in the SACK TCP option to acknowledge duplicate packets. This allows the
receiver of the TCP segment containing the SACK option to determine when it has
retransmitted a segment unnecessarily and adjust its behavior to prevent future
retransmissions. The fewer retransmissions sent, the better the overall throughput.
You can find this RFC at http://www.ietf.org/rfc/rfc2883.txt.
 RFC 3168: The Addition of Explicit Congestion Notification (ECN) to IP If a
packet is lost in a TCP session, TCP assumes that it is caused by network congestion.
In an attempt to alleviate the source of the problem, TCP lowers the sender's
transmission rate. With ECN support on both TCP peers and in the routing
infrastructure, routers experiencing congestion mark the packets as they forward
them. This enables computers to lower their transmission rate before packet loss
occurs, increasing the throughput. Windows Vista and Windows 7 support ECN, but it
is disabled by default. You can enable ECN support with the following command.
netsh interface tcp set global ecncapability=enabled
You can find this RFC at http://www.ietf.org/rfc/rfc3168.txt.
 RFC 3517: A Conservative Selective Acknowledgment (SACK)-based Loss
Recovery Algorithm for TCP The implementation of TCP/IP in Windows Server
2003 and Windows XP uses SACK information only to determine which TCP segments
have not arrived at the destination. RFC 3517 defines a method of using SACK
information to perform loss recovery when duplicate acknowledgments are received,
replacing the fast recovery algorithm when SACK is enabled on a connection.
Windows Vista and Windows 7 keep track of SACK information on a per-connection
basis and monitor incoming acknowledgments and duplicate acknowledgments to
recover more quickly when multiple segments are not received at the destination.
You can find this RFC at http://www.ietf.org/rfc/rfc3517.txt.
 RFC 4138: Forward RTO-Recovery (F-RTO): An Algorithm for Detecting
Spurious Retransmission Timeouts with TCP and the Stream Control
Transmission Protocol (SCTP) Spurious retransmissions of TCP segments can
occur with a sudden and temporary increase in the RTT. The Forward Retransmission
Timeout (F-RTO) algorithm prevents spurious retransmission of TCP segments. In
environments that have sudden and temporary increases in the RTT, such as when a
wireless client roams from one wireless access point to another, F-RTO prevents
unnecessary retransmission of segments and lets the sender return more quickly to
its normal sending rate. You can find this RFC at http://www.ietf.org/rfc/rfc4138.txt.

What does latency have to do with transfer speed?

There is no direct relationship between latency and transfer speed. The latency, or RTT (round-
trip time), measures how quickly a small packet can get from your computer to a server and back;
however, it does not measure how much data (how many packets) can be transferred in a given
period of time. Two different ISPs with the same transfer speed, or even the same computer at
different times and when connecting to different servers, can give you very different RTT values.
Latency is related to distance, as well as your ISP's peering arrangements, network congestion at
the time, and so on; however, it does not relate to your available bandwidth or transfer speed.

With all that said, there is an indirect relationship between transfer speed and latency. For
example, a packet takes time to travel from a server to the client, and there is a limited number
of packets that can be sent (in a TCP/IP data transfer) before the server stops and awaits
acknowledgement of already-received packets in order to continue, so excessive latency can
have a negative effect on transfer speed, especially with untweaked PCs on a broadband
Internet connection.

Long latencies
Latency is the time it takes for a small packet of data to traverse from one end point to another on a
connected network. Long latencies can have a severely negative effect on the ability of a single TCP
network connection to utilize the available bandwidth. Long latencies and interactive use of web sites
are usually a bad combination, though all things are relative. The biggest problem with long latencies
is that they usually cannot be removed, and the delay can be as noticeable at the computer level as it
is with live transcontinental video interviews on TV news. Worse is the effect they have on large data
transfers (big files): without specific help in the form of network acceleration, very large latencies like
those seen in global and satellite communications can make for a very frustrating experience.
