Hub, Bridge, Switch, Router & Latency
A hub is typically the least expensive, least intelligent, and least complicated of the
three. Its job is very simple: anything that comes in on one port is sent out to the others.
That's it. Every computer connected to the hub "sees" everything that every other
computer on the hub sees. The hub itself is blissfully ignorant of the data being
transmitted. For years, simple hubs have been quick and easy ways to connect
computers in small networks.
A switch does essentially what a hub does but more efficiently. By paying attention to
the traffic that comes across it, it can "learn" where particular addresses are. For
example, if it sees traffic from machine A coming in on port 2, it now knows that
machine A is connected to that port and that traffic to machine A needs to be sent only
to that port and not any of the others. The net result of using a switch over a hub is
that most of the network traffic only goes where it needs to rather than to every port.
On busy networks this can make the network significantly faster.
A quick note on one other thing that you'll often see mentioned with these devices and
that's network speed. Most devices now are capable of both 10 Mbps (10 megabits, or
million bits, per second) and 100 Mbps, and will automatically detect the speed. If
the device is labeled with only one speed, then it will only be able to communicate with
devices that also support that speed. 1000 Mbps or "gigabit" devices are slowly
becoming more common as well. Similarly, many devices now also include 802.11b
or 802.11g wireless transmitters that simply act like additional ports to the device.
Hubs operate at OSI layer 1 (physical layer), switches operate at OSI layer 2 (data link
layer), and routers operate at OSI layer 3 (network layer).
HUB When Ethernet was originally designed it used a single fat coax called a backbone.
Individual hosts were physically connected to the backbone. This created a party line. Each
host has to listen for the backbone to be idle before it starts talking. It is possible that more
than one host will start talking at the same time; in that case the messages collide, making
them unintelligible. When this condition is detected, each transmitter stops talking and waits a
variable interval before attempting to talk again. The Ethernet network is called a collision
domain, since all devices must wait until the line is clear, and may inadvertently interfere
with one another.
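The wait-a-variable-interval behavior described above is binary exponential backoff. A minimal sketch of the slot-count choice (the function name is illustrative; slot limits follow classic Ethernet, which caps the exponent at 10):

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Pick a random wait, in slot times, after the given collision count.

    After the n-th collision a host waits a random number of slot times
    in [0, 2**min(n, 10) - 1], so repeated collisions spread senders
    further and further apart in time.
    """
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

# After the 1st collision a host waits 0 or 1 slots; after the 3rd, 0..7 slots.
print(backoff_slots(1), backoff_slots(3))
```

Doubling the window after each collision is what lets many hosts share one collision domain without coordination: the more crowded the wire, the longer the average wait becomes.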
When Ethernet was modified to run over Unshielded Twisted Pair (UTP) Category rated
wiring the original coax backbone was shrunk within the hub, called a collapsed backbone.
Functionally a hub operates exactly as the old coax backbone. The ports on the hub provide
a point-to-point connection to the Ethernet interface in each computer. With a hub each
node must wait for the network to be idle and detect collisions between multiple nodes.
SWITCH As Ethernet networks grew in speed and size the party line nature was recognized
as a performance limitation. Switches eliminate the collision domain and work much like the
telephone switching system.
When an Ethernet packet arrives at the switch the destination MAC address is examined and
the packet is switched to the proper port. Each Ethernet interface has a Media Access
Control (MAC) 48-bit address assigned by the hardware vendor. The switch remembers
which MAC addresses are connected to each port. If the Switch does not know which port to
use it floods the packet to all ports. When it gets a response it updates its internal MAC
address table.
This means Port A can talk to C at the same time F is talking to B. This greatly increases
overall performance even though it does not change the speed of individual connections.
Because the collision domain is eliminated, connections are able to use full duplex: hosts can
transmit and receive at the same time, improving performance even more.
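The learn-and-forward behavior described above can be sketched as a toy model (class and variable names are illustrative; real switches keep this table in dedicated hardware):

```python
class LearningSwitch:
    """Toy model of a switch's MAC address table."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # learned MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: the source address is reachable via the ingress port.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes to one port; unknown is flooded.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame("aa:aa", "bb:bb", in_port=2))  # unknown dst: flood to [1, 3, 4]
print(sw.handle_frame("bb:bb", "aa:aa", in_port=3))  # aa:aa was learned on port 2: [2]
```

Note how the reply frame both answers the flood and teaches the switch where "bb:bb" lives, so subsequent traffic in either direction takes a single port.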
Routers intended for home use include Network Address Translation (NAT). This allows a
single address assigned by the ISP to be shared by multiple hosts connected to the local
network.
Hub
A hub is the simplest of these devices. Any data packet coming from one
port is sent to all other ports. It is then up to the receiving computer to
decide if the packet is for it. Imagine packets going through a hub as
messages going into a mailing list. The mail is sent out to everyone and it is
up to the receiving party to decide if it is of interest.
The biggest problem with hubs is their simplicity. Since every packet is sent
out to every computer on the network, there is a lot of wasted transmission.
This means that the network can easily become bogged down.
Hubs are typically used on small networks where the amount of data going
across the network is never very high.
Bridge
A bridge goes one step beyond a hub in that it looks at the destination of the
packet before sending it. If the destination address is not on the other side of
the bridge, it will not transmit the data.
A bridge only has one incoming and one outgoing port.
To build on the email analogy above, the bridge is allowed to decide if the
message should continue on. It reads the address bob@smith.com and
decides if there is a bob@smith.com on the other side. If there isn’t, the
message will not be transmitted.
Bridges are typically used to separate parts of a network that do not need to
communicate regularly, but still need to be connected.
Switch
A switch improves on a bridge in that it has multiple ports. When a packet
comes through a switch, it is read to determine which computer to send the
data to.
This leads to increased efficiency in that packets are not going to computers
that do not require them.
Now the email analogy has multiple people able to send email to multiple
users. The switch can decide where to send the mail based on the address.
Most large networks use switches rather than hubs to connect computers
within the same subnet.
Router
A router is similar to a switch in that it forwards packets based on address.
But instead of the MAC address that a switch uses, a router uses the IP
address. This allows traffic to be forwarded between different networks.
The most common home use for routers is to share a broadband internet
connection. The router has a public IP address and that address is shared
with the network. When data comes through the router it is forwarded to the
correct computer.
The email comparison gets a little stretched here: this would be similar to the
router being able to receive a packet as email and send it to the user as a
fax.
Hub :
A hub is a small, simple, inexpensive device that joins multiple computers together. Its job is very
simple: anything that comes in one port is sent out to the others. That’s it. This is quick and easy
Switch :
A switch is a small hardware device that joins multiple computers together within one local area
network (LAN). A switch generally contains more intelligence, and carries a slightly higher price, than a hub.
Switches are capable of inspecting data packets as they are received, determining the source and
destination device of each packet, and forwarding them appropriately. For example, if a switch sees traffic
from machine A coming in on port 2, it now knows that machine A is connected to that port and that
traffic to machine A needs to be sent only to that port and not any of the others.
Switches operate using a virtual circuit model. Switching involves moving packets between devices on
the same network by examining the data link header of the packet (the MAC address is the hardware
address of a network adapter). A switch maintains a database of MAC addresses and the port each one
is connected to.
Router :
A router is a small hardware device that joins multiple networks together. These networks can include
wired or wireless home networks, and the Internet. A simple way to think of a router is as a computer
that can be programmed to understand, possibly manipulate, and route the data it's being asked to
handle.
Routing involves moving packets between different networks. Routers operate at the network layer (Layer 3).
A router is able to determine where to send a packet using the Network ID within the Network layer
header. It then uses the routing table to determine the route to the destination host.
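The routing-table lookup described above amounts to a longest-prefix match: the most specific network that contains the destination address wins. A minimal sketch using Python's standard ipaddress module, with a hypothetical routing table:

```python
import ipaddress

# Hypothetical routing table: destination network -> next hop / interface.
routes = {
    ipaddress.ip_network("192.168.1.0/24"): "eth0 (local LAN)",
    ipaddress.ip_network("10.0.0.0/8"):     "10.0.0.1 (corporate WAN)",
    ipaddress.ip_network("0.0.0.0/0"):      "203.0.113.1 (ISP default gateway)",
}

def route(dst):
    """Return the next hop for dst: the longest matching prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(route("192.168.1.7"))  # matched by the /24: delivered on the local LAN
print(route("8.8.8.8"))      # only the /0 default route matches
```

The 0.0.0.0/0 entry is the default route; because its prefix length is zero, it matches everything but loses to any more specific entry, which is exactly the behavior a home router relies on to send unknown destinations to the ISP.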
A Hub is, in its simplest form, just like a mains multiplug unit. There is no intelligence or circuitry in it.
More complex units may incorporate an amplifier or repeater. The network signal goes into one port
and out of all the others. This is a Layer 1 device.
A Switch has a small level of intelligence, in that it can open a message, check the destination MAC
address, and direct the message packets to the port on which the device with that address resides. It
cannot modify addresses or see addresses outside of the 'home' network. This is a Layer 2
device.
A Router can read IP addresses, and direct the messages to another network with different IP
addresses to the originating network. The Router software can build up an address table, so that it
'knows' where other devices are. This is a Layer 3 device.
Network Latency
What is network latency and how does it affect you?
Overview
This document discusses what network latency is, how it can be measured,
and how it affects network traffic.
Many things can contribute to the overall end-to-end latency. Also, there are
other network performance measurements that are related to latency. One
such measurement is jitter, or the change in end-to-end packet latencies over
time. This can have an effect on certain types of traffic.
The end-to-end path between a workstation and a remote host typically
includes the following segments:
workstation LAN
WAN (if applicable)
access router
ISP link
ISP network
Path from ISP to Host
Host internal network
Network routers are the devices that create the most latency of any device on
the end-to-end path. Routers can be found in each of the above network
segments. Packet queuing due to link congestion is most often the culprit for
large amounts of latency through a router. Some types of network technology
such as satellite communications add large amounts of latency because of the
time it takes for a packet to travel across the link. Since latency is cumulative,
the more links and router hops there are, the larger end-to-end latency will be.
Measuring end-to-end latency directly raises several questions:
What device is used as a universal clock to read the time for both the
device that sends the packet and the device that receives the packet?
How can the time that a particular packet is sent, and the time it is
received, be tracked for each packet?
How useful is measuring end-to-end latency with only one packet
through a complex network where there is ample opportunity for the
latency to change over time?
What happens to latency under various amounts of traffic loads?
Round-trip latency
To solve the universal clock problem, some test equipment has the ability to
sync up with a GPS clock and place a timestamp inside the packet that is sent
to measure latency. The receiving device is a similar piece of equipment that
can also sync up with a GPS clock. This device then compares the time that
the packet is received with the timestamp in the received packet to obtain an
end-to-end latency measurement for that packet.
This option is very expensive. Fortunately, there is a cost efficient method that
provides acceptable accuracy. If the path from the sender to the receiver is the same
as the path from the receiver to the sender, the round-trip latency (latency
from the sender to the receiver and back) can be measured, and the one-way
end-to-end latency can be assumed to be half of this result.
Measuring round-trip latency is easy and details of this will be covered later in
this document. Measuring round-trip latency means that all time comparisons
are made from the same device, which removes the need for devices to sync
to a common clock. It also solves the problem of keeping up with the send and
receive times for each packet since these times are all associated with one
packet in one device.
Statistical significance
Measuring the round-trip latency of one packet is not useful because latency
changes frequently. One good way to handle this variation in results is to
measure the round-trip latency for a number of packets and calculate an
average, maximum, and minimum for all these values.
The more packets that are measured to obtain an average, the better the
measurement. For practical reasons, measuring a minute or so of latency
values to obtain an average latency should be adequate for each end-to-end
latency measurement.
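The averaging approach above takes only a few lines; a sketch with hypothetical round-trip samples:

```python
# Hypothetical round-trip latency samples, in milliseconds.
samples_ms = [3.1, 9.8, 8.4, 12.0, 7.7, 35.1, 6.2, 9.9, 5.0, 8.3]

avg_rtt = sum(samples_ms) / len(samples_ms)
print(f"min {min(samples_ms)} ms, avg {avg_rtt:.1f} ms, max {max(samples_ms)} ms")

# If the forward and return paths are the same, one-way latency
# is roughly half the round-trip value.
print(f"estimated one-way latency: {avg_rtt / 2:.2f} ms")
```

Reporting minimum and maximum alongside the average matters: a single 35 ms outlier in otherwise single-digit samples is invisible in the mean but may point at transient congestion.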
Fortunately, there are ways to simplify this process. Most enterprise networks
have a fairly predictable bandwidth usage pattern. This pattern will change as
new network applications are deployed and as more people use the network,
but from one day to the next there is little change in this pattern. Typically the
network bandwidth used is low from late afternoon until the beginning of the
business day. Network traffic will increase during the workday and there may
be a slight decline in utilization during lunch.
Knowing these patterns makes it possible to test network latency with known
levels of network utilization. To systematically measure latency, it is good to
sample latency throughout the day in regular intervals.
Since the sender of UDP packets does not require any knowledge that the
destination received the packets, UDP is relatively immune to latency. The
only effect that latency has on a UDP stream is an increased delay of the
entire stream. Second-order effects such as jitter may affect some UDP
applications in a negative way, but these issues are outside the scope of this
document.
There are no effects of latency on the sending device with UDP traffic. The
receiving device may have to buffer the UDP packets longer with high
amounts of jitter to help the application run better.
As end-to-end latency increases, the sender may spend lots of time waiting on
acknowledgements instead of sending packets. In addition, the process of
adjusting the window size becomes slower since this process is dependent on
receiving acknowledgements.
The following table illustrates the effect of latency and packet loss on TCP
throughput. This data was generated by using a latency and packet loss
generator between two PCs connected via fast Ethernet (full duplex). The
packet loss rate was set to 2%, which means that 2% of packets were
discarded by the test equipment. Note that the TCP throughput values are
much lower in the presence of packet loss.
Some packet loss is unavoidable. Even if one network runs perfectly and does not
drop any packets, it cannot be assumed that other networks along the path operate
as well.
Regardless of the situation, keep in mind that packet loss and latency have a
profoundly negative effect on TCP bandwidth and should be minimized as
much as possible.
The following is a list of useful information that can be inferred from the
output:
The average round trip latency is 9.452ms (summary info in last line).
Ten 64-byte packets were sent.
The latency values ranged from 2.92ms to 35.068ms (summary info in
last line).
No packets were lost (0% packet loss in the next to last line).
Pinging can be run manually at any time to measure latency and is fairly non-
intrusive, so it should not affect any users. The tool is easily automated in
scripts since it can run from the command line.
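A script that automates ping usually just needs the summary line. A sketch that parses a Linux-style rtt summary (the line format is assumed; the min/avg/max values match the example figures above, while the mdev value is illustrative):

```python
import re

# Example Linux-style ping summary line (format assumed; mdev illustrative).
summary = "rtt min/avg/max/mdev = 2.920/9.452/35.068/9.120 ms"

def parse_rtt(line):
    """Pull the min/avg/max round-trip times (in ms) out of a ping summary line."""
    m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/", line)
    return tuple(float(x) for x in m.groups())

rtt_min, rtt_avg, rtt_max = parse_rtt(summary)
print(rtt_min, rtt_avg, rtt_max)  # 2.92 9.452 35.068
```

Logging these three numbers at regular intervals through the day gives exactly the utilization-aware latency samples the previous section recommends.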
Latency standards
Defining a level of latency that is deemed acceptable is difficult since it’s hard
to determine a threshold for user productivity based on application response
times. However, it is important to define typical end-to-end latency values so
you have a reasonable goal for latency.
With these issues in mind, here are some “rules of thumb” for end-to-end
latency (between workstation and servers):
A round trip end-to-end latency of 30ms or less for LEAs is healthy. This
measurement should be monitored to track any changes.
Round trip latencies between 30ms and 50ms should be monitored. If
there are ways to lower end-to-end latency, they should be considered.
Round trip latencies greater than 50ms require immediate attention to
determine the cause of the latency and possible remedies to lower the
end-to-end latency. Monitor this measurement to track the
improvements.
workstation LAN
WAN (if applicable)
access router
ISP link
If high latency is outside of your control, report the problem to the parties
responsible for those network components. If this is the case, report the issue
to the ISP to see if they can relieve the problem and/or look into other options
for an ISP.
Most network communications, including downloading files, browsing the Web, and reading
e-mail, use the TCP transport protocol (Layer 4). TCP is considered a reliable network protocol because
the recipient must confirm receipt of all transmissions. If a transmission isn't confirmed, it's
considered lost and will be retransmitted.
However, confirming transmissions can prevent TCP transfers from using all available
bandwidth. This happens because TCP breaks blocks of data into small pieces before
transmitting them, and recipients must confirm receipt of each piece of the data. The
amount of data that can be sent before waiting to receive confirmation receipts is called
the TCP receive window size.
When TCP was designed, network bandwidth was very low by today's standards, and
communications were relatively unreliable. Therefore, waiting for confirmation for small
pieces of a data block did not have a significant impact on performance. However, now that
bandwidth is measured in Mbps instead of Kbps, a small TCP receive window size can slow
communication significantly while the computer sending a data block waits for the receiving
computer to send confirmation receipts.
TCP works well and does indeed provide reliable transfers over a variety of networks.
However, waiting to confirm each portion of a data block causes a slight delay each time a
confirmation is required. Just how much delay depends on two factors:
As you can see, high network latency can hurt performance, especially when combined with
small TCP receive window sizes. Computers can reduce the negative impact of high-latency
networks by increasing the TCP receive window size. However, versions of Windows prior to
Windows Vista used a static, small, 64-KB receive window. This setting was fine for low-
bandwidth and low-latency links, but it offered limited performance on high-bandwidth,
high-latency links. The maximum throughput a TCP connection can achieve depends on the
static value of the receive window and the latency of the link. For example, the maximum
throughput of a TCP connection with the default receive window of 64 KB can be as low as
5 Mbps even within a single continent and can go all the way down to 1 Mbps on a satellite link.
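The 5 Mbps and 1 Mbps figures above follow from a simple ceiling: a sender can move at most one receive window of data per round trip. A quick sketch of that arithmetic:

```python
# Rough TCP throughput ceiling: one receive window per round trip.
def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1_000_000

window = 64 * 1024  # the static 64-KB window of pre-Vista Windows
print(f"100 ms RTT: {max_throughput_mbps(window, 0.100):.1f} Mbps")            # ~5.2 Mbps
print(f"500 ms RTT (satellite): {max_throughput_mbps(window, 0.500):.1f} Mbps")  # ~1.0 Mbps
```

Read in reverse, the same formula gives the bandwidth-delay product: the window needed to fill a given link is its bandwidth multiplied by the round-trip latency, which is what receive-window auto-tuning estimates.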
Windows Vista and Windows 7 include an auto-tuning capability for TCP receive window size
that is enabled by default. Every TCP connection can benefit in terms of increased
throughput and decreased transfer times, but high-bandwidth, high-latency connections will
benefit the most. Therefore, Receive Window Auto-Tuning can benefit network performance
significantly across both satellite and WAN links. However, performance on high-speed LANs
where latency is very low will benefit less.
Receive Window Auto-Tuning continually determines the optimal receive window size on a
per-connection basis by measuring the bandwidth-delay product (the bandwidth multiplied
by the latency of the connection) and the application retrieve rate, and it automatically
adjusts the maximum receive window size on an ongoing basis. For auto-tuning to
dramatically improve the throughput on a connection, all of the following conditions must be
true:
When TCP considers increasing the receive window size, it pays attention to the connection's
past history and characteristics. TCP won't advertise more than the remote host's fair share
of network bandwidth. This keeps the advertised receive window in line with the remote
host's congestion window, discouraging network congestion while encouraging maximum
utilization of the available bandwidth.
The Windows Vista and Windows 7 TCP/IP stacks support the following RFCs to optimize
throughput in high-loss environments:
With all that said, there is an indirect relationship between transfer speed and latency. For
example, a packet takes time to travel from a server to the client, and there is a limited number
of packets that can be sent (in a TCP/IP data transfer) before the server stops and awaits
acknowledgement for already received packets in order to continue, so excessive latency can
have a negative effect on transfer speed, especially with untweaked PCs on
a broadband Internet connection.
Long latencies
Latency is the time it takes for a small packet of data to traverse from one end point to another on a
connected network. Long latencies can have a severely negative effect on the ability for a single TCP
network connection to utilize the available bandwidth. Long latencies and interactive use of web sites are
usually a bad idea, though all things are relative. The biggest problem with long latencies is that they
usually cannot be removed, and the delay can be as noticeable at the computer level as it is with live
transcontinental video interviews on TV news. Worse is the effect on large data transfers (big files):
without specific help such as network acceleration, very large latencies like those seen in global and
satellite communications can make for a very frustrating experience.