
Unit - V

Contents
• Datagram and Virtual-Circuit Networks
• Forwarding and Routing
• Routing Protocols:
– Flooding
– Shortest path routing
– Distance vector routing
– Count to Infinity problem
– Link state routing
– Hierarchical routing
– Multicasting
– Broadcasting
• Internet Control Protocols:
– ICMP
– ARP
– RARP
– DHCP
• Congestion Control in Virtual-circuit and Datagram Subnets
• Traffic Shaping:
– Leaky-Bucket and
– Token-Bucket algorithms
Internetworking
• Internetworking is a combination of two words, inter and networking,
and implies a connection between totally different nodes or
segments.
• This interconnection is often among or between public, private,
commercial, industrial, or governmental networks.
• It could be an assortment of individual networks, connected by
intermediate networking devices, that function as one giant
network.
• To enable communication, every individual network node is
configured with the same protocol or communication logic.
• When a network communicates with another network that follows
consistent communication procedures, it is called
Internetworking.
• Internetworking is implemented in Layer 3 (the Network Layer)
of the ISO-OSI model.
• The foremost notable example of internetworking is the
Internet. 
• The purpose of interconnecting networks is to allow users
on any of them to communicate with users on all the others,
and to allow users on any of them to access data on
any of them. Accomplishing this goal means sending packets
from one network to another.
• Two styles of internetworking are possible: a connection-
oriented concatenation of virtual-circuit subnets, and a
datagram internet style.
Comparison of Virtual-Circuit and Datagram
Subnets

Forwarding and Routing
• Routing is the process of moving data from one device to
another. In most cases, routing is performed by a
networking device called a router (or layer-3 switch).
Additionally, a router can forward packets too.
• Routing is sometimes referred to as forwarding, but it is
important to note that routing is a different process from
forwarding.
• Forwarding is the process of taking a packet that arrives at one
router and sending it on toward the next router along its path.
• In routing, a router is responsible for deciding the path. In
forwarding, unlike the routing process, the router performs no
path computation; it simply sends the packets that arrive at an
intermediate router on to the attached network indicated by its table.
• Both routing and forwarding are performed by the network
layer.
Routing Algorithms
• The main function of the network layer is routing packets from the
source node to the destination node.
• For this, the routers are being used.
• In most networks, packets will require multiple hops to make the
journey.
• The algorithms that choose the routes are a major area of network layer
design.
• The routing algorithm is that part of the network layer software
responsible for deciding which output line an incoming packet should be
transmitted on.
• If the network uses datagrams internally, this decision must be made
again for every arriving data packet since the best route may have
changed since last time.
• If the network uses virtual circuits internally, routing decisions are
made only when a new virtual circuit is being set up. Thereafter, data
packets just follow the already established route.
• The routers in internetworking maintain routing tables.
• The routing process is responsible for filling in and updating
the routing tables.
• Certain properties are desirable in a routing algorithm:
correctness, simplicity, robustness, stability, fairness, and
efficiency.
• Routing algorithms can be grouped into two major classes:
nonadaptive (or static) and adaptive (or dynamic).
• Nonadaptive algorithms do not base their routing decisions on
any measurements or estimates of the current topology and
traffic. Instead, the routes are computed in advance, offline,
and downloaded to the routers when the network is booted.
This procedure is called static routing. Because it does not
respond to failures, static routing is mostly useful for situations
in which the routing choice is clear.
• Adaptive algorithms, in contrast, change their routing decisions
to reflect changes in the topology, and sometimes changes in the
traffic as well. These dynamic routing algorithms differ in where
they get their information, when they change the routes, and what
metric is used for optimization.
Flooding
• Flooding is a simple local technique in which every
incoming packet is sent out on every outgoing line except the
one it arrived on.
• It generates vast numbers of duplicate packets, in fact, an
infinite number unless some measures are taken to damp the
process.
• One such measure is to have a hop counter contained in the
header of each packet that is decremented at each hop, with the
packet being discarded when the counter reaches zero.
• Ideally, the hop counter should be initialized to the length of the
path from source to destination. If the sender does not know
how long the path is, it can initialize the counter to the worst
case, namely, the full diameter of the network.
• A better technique for damming the flood is to have routers keep
track of which packets have been flooded, to avoid sending them
out a second time.
• This can be achieved by having the source routers put a
sequence number in each packet they send, with each
router maintaining a list of the sequence numbers it has seen
from each source.
• If an incoming packet is on the list, it is not flooded.
• To prevent the list from growing without bound, each list should
be augmented by a counter, k, meaning that all sequence
numbers through k have been seen.
• When a packet comes in, it is easy to check whether it has
already been flooded, by comparing its sequence number to k; if
so, it is discarded.
• Flooding is not practical in most cases, but it does have some
important uses.
• First, it ensures that a packet is delivered to every node in the
network. This may be wasteful if there is a single destination,
but it is effective for broadcasting.
• Second, it is tremendously robust. Even if large numbers of
routers are blown (e.g., in a military network), flooding will
find a path if one exists, to get a packet to its destination.
• Flooding requires little setup. The routers only need to know
their neighbours. This means that flooding can be used as a
building block for other routing algorithms.
• Flooding can also be used as a metric against which other
routing algorithms can be compared.
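The discard-or-forward decision described above can be sketched in Python. This is a minimal, illustrative model of a single router, not a protocol implementation; the class and method names are assumptions. It combines both damping measures from the text: a hop counter and a per-source sequence-number list compressed by the counter k.

```python
# Sketch of the flooding decision at one router (hypothetical names).
# For each source, the router remembers the counter k, meaning all
# sequence numbers through k have been seen, plus any later gaps.

class FloodingRouter:
    def __init__(self, name, neighbors):
        self.name = name
        self.neighbors = neighbors          # names of attached lines
        self.seen = {}                      # source -> (k, set of seq > k seen)

    def receive(self, source, seq, hops, arrived_on):
        if hops <= 0:
            return []                       # hop counter exhausted: discard
        k, extra = self.seen.get(source, (-1, set()))
        if seq <= k or seq in extra:
            return []                       # already flooded: discard duplicate
        extra.add(seq)
        while k + 1 in extra:               # advance k past any contiguous
            extra.remove(k + 1)             # run of seen sequence numbers
            k += 1
        self.seen[source] = (k, extra)
        # flood out on every line except the one the packet arrived on
        return [n for n in self.neighbors if n != arrived_on]
```

A packet seen for the first time is sent on all other lines; a duplicate, or a packet whose hop counter has reached zero, is silently dropped.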
Shortest Path Routing (Dijkstra’s algorithm)
• The target of shortest path routing is to find a route between any
pair of nodes such that the path length is minimum.
• One way of measuring path length is the number of hops. Using
this metric, the paths ABC and ABE in Fig. are equally long.
Another metric is the geographic distance in kilometers, in which
case ABC is clearly much longer than ABE (assuming the figure is
drawn to scale).
• However, many other metrics are also possible. For example, each
node could be labeled with the mean delay of a standard test
packet, as measured by hourly runs. With this, the shortest path is
the fastest path.
• In general, the labels on nodes could be computed as a function of
distance, bandwidth, average traffic, communication cost,
measured delay, and other factors. By changing the weighting
function, the algorithm would then compute the ‘‘shortest’’ path.
• Dijkstra (1959) algorithm - finds the shortest paths between a
source and all destinations in the network.
• Each node is labeled (in parentheses) with its distance from the
source node along the best known path.
• The distances must be non-negative, as they will be if they are
based on real quantities like bandwidth and delay.
• Initially, no paths are known, so all nodes are labeled with
infinity.
• As the algorithm proceeds and paths are found, the labels may
change, reflecting better paths.
• A label may be either tentative or permanent.
• Initially, all labels are tentative. When it is discovered that a
label represents the shortest possible path from the source to that
node, it is made permanent and never changed thereafter.
• To illustrate how the algorithm works, look at the weighted,
undirected graph of Fig. (a), where the weights represent, for
example, distance.
Algorithm
• On each node a label is placed (in parentheses): the first value in the
label is the distance from the start node, while the second is the previous node.
• Open two lists: visited nodes and unvisited nodes.
• Let the distance of the start node from the start node = 0.
• Let the distance of all other nodes from the start node = ∞.
Repeat
– Visit the unvisited node with the smallest known distance from the
start node.
– For the current node, examine its unvisited neighbours.
– For the current node, calculate distance of each neighbour from
start node.
– If the calculated distance of a node is less than the known distance,
update the shortest distance.
– Update the previous node for each of the updated distances.
– Add the current node to the list of the visited nodes.
Until all nodes visited.
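The steps above can be sketched in Python. The graph below and its weights are illustrative, not taken from the figure; `dist` holds the first value of each label and `prev` the second.

```python
import heapq

def dijkstra(graph, start):
    dist = {node: float("inf") for node in graph}   # all labels start at infinity
    prev = {node: None for node in graph}           # previous node on best path
    dist[start] = 0
    visited = set()
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)              # unvisited node with the
        if node in visited:                         # smallest known distance
            continue
        visited.add(node)                           # label becomes permanent
        for neighbor, weight in graph[node].items():
            if neighbor in visited:
                continue
            new_dist = d + weight
            if new_dist < dist[neighbor]:           # better path found:
                dist[neighbor] = new_dist           # update tentative label
                prev[neighbor] = node
                heapq.heappush(queue, (new_dist, neighbor))
    return dist, prev

# Illustrative weighted undirected graph (adjacency dict of link costs).
graph = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3},
    "D": {"C": 3, "E": 2},
    "E": {"B": 2, "D": 2, "G": 1},
    "G": {"A": 6, "E": 1},
}
dist, prev = dijkstra(graph, "A")
```

The distances must be non-negative, as the text notes, or the "made permanent, never changed" rule would no longer be safe.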
Distance Vector routing
• A distance vector routing algorithm is a dynamic distributed
algorithm in which the route between any two nodes is the route
with minimum distance.
• In this protocol, as the name implies, each node maintains a
vector (in the table form) of minimum distances to every node.
• The algorithm is sometimes called Bellman-Ford routing
algorithm, after the researchers who developed it.
• It was the original ARPANET routing algorithm and was also
used in the Internet under the name RIP (Routing Information
Protocol). Now, it is not used much.
• In this algorithm, each router maintains a routing table containing one
entry for each router in the network.
• This entry has two parts: the preferred outgoing line to use for
that destination and an estimate of the distance to that
destination.
• The distance might be measured as the number of hops or using
another metric for computing shortest paths.
• The router is assumed to know the ‘‘distance’’ to each of its
neighbours. If the metric is hops, the distance is just one hop. If
the metric is propagation delay, the router can measure it
directly with special ECHO packets that the receiver just
timestamps and sends back as fast as it can.
• Figure shows a system of five nodes with their corresponding
tables.
• Initialization – Each node can know only the distance between itself
and its neighbors, those directly connected to it. The distance for any
entry that is not a neighbour is marked as infinite (unreachable).
• Sharing – each node shares its routing table with its immediate
neighbors periodically and when there is a change. A node shares
only the first two columns of its table to any neighbour.

Initialization of tables in distance vector routing


• Updating – When a node receives a two-column table from a
neighbour, it needs to update its routing table. Updating takes
three steps (refer to the figure):
1. The receiving node needs to add the cost between itself and the sending
node to each value in the second column.
2. The receiving node needs to add the name of the sending node to each row
as the third column if the receiving node uses information from any row. The
sending node is the next node in the route.
3. The receiving node needs to compare each row of its old table with the
corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the
smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row. For
example, suppose node C has previously advertised a route to node X with
distance 3. Suppose that now there is no path between C and X; node C now
advertises this route with a distance of infinity. Node A must not ignore this value
even though its old entry is smaller. The old route does not exist any more. The
new route has a distance of infinity.
Updating in distance vector routing
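The three updating steps above can be sketched in Python. This is a minimal model, not RIP itself; the function and table layout are assumptions. A table maps destination to (cost, next hop); infinity is preserved so a withdrawn route is not ignored.

```python
INFINITY = float("inf")

def update_table(my_table, neighbor, neighbor_table, link_cost):
    # Step 1: add the cost of the link to each value in the received table.
    # Step 2: the sending neighbor becomes the next hop for adopted rows.
    for dest, cost in neighbor_table.items():
        new_cost = cost + link_cost
        old_cost, next_hop = my_table.get(dest, (INFINITY, None))
        # Step 3a: different next hop -> keep the smaller cost (tie: keep old).
        # Step 3b: same next hop -> always take the new value, even if worse,
        # so a route that has gone to infinity replaces the stale entry.
        if next_hop == neighbor or new_cost < old_cost:
            my_table[dest] = (new_cost, neighbor)
    return my_table
```

For example, if node A reaches X via C at cost 3 and C now advertises X at infinity, step 3b forces A's entry for X to infinity rather than keeping the stale smaller cost.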
• When to Share
– Periodic Update: A node sends its routing table periodically,
normally every 30 seconds.
– Triggered Update: A node sends its routing table to its
neighbors whenever there is a change in its routing table. A
change can result from the following:
1. A node receives a table from a neighbor, resulting in changes in its
own table after updating.
2. A node detects some failure in the neighboring links which results in a
distance change to infinity.
Count-to-infinity problem(1)
• Two-node Instability
At the beginning, both nodes A and B know how to reach node
X. But suddenly, the link between A and X fails. Node A
changes its table. If A can send its table to B immediately,
everything is fine. However, the system becomes unstable if B
sends its routing table to A before receiving A’s routing table.
Node A receives the update and, assuming that B has found a
way to reach X, immediately updates its routing table.
Based on the triggered update strategy, A sends its new routing
table to B. Now node B thinks that something has changed
around A and updates its routing table. The cost of
reaching X increases gradually until it reaches infinity, and packets
bounce between A and B, creating a two-node loop problem.
To avoid two-node instability, we apply the following techniques.
• Defining infinity – redefine infinity as a smaller number, say
100. Most distance vector implementations define infinity as 16,
limiting a path to a maximum of 15 hops.
• Split Horizon – Here, a node does not send its routing table as it
is through every interface. Instead, information received from a node
is not sent back to that node. In the scenario above, node B learned
its route to X from A, so in the next update node B eliminates the
route to X from its advertisement to A.
• Split Horizon and Poison Reverse – Using split horizon alone has
one drawback. Normally, the distance vector protocol uses a
timer, and if there is no news about a route, the node deletes the
route from its table. When node B in the previous scenario
eliminates the route to X from its advertisement to A, node A
cannot tell whether this is due to the split horizon strategy (the
source of the information was A itself) or because B has not received any
news about X recently. To avoid this, the split horizon strategy
can be combined with the poison reverse strategy. Now node B
can still advertise a value for X, but with the message “Do not use
this value; what I know about this route comes from you”, i.e., a
distance of infinity.
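The two filtering strategies can be sketched in Python. This is an illustrative model, not a protocol implementation; the table maps destination to (cost, next hop) and the function builds the advertisement one node sends to one particular neighbor.

```python
INFINITY = float("inf")

def advertise(table, to_neighbor, poison_reverse=True):
    adv = {}
    for dest, (cost, next_hop) in table.items():
        if next_hop == to_neighbor:
            if poison_reverse:
                # "do not use this; what I know about it comes from you"
                adv[dest] = INFINITY
            # plain split horizon: the route is simply omitted
        else:
            adv[dest] = cost
    return adv
```

With poison reverse the route learned from the neighbor is advertised back as unreachable, so the neighbor need not guess why the route disappeared.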
Count-to-infinity problem(2)
• Three-node Instability
At the beginning, nodes A, B, and C all know how to reach
node X. But suddenly, the link between A and X fails. Node A
changes its table. If A can send its table to B and C
immediately, everything is fine. After getting the updated
routing table from A, node B immediately updates its table,
but the packet to C is lost in the network and never reaches C.
Node C remains in the dark and still thinks that there
is a route to X via A with a distance of 5. After a while, node C
sends its routing table to B. Now node B thinks that node
X is reachable through node C, so node B updates its
routing table toward node X. Similarly, node A updates
its routing table based on C’s routing table. The cost of
reaching X increases gradually until it reaches infinity, and packets
circulate among A, B, and C, creating a three-node loop problem.
Link State Routing
• In LSR, each node has the entire topology of the network.
– The list of nodes and links
– How they are connected including the type
– Cost (metric)
– Condition of the link (up or down).
• In LSR, the nodes use Dijkstra’s algorithm to build a routing
table.
• No node can know the topology at the beginning or after a
change somewhere in the network. LSR is based on the
assumption that, although the global knowledge about the
topology is not clear, each node has partial knowledge: it
knows the state (type, condition, and cost) of its links.
Building Routing Tables
• In LSR, 4 sets of actions are required to ensure that each node has the
routing table showing the least-cost path to every other node:
1. Creation of the states of the links by each node, called the link state
packet (LSP).
2. Dissemination of LSPs to every other router, called flooding, in an
efficient and reliable way.
3. Formation of a shortest path tree for each node.
4. Calculation of a routing table based on the shortest path tree.
Creation of Link State Packet (LSP):
• A link state packet carries
– Node identity
– List of links
– Sequence number
– Age.
• The first two are needed to make the topology. The third facilitates flooding
and distinguishes new LSPs from old ones. The fourth, age, prevents old
LSPs from remaining in the network for a long time.
(a) A subnet. (b) The link state packets for this subnet.
• LSPs are generated on two occasions:
– On a periodic basis (60 minutes or 2 hours)
– When there is a change in the topology of the network.
Flooding of LSPs:
• After a node has prepared an LSP, it must be disseminated to all other
nodes, not only to its neighbors. The process is called flooding and
based on the following:
1. The creating node sends a copy of the LSP through each connected
interface.
2. A node that receives an LSP compares it with the copy it may
already have. If the newly arrived LSP is older than the one it has
(found by checking the sequence number), it discards the LSP. If it is
newer, the node does the following.
– It discards the old LSP and keeps the new one.
– It sends a copy of it out of each interface except the one from
which the packet arrived. This guarantees that flooding stops
somewhere in the domain (where a node has only one interface)
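The comparison-and-forward rule above can be sketched in Python. This is an illustrative model; the database keeps only the newest sequence number per node, and the function returns the interfaces on which the LSP should be re-flooded.

```python
def handle_lsp(lsp_db, interfaces, node_id, seq, arrived_on):
    """Apply the LSP flooding rule at one router (illustrative names)."""
    if seq <= lsp_db.get(node_id, -1):
        return []                       # older than (or same as) stored copy: discard
    lsp_db[node_id] = seq               # discard the old LSP, keep the new one
    # send a copy out of each interface except the arriving one
    return [i for i in interfaces if i != arrived_on]
```

At a node with only one interface the returned list is empty, which is exactly how flooding stops at the edge of the domain.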
Formation of Shortest Path Tree:
• The topology is not sufficient to find the shortest path to every other
node; a shortest path tree is needed.
• A tree is a graph of nodes and links; one node is called the root. All
other nodes can be reached from the root through only one single
route. A shortest path tree is a tree in which the path between the
root and every other node is the shortest.
• The Dijkstra’s algorithm creates a shortest path tree.
• The figure shows the flow chart of Dijkstra’s algorithm.
Calculation of Routing Table from Shortest Path Tree:
• Each node uses its shortest path tree to construct its
routing table. The routing table shows the cost of reaching each
node from the root.
Dijkstra’s
algorithm
Example of formation of shortest path tree
| Distance vector routing | Link state routing |
|---|---|
| Distributed algorithm that works with local knowledge. | Distributed algorithm that works with global knowledge. |
| Has limited scope of the network topology. | Has full scope of the network topology. |
| Gets knowledge of the network only from the directly connected neighboring nodes. | Gets knowledge of the entire topology of the network through broadcast messages sent by each node. |
| Sends periodic routing updates to neighboring nodes only. | Sends periodic routing updates to every node of the network using a reliable flooding technique. |
| Only the distance vector information is included in the periodic routing updates. | The entire routing table is included in the periodic updates. |
| Convergence time is not an issue. | Convergence time is crucial. |
| First-generation routing protocols. | Better, newer-generation protocols. |
| Example: Routing Information Protocol (RIP). | Example: Open Shortest Path First (OSPF). |
| Suited for small-sized networks. | Suited for large-sized networks. |
Hierarchical Routing
• As networks grow in size, the routing tables grow proportionally.
• Not only is router memory consumed by ever-increasing tables, but
more CPU time is needed to scan them and more bandwidth is needed
to send status reports about them.
• At a certain point, the network may grow to the point where it is no
longer feasible for every router to have an entry for every other router,
so the routing will have to be done hierarchically.
• When hierarchical routing is used, the routers are divided into regions.
• Each router knows all the details about how to route packets to
destinations within its own region but knows nothing about the
internal structure of other regions.
• For huge networks, a two-level hierarchy may be insufficient; it may
be necessary to group the regions into clusters, the clusters into zones,
the zones into groups, and so on, until we run out of names for
aggregations.
• Figure gives a quantitative example of routing in a two-level
hierarchy with five regions.
• The full routing table for router 1A has 17 entries, as shown in
Fig. (b). When routing is done hierarchically, as in Fig. (c),
there are entries for all the local routers, as before, but all other
regions are condensed into a single router, so all traffic for
region 2 goes via the 1B-2A line, but the rest of the remote
traffic goes via the 1C-3B line. Hierarchical routing has reduced
the table from 17 to 7 entries.
• As the ratio of the number of regions to the number of routers
per region grows, the savings in table space increase.
• However, these gains in space are not free. There is a penalty to
be paid: increased path length.
• For example, the best route from 1A to 5C is via region 2, but
with hierarchical routing all traffic to region 5 goes via region
3, because that is better for most destinations in region 5.
Hierarchical routing.
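The condensation step described above can be sketched in Python. The encoding is an assumption for illustration: a destination name like "2B" belongs to region "2", the full table maps each destination router to an outgoing line, and one representative line is kept per remote region.

```python
def condense(full_table, local_region):
    """Collapse a full routing table into a hierarchical one (illustrative)."""
    hierarchical = {}
    for dest, line in full_table.items():
        region = dest[0]                       # first character names the region
        if region == local_region:
            hierarchical[dest] = line          # keep local routers as-is
        else:
            hierarchical.setdefault(region, line)  # one entry per remote region
    return hierarchical
```

In the two-level example from the text, this is how 17 entries at router 1A shrink to 7: local routers stay, and each remote region collapses to a single row.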
Broadcasting
• In some applications, hosts need to send messages to many or all
other hosts. For example, a service distributing weather reports,
stock market updates, or live radio programs.
• Sending a packet to all destinations simultaneously is called
broadcasting.
• One broadcasting method that requires no special features from the
network is multiple unicast routing - the source simply sends a
distinct packet to each destination.
• Not only is this method wasteful of bandwidth and slow, but it also
requires the source to have a complete list of all destinations.
Hence, this method is not desirable in practice.
• An improvement is multidestination routing, in which each
packet contains a list of destinations.
• When a packet arrives at a router, the router checks all the
destinations to determine the set of output lines that will be needed.
(An output line is needed if it is the best route to at least one of the
destinations.)
• The router generates a new copy of the packet for each output line
to be used and includes in each packet only those destinations that
are to use the line. In effect, the destination set is partitioned among
the output lines. After a sufficient number of hops, each packet will
carry only one destination like a normal packet.
• The network bandwidth is therefore used more efficiently.
However, this scheme still requires the source to know all the
destinations, and it is as much work for a router to determine
where to send one multidestination packet as for several distinct packets.
• Flooding can be a better broadcast routing technique: when
implemented with a sequence number per source, it uses links
efficiently, with a decision rule at routers that is relatively simple.
• Another idea is reverse path forwarding – elegant and
remarkably simple.
• When a broadcast packet arrives at a router, the router checks to see if
the packet arrived on the link that is normally used for sending packets
toward the source of the broadcast. If so, there is an excellent chance
that the broadcast packet itself followed the best route from the router
and is therefore the first copy to arrive at the router. This being the
case, the router forwards copies of it onto all links except the one it
arrived on. If, however, the broadcast packet arrived on a link other
than the preferred one for reaching the source, the packet is discarded
as a likely duplicate.
• Reverse path forwarding is efficient while being easy to implement. It
sends the broadcast packet over each link only once in each direction,
yet it requires only that routers know how to reach all destinations,
without needing to remember sequence numbers or list all destinations
in the packet.
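The check described above can be sketched in Python. This is an illustrative model of one router's decision, with assumed names: `unicast_next_hop` maps a destination to the link normally used for sending packets toward it.

```python
def rpf_forward(unicast_next_hop, links, source, arrived_on):
    """Reverse path forwarding decision at one router (illustrative)."""
    if unicast_next_hop.get(source) != arrived_on:
        return []                  # not the preferred link: likely a duplicate
    # arrived on the best route from the source: copy onto all other links
    return [l for l in links if l != arrived_on]
```

The router needs nothing beyond its ordinary unicast routing information, which is why no sequence numbers or destination lists are required.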
• In Fig. (c), the tree (with source node I) built by reverse path
forwarding is shown. The preferred path to I is indicated by a
circle around the letter. After five hops and 24 packets, the
broadcasting terminates.
• The broadcast algorithm which improves on the behaviour of
reverse path forwarding is the one that makes explicit use of
the sink tree (or any other convenient spanning tree) for the
router initiating the broadcast.
• A spanning tree is a subset of the network that includes all the
routers but contains no loops.
• Sink trees are spanning trees.
• If each router knows which of its lines belong to the spanning
tree, it can copy an incoming broadcast packet onto all the
spanning tree lines except the one it arrived on.
An example

Reverse path forwarding. (a) A subnet. (b) A sink tree. (c) The
tree built by reverse path forwarding.
• This method makes excellent use of bandwidth, generating the
absolute minimum number of packets necessary to do the job.
• In Fig., for example, when the sink tree of part (b) is used as the
spanning tree, the broadcast packet is sent with the minimum 14
packets.
• The only problem is that each router must have knowledge of
some spanning tree for the method to be applicable.
• Sometimes this information is available (e.g., with link state
routing, all routers know the complete topology, so they can
compute a spanning tree) but sometimes it is not (e.g., with
distance vector routing).
Multicasting
• Some applications, such as a multiplayer game or live video of
a sports event streamed to many viewing locations, send
packets to multiple receivers.
• Unless the group is very small, sending a distinct packet to
each receiver is expensive.
• On the other hand, broadcasting a packet is wasteful if the
group consists of, say, 1000 machines on a million-node
network, so that most receivers are not interested in the
message.
• Thus, we need a way to send messages to well-defined groups
that are numerically large in size but small compared to the
network as a whole.
• Sending a message to such a group is called multicasting, and
the routing algorithm used is called multicast routing.
• All multicasting schemes require some way to create and
destroy groups and to identify which routers are members of a
group.
• Each group is identified by a multicast address, and routers
know the groups to which they belong.
• Multicast routing schemes build on the broadcast routing
schemes – sending packets along spanning trees to deliver the
packets to the members of the group.
• If the group is dense, broadcast is a good start because it
efficiently gets the packet to all parts of the network. But
broadcast will reach some routers that are not members of the
group, which is wasteful.
• The solution is to prune the broadcast spanning tree by
removing links that do not lead to members. The result is an
efficient multicast spanning tree.
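The pruning idea can be sketched in Python. This is an illustrative model, with assumed encoding: the spanning tree maps each node to its parent (the root maps to None) and `members` is the set of routers with group members attached. Every node on a path from a member up to the root is kept; everything else is cut.

```python
def prune(tree, members):
    """Prune a broadcast spanning tree down to a multicast tree (illustrative)."""
    keep = set()
    for node in members:
        while node is not None and node not in keep:
            keep.add(node)          # keep each node on the path from a
            node = tree[node]       # member up to the root
    return {n: p for n, p in tree.items() if n in keep}
```

The result uses only the links actually needed to reach members of the group, which is the multicast spanning tree the text describes.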
Various ways of pruning the spanning tree are possible:
• The simplest one can be used if link state routing is used and
each router is aware of the complete topology, including which
hosts belong to which groups. Each router can then construct
its own pruned spanning tree for each sender to the group in
question by constructing a sink tree for the sender as usual and
then removing all links that do not connect group members to
the sink node.
• Suppose that a network has n groups, each with an average of
m nodes. At each router and for each group, m pruned
spanning trees must be stored, for a total of mn trees.
• MOSPF (Multicast OSPF) is an example of a link state
protocol that works in this way.
• With distance vector routing, a different pruning strategy can
be followed. The basic algorithm is reverse path forwarding.
• However, whenever a router with no hosts interested in a
particular group and no connections to other routers receives a
multicast message for that group, it responds with a PRUNE
message, telling the neighbour (that sent the message) not to
send it any more multicasts from the sender for that group.
When a router with no group members among its own hosts
has received such messages on all the lines to which it sends
the multicast, it, too, can respond with a PRUNE message.
• In this way, the spanning tree is recursively pruned.
• DVMRP (Distance Vector Multicast Routing Protocol) is an
example of a multicast routing protocol that works this way.
• Pruning results in efficient spanning trees that use only the
links that are actually needed to reach members of the group.
• One potential disadvantage is that pruning is a lot of work for
routers, especially in large networks.
An example of Multicast Routing

(a) A network. (b) A spanning tree for the leftmost router. (c) A
multicast tree for group 1. (d) A multicast tree for group 2.
• When many large groups with many senders exist, considerable
storage is needed to store all the trees.
• An alternative design uses core-based trees to compute a single
spanning tree for the group. All of the routers agree on a root
(called the core) and build the tree by sending a packet from
each member to the root.
• Fig. (a) (in next slide) shows a core-based tree for group 1.
• To send to this group, a sender sends a packet to the core. When
the packet reaches the core, it is forwarded down the tree. This
is shown in Fig. (b) for the sender on the right hand side of the
network.
• As a performance optimization, packets destined for the group
do not need to reach the core before they are multicast. As soon
as a packet reaches the tree, it can be forwarded up toward the
root, as well as down all the other branches. This is the case for
the sender at the top of Fig. (b).
• Having a shared tree is not optimal for all sources.
• The inefficiency depends on where the core and senders are
located, but often it is reasonable when the core is in the middle
of the senders.
• Also of note is that shared trees can be a major savings in
storage costs, messages sent, and computation.
• Each router has to keep only one tree per group, instead of m
trees. Further, routers that are not part of the tree do no work at
all to support the group.
• PIM (Protocol Independent Multicast) is the popular shared
tree approach protocol used in the Internet.
Internet Control Protocols
• In addition to IP, used for data transfer, the Internet has several
companion control protocols that are used in the network
layer. They include ICMP, ARP, RARP and DHCP.
• Let’s look at each of these in turn, describing the versions that
correspond to IPv4.
• ICMP and DHCP have similar versions for IPv6; the
equivalent of ARP is called NDP (Neighbor Discovery
Protocol) for IPv6.
Internet Control Message Protocol (ICMP)
• ICMP has been designed to compensate for two deficiencies
of IP: the lack of error control and the lack of assistance mechanisms.
• It is a companion protocol to IP.
• ICMP messages are divided into two broad categories: error-
reporting messages and query messages.
• The error-reporting messages report problems that a router or a
host (destination) may encounter when it processes an IP packet.
ICMP always reports error messages to the original source.
• The query messages, which occur in pairs, help a host or a
network manager get specific information from a router or
another host.
• Each ICMP message is encapsulated in an IP packet.
• About a dozen types of ICMP messages are defined. The most
important ones are listed in Fig.
• The DESTINATION UNREACHABLE message is used
when the router cannot locate the destination or when a packet
with the DF bit set cannot be delivered because a “small-packet”
network stands in the way.
• The TIME EXCEEDED message is sent when a packet is
dropped because its TTL counter has reached zero. This event
is a symptom that packets are looping, that there is enormous
congestion, or that the timer values are being set too low.
• The PARAMETER PROBLEM message indicates that an
illegal value has been detected in a header field. This problem
indicates a bug in the sending host's IP software or possibly in
the software of a router transited.
• The SOURCE QUENCH message was formerly used to
throttle hosts that were sending too many packets. When a host
received this message, it was expected to slow down. It is
rarely used anymore because when congestion occurs, these
packets tend to add more fuel to the fire.
• The REDIRECT message is used when a router notices that a
packet seems to be routed wrong. It is used by the router to tell
the sending host about the probable error.
• The ECHO and ECHO REPLY messages are used to see if a
given destination is reachable and alive. Upon receiving the
ECHO message, the destination is expected to send an ECHO
REPLY message back.
•  The TIMESTAMP REQUEST/REPLY messages are
similar, except that the arrival time of the message and the
departure time of the reply are recorded in the reply. This
facility is used to measure network performance.
• The ROUTER ADVERTISEMENT/SOLICITATION
messages are used to let hosts find nearby routers. A host
needs to learn the IP address of at least one router to be able to
send packets off the local network.
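All of these messages share the same basic layout: a type, a code, a checksum, and type-specific fields. As an illustration, here is a minimal sketch of assembling an ECHO request (type 8, code 0) with the Internet checksum; `build_echo_request` and `inet_checksum` are our own illustrative helpers, not a standard API.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum (RFC 1071): one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                   # pad odd-length data with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                    # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP ECHO request: type=8, code=0, checksum over the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field zeroed
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=0x1234, seq=1, payload=b"ping")
# A receiver validates the message by checksumming all of it; the
# one's-complement arithmetic makes the result 0 for an intact packet.
assert inet_checksum(pkt) == 0
```

The destination answers with the same structure but type 0 (ECHO REPLY), echoing the identifier, sequence number, and payload.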
Address Resolution Protocol (ARP)
• Although every machine on the Internet has one (or more) IP
addresses, these cannot actually be used for sending packets
because the data link layer NICs (Network Interface Cards)
such as Ethernet cards do not understand Internet addresses.
• The NICs send and receive frames based on 48-bit Ethernet
addresses.
How do IP addresses get mapped onto data link layer
addresses, such as Ethernet?
• To understand how this works, let’s use the example of Fig., in
which a small university with two switched Ethernet /24
networks is illustrated.
• One is in the Computer Science (CS) Dept., with IP address
192.32.65.0/24 and one in Electrical Engineering (EE), with IP
address 192.32.63.0/24.
• The two LANs are connected by an IP router.
• Each machine on an Ethernet and each interface on the router
has a unique Ethernet address, labeled E1 through E6, and a
unique IP address on the CS or EE network.
How a user on host 1 sends a packet to a user on host 2 on the CS
network?
• Let’s assume the sender knows the name of the intended receiver,
possibly something like eagle.cs.uni.edu.
• The first step is to find the IP address for host 2.
• This lookup is performed by the Domain Name System (DNS).
• For the moment, let’s just assume that DNS returns the IP address for host
2 (192.32.65.5).
• The upper layer software on host 1 now builds a packet with 192.32.65.5
in the Destination address field and gives it to the IP software to transmit.
• The IP software can look at the address and see that the destination is on
its own network, but it needs some way to find the destination's Ethernet
address to send the frame.
• One solution is to have a configuration file somewhere in the system that
maps IP addresses onto Ethernet addresses.
• While this solution is certainly possible, for organizations with thousands
of machines keeping all these files up to date is an error-prone, time-
consuming job.
• A better solution is for host 1 to output a broadcast packet onto
the Ethernet asking who owns IP address 192.32.65.5. The
broadcast will arrive at every machine on the CS Ethernet, and
each one will check its IP address. Host 2 alone will respond
with its Ethernet address (E2). In this way host 1 learns that IP
address 192.32.65.5 is on the host with Ethernet address E2.
The protocol used for asking this question and getting the
reply is called ARP (Address Resolution Protocol).
• The advantage of using ARP over configuration files is the
simplicity. The system manager does not have to do much
except assign each machine an IP address and decide about
subnet masks. ARP does the rest.
• Various optimizations are possible to make ARP work more efficiently.
• To start with, once a machine has run ARP, it caches the result. Next
time it will find the mapping in its own cache, thus eliminating the need
for a second broadcast.
• In many cases, host 2 will need to send back a reply, forcing it, too, to
run ARP to determine the sender’s Ethernet address. This ARP
broadcast can be avoided by having host 1 include its IP-to-Ethernet
mapping in the ARP packet. When the ARP broadcast arrives at host 2,
the pair (192.32.65.7, E1) is entered into host 2’s ARP cache. In fact, all
machines on the Ethernet can enter this mapping into their ARP caches.
• To allow mappings to change, for example, when a host is configured to
use a new IP address (but keeps its old Ethernet address), every
machine broadcasts its mapping when it is configured. This broadcast is
generally done in the form of an ARP request looking for the machine’s
own IP address. There should not be a response, but a side effect of the
broadcast is to make or update an entry in everyone’s ARP cache. This
is known as a gratuitous ARP.
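The caching behavior described above can be sketched as a toy resolver; the class name, the dictionary network model, and the example addresses from the figure are purely illustrative, not any real OS API.

```python
class ArpCache:
    """Toy ARP cache: resolve() broadcasts only on a miss, learn() is passive."""
    def __init__(self):
        self.table = {}        # IP address -> Ethernet address
        self.broadcasts = 0    # ARP requests actually put on the wire

    def resolve(self, ip, network):
        """Return the Ethernet address for ip; broadcast only on a cache miss."""
        if ip not in self.table:
            self.broadcasts += 1          # miss: ARP request goes on the wire
            self.table[ip] = network[ip]  # the owner of ip answers with its MAC
        return self.table[ip]

    def learn(self, ip, mac):
        """Passive learning, e.g. from a gratuitous ARP broadcast."""
        self.table[ip] = mac

# The CS network of the example, modeled as a simple IP -> MAC mapping.
cs_net = {"192.32.65.5": "E2", "192.32.65.7": "E1"}
cache = ArpCache()
assert cache.resolve("192.32.65.5", cs_net) == "E2"  # first lookup broadcasts
assert cache.resolve("192.32.65.5", cs_net) == "E2"  # second hit is cached
assert cache.broadcasts == 1                         # only one broadcast sent
cache.learn("192.32.65.7", "E1")  # gratuitous ARP updates the cache for free
```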
• Now let’s look at Fig. again, this time assume that host 1
wants to send a packet to host 4 (192.32.63.8) on the EE
network.
• Host 1 will see that the destination IP address is not on the CS
network. It knows to send all such off-network traffic to the
router, which is also known as the default gateway.
• By convention, the default gateway is the lowest address on
the network (192.32.65.1).
• To send a frame to the router, host 1 must still know the
Ethernet address of the router interface on the CS network.
• It discovers this by sending an ARP broadcast for 192.32.65.1,
from which it learns E3. It then sends the frame.
• The same lookup mechanisms are used to send a packet from
one router to the next over a sequence of routers in an Internet
path.
• When the Ethernet NIC of the router gets this frame, it gives
the packet to the IP software. It knows from the network
masks that the packet should be sent onto the EE network
where it will reach host 4.
• If the router does not know the Ethernet address for host 4,
then it will use ARP again.
• The table in Fig. lists the source and destination Ethernet and
IP addresses that are present in the frames as observed on the
CS and EE networks.
• Observe that the Ethernet addresses change with the frame on
each network while the IP addresses remain constant (because
they indicate the endpoints across all of the interconnected
networks).
• It is also possible to send a packet from host 1 to host 4
without host 1 knowing that host 4 is on a different network.
• The solution is to have the router answer ARPs on the CS
network for host 4 and give its Ethernet address, E3, as the
response.
• It is not possible to have host 4 reply directly because it will
not see the ARP request (as routers do not forward Ethernet-
level broadcasts).
• The router will then receive frames sent to 192.32.63.8 and
forward them onto the EE network.
• This solution is called proxy ARP.
• It is used in special cases in which a host wants to appear on a
network even though it actually resides on another network.
Reverse Address Resolution Protocol (RARP)
• It finds the logical address for a machine that knows only its
physical address.
• Each host or router is assigned one or more logical (IP)
addresses, which are unique and independent of the physical
(hardware) address of the machine.
• To create an IP datagram, a host or a router needs to know its
own IP address or addresses.
• The IP address of a machine is usually read from a
configuration file stored on disk.
• However, a diskless machine is usually booted from ROM,
which has minimum booting information. The ROM is
installed by the manufacturer. It cannot include the IP address
because the IP addresses on a network are assigned by the
network administrator.
• The machine can get its physical address (by reading its NIC,
for example), which is unique locally. It can then use the
physical address to get the logical address by using the RARP
protocol.
• A RARP request is created and broadcast on the local network.
• Another machine on the local network that knows all the IP
addresses will respond with a RARP reply.
• The requesting machine must be running a RARP client
program; the responding machine must be running a RARP
server program.
• There is a serious problem with RARP: Broadcasting is done at
the data link layer. The physical broadcast address, in the case
of Ethernet, does not pass the boundaries of a network. This
means that if an administrator has several networks or several
subnets, a RARP server is needed for each network or subnet.
This is the reason that RARP is almost obsolete.
Dynamic Host Configuration Protocol (DHCP)
• ARP (as well as other Internet protocols) makes the
assumption that hosts are configured with some basic
information, such as their own IP addresses.
How do hosts get this information?
• It is possible to manually configure each computer, but that is
tedious and error-prone. There is a better way, and it is called
DHCP (Dynamic Host Configuration Protocol).
• DHCP provides static and dynamic address allocation,
which can be manual or automatic.
• With DHCP, every network must have a DHCP server that is
responsible for configuration.
• When a computer is started, it has a built-in Ethernet or other
link layer address embedded in the NIC, but no IP address.
• Now, the computer broadcasts a request for an IP address on
its network.
• It does this by using a DHCP DISCOVER packet.
• This packet must reach the DHCP server.
• If that server is not directly attached to the network, the router
will be configured to receive DHCP broadcasts and relay them
to the DHCP server, wherever it is located.
• A DHCP server has a database that statically binds physical
addresses to IP addresses and has a second database with a pool
of available IP addresses.
• When a DHCP client sends a request to a DHCP server, the
server first checks its static database. If an entry with the
requested physical address exists in the static database, the
permanent IP address of the client is returned. On the other
hand, if the entry does not exist in the static database, the server
selects an IP address from the available pool, assigns the address
to the client, and adds the entry to the dynamic database.
• DHCP server sends it to the host in a DHCP OFFER packet (which
again may be relayed via the router).
• To do this work, the server identifies a host using its Ethernet address
(which is carried in the DHCP DISCOVER packet).
• The DHCP server assigns an IP address for a negotiable period of time.
• The DHCP server issues a lease for a specific time. When the lease
expires, the client must either stop using the IP address or renew the
lease. The server has the option to agree or disagree with the renewal.
If the server disagrees, the client stops using the address.
• It is widely used in the Internet to configure all sorts of parameters in
addition to providing hosts with IP addresses.
• In business and home networks, DHCP is used by ISPs to set the
parameters of devices over the Internet access link, so that customers
do not need to phone their ISPs to get this information.
• Common examples of the information that is configured include the
network mask, the IP address of the default gateway, and the IP
addresses of DNS and time servers.
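The server-side allocation rule described above (static database first, then the dynamic pool) can be sketched as follows; the class and method names are our own, and real DHCP also involves OFFER/REQUEST/ACK messages and lease timers that this toy omits.

```python
class DhcpServer:
    """Sketch of the DHCP allocation rule: static bindings win, else lease
    an address from the pool and record it in the dynamic database."""
    def __init__(self, static_db, pool):
        self.static_db = dict(static_db)  # physical address -> permanent IP
        self.pool = list(pool)            # available dynamic addresses
        self.dynamic_db = {}              # physical address -> leased IP

    def discover(self, mac):
        """Handle a DHCP DISCOVER: return the IP address offered to mac."""
        if mac in self.static_db:         # static entry exists: permanent IP
            return self.static_db[mac]
        if mac not in self.dynamic_db:    # otherwise lease from the pool
            self.dynamic_db[mac] = self.pool.pop(0)
        return self.dynamic_db[mac]       # same client keeps the same lease

server = DhcpServer(static_db={"aa:bb": "192.32.65.10"},
                    pool=["192.32.65.50", "192.32.65.51"])
assert server.discover("aa:bb") == "192.32.65.10"  # static binding
assert server.discover("cc:dd") == "192.32.65.50"  # dynamic lease
assert server.discover("cc:dd") == "192.32.65.50"  # repeated DISCOVER, same IP
```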
Congestion
• Congestion occurs in packet-switched networks.
• Congestion in a network may occur if the load on the network
(the number of packets sent to the network) is greater than the
capacity of the network (the number of packets a network can
handle).
• Congestion control refers to the mechanisms/techniques to
control the congestion and keep the load below the capacity or
refers to techniques/mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has
happened.
• Congestion control involves two factors that measure the
performance of a network: delay and throughput.
• The network and transport layers share the responsibility for
handling congestion.
• Figure depicts the onset of congestion.
Goodput is the rate at which useful packets are
delivered by the network.

Figure. With too much traffic, performance degrades sharply.
Factors causing Congestion:
• Congestion occurs because routers and switches have
limited-size queues/buffers that hold the packets before and
after processing.
• Slow processors
• Low-bandwidth lines
• A variety of metrics can be used to monitor the subnet for
congestion. Some of them are
– % of all packets discarded for lack of buffer space
– average queue lengths
– number of packets that time out and are retransmitted
– average packet delay,
– standard deviation of packet delay and so on.
• In all cases, rising numbers indicate growing congestion.
General Principles of Congestion Control
• In general, congestion control mechanisms can be divided into
two broad categories: open-loop congestion control (prevention)
and closed-loop congestion control.
Closed loop:
• Closed loop solutions are based on the concept of a feedback
loop.
• This approach has three parts :
– Monitor the system to detect when and where congestion
occurs.
– Pass this information to places where action can be taken.
– Adjust system operation to correct the problem.
• The closed loop algorithms are further divided into two
subcategories: explicit feedback versus implicit feedback.
• In explicit feedback algorithms, packets are sent back from the point
of congestion to warn the source.
• In implicit algorithms, the source deduces the existence of congestion
by making local observations, such as the time needed for
acknowledgements to come back.
Open Loop:
• In open-loop congestion control, policies are applied to prevent
congestion before it happens.
• Once the system is up and running, midcourse corrections are not made.
• In these mechanisms, congestion control is handled by either the source
or the destination.
• Policies for open-loop control include deciding when to accept new
traffic (admission policy), deciding when to discard packets and which
ones (discarding policy), retransmission policy, window policy, and
acknowledgement policy.
• All of these have in common the fact that they make decisions without
regard to the current state of the network.
Congestion Control in Virtual-Circuit Subnets
• One technique that is widely used to keep congestion that has
already started from getting worse is admission control.
• The idea is simple: once congestion has been signaled, no more
virtual circuits are set up until the problem has gone away.
• Thus, attempts to set up new transport layer connections fail.
An alternative approach:
• An alternative approach is to allow new virtual circuits but
carefully route all new virtual circuits around problem areas.
• For example, consider the subnet of Fig.(a), in which two
routers are congested, as indicated.
• Suppose that a host attached to router A wants to set up a
connection to a host attached to router B.
• Normally, this connection would pass through one of the
congested routers.
Figure. (a) A congested subnet.
(b) A redrawn subnet that eliminates the congestion. A virtual
circuit from A to B is also shown.
• To avoid this situation, we can redraw the subnet as shown in
Fig.(b), omitting the congested routers and all of their lines.
• The dashed line shows a possible route for the virtual circuit that
avoids the congested routers.
• Another strategy is to negotiate an agreement between the host
and subnet when a virtual circuit is set up.
• This agreement normally specifies the volume and shape of the
traffic, quality of service required, and other parameters.
• To keep its part of the agreement, the subnet will typically
reserve resources along the path when the circuit is set up. These
resources can include table and buffer space in the routers and
bandwidth on the lines.
• This kind of reservation can be done all the time as standard
operating procedure or only when the subnet is congested.
• A disadvantage of doing it all the time is that it tends to waste
resources.
Congestion Control in Datagram Subnets
• Each router can easily monitor the utilization of its output lines
and other resources.
• Whenever the selected predefined parameter moves above the
threshold, the output line enters a ''warning'' state.
• Each newly-arriving packet is checked to see if its output line is
in warning state. If it is, some action is taken.
• The action taken can be one of several alternatives:
– Choke packets
– Explicit Congestion Notification
– Load shedding
– Jitter control
Choke packets
• In this approach, the router sends a choke packet back to the
source host, identifying the destination found in the packet.
• The original packet is tagged (a header bit is turned on) so that
it will not generate any more choke packets farther along the
path and is then forwarded in the usual way.
• When the source host gets the choke packet, it is required to
reduce the traffic sent to the specified destination by X percent.
• Since other packets aimed at the same destination are probably
already under way and will generate yet more choke packets,
the host should ignore choke packets referring to that
destination for a fixed time interval.
•  After that period has expired, the host listens for more choke
packets for another interval.
•  If one arrives, the line is still congested, so the host reduces the
flow still more and begins ignoring choke packets again.
•  If no choke packets arrive during the listening period, the host
may increase the flow again.
• Hosts can reduce traffic by adjusting their policy parameters,
for example, their window size.
Figure. (a) A choke packet that affects only the source.
(b) A choke packet that affects each hop it passes through.
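The host-side policy above (cut the rate on a choke packet, ignore further choke packets for a fixed interval, then listen again) might be sketched as follows; the tick-based model, the 10% increase step, and the parameter names are our own simplifications, not from any standard.

```python
def choke_response(rate, events, reduce_pct=50, ignore_ticks=2):
    """Replay the host-side choke-packet policy described above.
    events[t] is True if a choke packet arrives at tick t.  While ignoring,
    choke packets are discarded; in a listening tick, a choke packet cuts
    the rate by reduce_pct percent, and silence lets the rate creep back up."""
    ignore_until = -1
    for t, choked in enumerate(events):
        if t < ignore_until:
            continue                          # still ignoring choke packets
        if choked:
            rate *= (100 - reduce_pct) / 100  # reduce traffic by X percent
            ignore_until = t + 1 + ignore_ticks
        else:
            rate *= 1.1                       # quiet listening tick: increase

    return rate

# One choke packet halves the rate; the next two ticks are ignored.
assert choke_response(100.0, [True, False, False]) == 50.0
```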
Explicit Congestion Notification (ECN)
• Explicit congestion notification is another feedback scheme
that indicates congestion information by marking packets
instead of dropping them.
• The warning state of an output line is signaled by setting a
special bit in the packet's header.
• When the packet arrives at its destination, the transport
entity copies the bit into the next acknowledgement and
sends it back to the source.
• As a result, the source decreases its transmission rate.
• As long as warning bits continue to flow in, the source
continues to decrease its transmission rate.
• The implementation of ECN uses an ECN-specific field in
the IP header.
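The marking-and-echo loop can be sketched in a few lines; the dictionary "packets" and the key names are our own (they mirror the CE mark in the IP header and the ECE flag the receiver echoes back, but this is a toy model, not a protocol implementation).

```python
def forward(packet, queue_len, warn_threshold):
    """Router side: in the warning state, mark the packet instead of
    dropping it (the 'congestion experienced' bit)."""
    if queue_len > warn_threshold:
        packet["ce"] = True
    return packet

def ack_for(packet):
    """Receiver side: the transport entity copies the mark into the
    acknowledgement it sends back to the source."""
    return {"ece": packet.get("ce", False)}

# A congested queue marks the packet, and the ack carries the signal back.
assert ack_for(forward({}, queue_len=9, warn_threshold=5))["ece"] is True
assert ack_for(forward({}, queue_len=2, warn_threshold=5))["ece"] is False
```

On seeing the echoed mark, the source reduces its transmission rate exactly as it would for a choke packet, but no packet was lost along the way.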
Load Shedding
• Routers bring out the heavy artillery: load shedding, as the last
option.
• Load shedding is a fancy way of throwing some packets away
when the routers are being overloaded by them.

Random Early Detection:
• It is well known that dealing with congestion when it is first
detected is more effective than letting it gum up the works and
then trying to deal with it.
• This has led to the idea of discarding some packets before all the
buffer space is really exhausted.
• A popular algorithm for doing this is called RED (Random Early
Detection).
• By having routers drop packets before the situation has become
hopeless (hence the ''early'' in the name), the idea is that there is
time for action to be taken before it is too late.
• To determine when to start discarding, routers maintain a running
average of their queue lengths. When the average queue length on
some line exceeds a threshold, the line is said to be congested and
action is taken.
• Since the router probably cannot tell which source is causing
most of the trouble, picking a packet at random from the queue is
probably as good as it can do.
• In some transport protocols (including TCP), the response to lost
packets is for the source to slow down. This fact can be exploited
to help reduce congestion.
• The source will eventually notice the lack of acknowledgement
and will respond by slowing down instead of trying harder, since
it knows that lost packets are generally caused by congestion.
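The RED mechanics just described (a running average of the queue length, and early random drops once it crosses a threshold) might be sketched as follows; the thresholds, the averaging weight, and the linear drop-probability ramp between them are illustrative choices, not values from any standard.

```python
import random

def update_avg(avg, sample, weight=0.2):
    """Exponentially weighted running average of the instantaneous queue
    length, so short bursts do not immediately trigger drops."""
    return (1 - weight) * avg + weight * sample

def red_drop(avg_queue, min_th=5, max_th=15, max_p=0.1, rng=random.random):
    """RED-style decision for one arriving packet: never drop below min_th,
    always drop above max_th, and in between drop with a probability that
    rises linearly toward max_p as the average queue grows."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return rng() < p          # random victim: can't tell which source is at fault

assert red_drop(2) is False        # short queue: no early drops
assert red_drop(20) is True        # past max_th: drop unconditionally
assert update_avg(10.0, 0.0) == 8.0  # average decays toward the new sample
```

Injecting a deterministic `rng` (as the asserts could with a lambda) makes the probabilistic middle region testable.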
Further:
• A router drowning in packets can just pick packets at random to
drop, but usually it can do better than that.
• For many applications, some packets are more important than
others. For example, in file transfer, an old packet is worth more
than a new one. In contrast, for multimedia, a new packet is more
important than an old one.
• Hence, prioritization can be considered.
Jitter Control
• The variation (i.e., standard deviation) in the packet arrival
times is called jitter.
• Jitter is shown in Fig.
How Jitter is bounded?
• The jitter can be bounded by computing the expected transit
time for each hop along the path.
• When a packet arrives at a router, the router checks to see how
much the packet is behind or ahead of its schedule.
• This information is stored in the packet and updated at each
hop.
• If the packet is ahead of schedule, it is held just long enough to
get it back on schedule.
• If it is behind schedule, the router tries to get it out the door
quickly.
Figure. (a) High jitter (b) Low jitter
• In fact, the algorithm for determining which of several packets
competing for an output line should go next can always choose the
packet furthest behind in its schedule.
• In this way, packets that are ahead of schedule get slowed down
and packets that are behind schedule get speeded up, in both cases
reducing the amount of jitter.
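The selection rule above (always send the packet furthest behind its schedule) is a one-liner; the packet representation with an `expected` arrival time per hop is our own illustrative model.

```python
def next_to_send(packets, now):
    """Among packets competing for the output line, pick the one furthest
    behind schedule, i.e. with the largest lateness (now - expected)."""
    return max(packets, key=lambda p: now - p["expected"])

queue = [
    {"id": "a", "expected": 12},  # ahead of schedule (due 2 ticks from now)
    {"id": "b", "expected": 7},   # 3 ticks behind schedule
    {"id": "c", "expected": 9},   # 1 tick behind schedule
]
# The most-delayed packet goes first, pulling it back toward its schedule.
assert next_to_send(queue, now=10)["id"] == "b"
```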
Elimination of Jitter:
• In some applications, such as video on demand, jitter can be
eliminated by buffering at the receiver and then fetching data for
display from the buffer instead of from the network in real time.
• However, for other applications, especially those that require real-
time interaction between people such as Internet telephony and
videoconferencing, the delay inherent in buffering is not
acceptable.
Traffic Shaping
• Before the network can make QoS guarantees, it must know what
traffic is being guaranteed.
• However, traffic in data networks is bursty. It typically arrives at
non-uniform rates as the traffic rate varies (e.g.,
videoconferencing with compression), users interact with
applications (e.g., browsing a new Web page), and computers
switch between tasks.
• Bursts of traffic are more difficult to handle than constant-rate
traffic because they can fill buffers and cause packets to be lost.
• Traffic shaping is a mechanism to control the amount and the rate
of the traffic sent to the network. Two techniques can shape traffic:
leaky bucket and token bucket.
• When a flow is set up, the user and the subnet (i.e., the customer
and the provider) agree on a certain traffic pattern (i.e., shape) for
that flow.  Sometimes this is called a service level agreement. 
• As long as the customer fulfills her part of the bargain and only
sends packets according to the agreed-on contract, the provider
promises to deliver them all in a timely fashion.
• Traffic shaping reduces congestion and thus helps the network
live up to its promise.
• Monitoring a traffic flow is called traffic policing.
• Agreeing to a traffic shape and policing it afterward are easier
with virtual-circuit subnets than with datagram subnets.
• However, even with datagram subnets, the same ideas can be
applied to transport layer connections.
Leaky Bucket Algorithm
• Try to imagine a bucket with a small hole in the bottom, as
illustrated in Fig.(b).
• No matter the rate at which water enters the bucket, the outflow is
at a constant rate, R, when there is any water in the bucket and zero
when the bucket is empty.
• The input rate can vary, but the output rate remains constant.
• Also, once the bucket is full to capacity B, any additional water
entering it spills over the sides and is lost.
• Conceptually, each host is connected to the network by an interface
containing a leaky bucket.
• If a packet arrives when the bucket is full, the packet must either be
queued until enough water leaks out to hold it or be discarded.
• This technique is called the leaky bucket algorithm.
• A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic
by averaging the data rate.
Leaky Bucket Algorithm
• In the fig., we assume that the network has committed a bandwidth
of 3 Mbps for a host. And, the host sends a burst of data at a rate of
12 Mbps for 2 s, for a total of 24 Mbits of data. The host is silent
for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6
Mbits of data. In all, the host has sent 30 Mbits of data in 10 s.
• The leaky bucket smoothes the traffic by sending out data at a rate
of 3 Mbps during the same 10 s.
• We can also see that the leaky bucket may prevent congestion.
• A simple leaky bucket implementation is shown in Fig.
• A FIFO queue holds the packets.
• The process removes a fixed number of packets from the
queue at each tick of the clock.
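A packet-counting version of this FIFO implementation, under the assumption that all packets are the same size (otherwise the process would count bytes rather than packets):

```python
from collections import deque

def leaky_bucket(arrivals, capacity, rate):
    """arrivals[t] = packets arriving at tick t.  Each tick, arrivals enter
    the FIFO (overflow beyond `capacity` is dropped) and up to `rate`
    packets leave.  Returns (packets sent per tick, packets dropped)."""
    queue, sent, dropped = deque(), [], 0
    for a in arrivals:
        for _ in range(a):
            if len(queue) < capacity:
                queue.append(1)   # packet fits in the bucket
            else:
                dropped += 1      # bucket full: the packet spills over
        out = min(rate, len(queue))
        for _ in range(out):      # constant outflow while water remains
            queue.popleft()
        sent.append(out)
    return sent, dropped

# A burst of 5 packets is smoothed into a steady 2 packets per tick.
assert leaky_bucket([5, 0, 0, 0], capacity=10, rate=2) == ([2, 2, 1, 0], 0)
```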
Token Bucket Algorithm
• The leaky bucket is very restrictive. It does not credit an idle host.
For example, if a host is not sending for a while, its bucket
becomes empty. Now if the host has bursty data, the leaky bucket
allows only an average rate. The time when the host was idle is not
taken into account.
• On the other hand, the token bucket algorithm allows idle hosts to
accumulate credit for the future in the form of tokens.
• For each tick of the clock, the system sends n tokens to the bucket.
• The system removes one token for every cell (or byte) of data sent.
For example, if n is 100 and the host is idle for 100 ticks, the
bucket collects 10,000 tokens. Now the host can consume all these
tokens in one tick with 10,000 cells, or the host takes 1000 ticks
with 10 cells per tick.
• In other words, the host can send bursty data as long as the bucket
is not empty.
• Fig.(C) shows the idea.
• The token bucket can easily be implemented with a counter.
• The counter is initialized to zero. Each time a token is added, the
counter is incremented by 1. Each time a unit of data is sent, the
counter is decremented by 1. When the counter is zero, the host
cannot send data.
• The token bucket allows bursty traffic at a regulated maximum
rate.
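The counter implementation above can be sketched directly; the cap on accumulated tokens (`burst`) bounds how large a burst the regulated maximum can ever be, and the tick-based loop is our own simplification.

```python
def token_bucket(arrivals, n, burst):
    """n tokens are added per tick (capped at `burst`); each unit of data
    sent consumes one token, and data waits while the counter is zero.
    Returns the number of units actually sent at each tick."""
    tokens, backlog, sent = 0, 0, []
    for a in arrivals:
        tokens = min(tokens + n, burst)  # idle ticks accumulate credit
        backlog += a
        out = min(backlog, tokens)       # send only what tokens allow
        tokens -= out
        backlog -= out
        sent.append(out)
    return sent

# Three idle ticks bank credit, so a burst of 8 units goes out in one tick
# even though the long-term rate is only n = 2 per tick.
assert token_bucket([0, 0, 0, 9], n=2, burst=10) == [0, 0, 0, 8]
```

Contrast this with the leaky bucket, whose output in the same scenario could never exceed its fixed rate in any tick.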
Leaky and Token Buckets
• Leaky and token buckets limit the long-term rate of a flow but
allow short-term bursts up to a maximum regulated length to
pass through unaltered and without suffering any artificial
delays.
• The two techniques can be combined to credit an idle host and at
the same time regulate the traffic.
• The leaky bucket is applied after the token bucket; the rate of
the leaky bucket needs to be higher than the rate of tokens
dropped into the bucket.
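Chaining the two stages as described (token bucket first, leaky bucket after, with a leak rate above the token rate) might look like this; the function and its parameters are illustrative.

```python
def shape(arrivals, n, burst, leak_rate):
    """Token bucket admits traffic on accumulated credit; a leaky bucket
    then caps the instantaneous output rate (leak_rate should exceed the
    token rate n, or the leaky stage would dominate)."""
    tokens, backlog, queue, sent = 0, 0, 0, []
    for a in arrivals:
        tokens = min(tokens + n, burst)
        backlog += a
        admitted = min(backlog, tokens)  # token-bucket stage: credit a burst in
        tokens -= admitted
        backlog -= admitted
        queue += admitted
        out = min(queue, leak_rate)      # leaky-bucket stage: smooth it out
        queue -= out
        sent.append(out)
    return sent

# Credit lets 6 units into the second stage at tick 2,
# but the leak caps the output of that tick at 4.
assert shape([0, 0, 10], n=2, burst=10, leak_rate=4) == [0, 0, 4]
```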