
126 Computer Communication Network

Unit 7: Network Layer


Structure
7.1 Introduction
7.2 Network Layer Functions
7.3 Switching Concepts
7.3.1 Circuit Switching Networks
7.3.2 Packet Switching
7.3.3 Message Switching
7.3.4 Cell Switching (ATM)
7.4 Routing
7.4.1 Routing Algorithms
7.4.2 Non-adaptive algorithms
7.4.3 Adaptive Algorithms
7.5 Concept of Congestion Control
7.5.1 General Principles of Congestion Control
7.5.2 Traffic Management
7.5.3 Congestion Prevention Policies
7.5.4 Traffic Shaping
7.5.5 Leaky Bucket
7.5.6 Token Bucket Algorithm
7.6 Queuing Disciplines
7.6.1 FIFO
7.6.2 Fair Queuing
7.7 Congestion Avoidance
7.7.1 Decbit
7.7.2 Random Early Detection (RED)
7.8 Internetworking Concepts
7.8.1 History of Internetworking
7.8.2 Internetworking Challenges
7.8.3 Internet
7.8.4 Routing in the Internetwork
7.8.5 Virtual Circuits
7.8.6 Connectionless Internetworking
7.8.7 Fragmentation
7.9 IP (Internet Protocol)
7.9.1 Unreliable Connection Delivery
7.9.2 Datagrams

Amity Directorate of Distance & Online Education


7.9.3 IP Addresses
7.9.4 Routing IP Datagrams Notes
7.10 ICMP
7.11 Summary
7.12 Check Your Progress
7.13 Questions and Exercises
7.14 Key Terms
7.15 Further Readings

Objectives
After studying this unit, you should be able to:
 Identify the functions of network layer
 Explain the concept of switching
 Describe circuit switching networks
 Explain the concept of packet switching
 Discuss the concept of routing
 Know about various routing algorithms
 Explain the concept of congestion control
 Describe traffic shaping approach
 Explain queuing disciplines such as FIFO and FQ
 Identify the techniques used for congestion avoidance
 Explain internetworking concepts
 Describe the concept of Internet Protocol (IP)
 Explain datagrams in Internet Protocol (IP)

7.1 Introduction
In this lesson, you will study the functions of network layer. The network layer deals with
forwarding packets from the source node to the destination node using different routes.
Hence, the network layer transports traffic between devices that are not locally
attached. In doing so, it controls the operation of the subnet, which involves routing of
the packets from the source to destination. This lesson will cover various switching
concepts. You will also learn about the concept of routing.
The network layer is the lowest layer that deals with end-to-end transmission. Its main aim is to permit end systems, connected to various networks, to exchange information via intermediate systems known as routers.

7.2 Network Layer Functions


In this section, you will learn about the network layer functions. The function of the network layer is to transport packets from the sending host to the receiving host across an internetwork. Network layer protocols exist in every host and router.
By means of the network layer, end devices are allowed to exchange data across
the network. In order to accomplish this end-to-end transport, the network layer makes
use of four basic processes:

Addressing end devices: End devices must be configured with a unique IP address for identification on the network. An end device with a configured IP address is known as a host.
Encapsulation: The network layer gets a protocol data unit (PDU) from the
transport layer. In a process known as encapsulation, the network layer adds IP header
information, such as the IP address of the source (sending) and destination (receiving)
hosts. After header information is added to the PDU, the PDU is known as a packet.
Routing: The network layer provides services to direct packets to a destination host on another network. To travel to other networks, the packet must be processed by a router. The router’s role is to choose paths for packets and direct them toward the destination host, in a process called routing. A packet may cross various intermediary devices before reaching the destination host. Each leg of the journey that the packet takes to reach the destination host is known as a hop.
De-encapsulation: When the packet arrives at the network layer of the
destination host, the host verifies the packet’s IP header. If the destination IP address
within the header matches its own IP address, the IP header is detached from the
packet. This process of detaching headers from lower layers is called de-encapsulation.
After the packet is de-encapsulated by the network layer, the resulting Layer 4 PDU is
passed up to the suitable service at the transport layer.
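The four processes above can be sketched in a few lines of Python. This is only a toy model: the dictionary fields (src, dst, payload) stand in for a real IP header, whose actual layout is far richer.

```python
# A minimal sketch of network-layer encapsulation and de-encapsulation.
# The field names (src, dst, payload) are illustrative, not a real IP header layout.

def encapsulate(segment, src_ip, dst_ip):
    """Wrap a transport-layer PDU in a network-layer header, producing a packet."""
    return {"src": src_ip, "dst": dst_ip, "payload": segment}

def de_encapsulate(packet, my_ip):
    """If the packet is addressed to us, strip the header and pass the
    Layer 4 PDU up to the transport layer; otherwise discard it."""
    if packet["dst"] == my_ip:
        return packet["payload"]
    return None

pkt = encapsulate("TCP-segment", "192.0.2.10", "192.0.2.99")
print(de_encapsulate(pkt, "192.0.2.99"))   # TCP-segment
print(de_encapsulate(pkt, "192.0.2.50"))   # None (not addressed to this host)
```

Only a host whose address matches the destination field strips the header; every other device leaves the packet intact (or forwards it).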
The network layer design issues include the service provided to the transport layer,
routing of packets through the subnet, congestion control, and connection of multiple
networks together, etc. Now you will understand the design issues of network layer:
It is the purpose of the network layer to provide seamless services to the different users connected to different networks; therefore, the services provided should be independent of the underlying technology. In other words, users availing the service need not be concerned with the physical implementation of the network that transmits their messages. The network layer should also provide interoperability among the variety of networks in operation, supplied by different vendors.
The transport layer at the host machine need not know how the communication link with the destination machine is established. Hence, it should be shielded from the number, type and topologies of the subnets that it uses, and there should be a uniform addressing scheme for network addresses. Three important functions of the network layer are:
 Path determination: It determines the route taken by packets from source to
destination (Routing algorithms)
 Forwarding: It moves packets from a router’s input to a suitable output of the router.
 Call setup: Some networks need router call setup along path before data flows (for
example, MPLS)

7.3 Switching Concepts


This section emphasises the switching concepts. A single Ethernet can connect at most 1024 hosts within a span of only 1500 meters; LAN technologies alone are therefore not sufficient for building a global network that interconnects hosts on other networks. In a telephone exchange, a switch provides a connection between the called and calling party without providing a direct line-to-line connection between them. The telephone network uses circuit switching, which provides a dedicated channel between the called and calling party, while computer networks use packet switching, which does not provide a dedicated channel between the hosts. Figure 7.1 illustrates the switching technique, in which any computer may exchange information with any other computer. A switch is used to interconnect different hosts through its several inputs and outputs. The functions of a switch are store-and-forward, routing and congestion control, which are required to accomplish interconnection. Switching techniques enable us to build a MAN, a WAN or the Internet.



Figure 7.1: Switching Techniques


7.3.1 Circuit Switching Networks
Here you must understand that the circuit switching technique provides a dedicated physical communication path from the source to the destination terminal within a network. A dedicated channel is therefore created, maintained and terminated for each communication session. A circuit switching session thus comprises three phases: circuit establishment, data transfer and circuit disconnect. Prior to the data transfer, a dedicated connection is established; at the end of the data transfer, the connection is broken. In this manner, circuit switching provides a fixed data rate channel between the source and destination devices. The circuit switching technique is at a disadvantage compared with packet switching because bandwidth is wasted whenever there is no data to transmit at a given moment, and setting up the connection also takes time. Circuit switching involves datagram and data-stream transmissions. Datagram transmissions have frames that are individually addressed; data-stream transmissions have no frames, only a data stream for which address checking occurs once. The routing may be either static or dynamic. Figure 7.2 shows the alternate dedicated routes available for the transfer of data from one host to another.


Figure 7.2: Alternate Dedicated Route for a Connection from A to B


Example: Integrated Services Digital Network (ISDN) is an example of a circuit-
switched WAN technology.
Each user has sole access to a circuit (functionally equivalent to a pair of copper wires) during network use. The following example shows a circuit-switched connection.
Example: Consider communication between two points A and D in a network. The connection between A and D is provided using (shared) links between two other pieces of equipment, B and C. Figure 7.3 shows the connection between two systems A and D formed from 3 links.


Figure 7.3: A Connection between Two Systems A & D Formed from 3 Links
Network use begins with a connection phase, during which a circuit is set up between source and destination, and ends with a disconnect phase. Figure 7.4 illustrates these phases, with associated timings:

Figure 7.4: A Circuit Switched Connection between A and D


After a user requests a circuit, the desired destination address must be
communicated to the local switching node (B). In case of a telephony network, this is
attained by dialing the number.
Node B receives the connection request and identifies a path to the destination (D)
through an intermediate node (C). This is followed by a circuit connection phase
managed by the switching nodes and started by allocating a free circuit to C (link BC),
followed by transmission of a call request signal from node B to node C. In turn, node C
allocates a link (CD) and the request is then passed to node D after a similar delay.
The circuit is then established and may be utilized. While it is available for use,
resources (i.e. in the intermediate equipment at B and C) and capacity on the links between the equipment are dedicated to the use of the circuit.
After the connection is completed, a signal confirming circuit establishment is returned. This flows directly back to node A with no search delays, since the circuit has been established. The transfer of the data in the message then begins. After the data transfer, the circuit is disconnected; a simple disconnect phase follows the end of the data transmission.
Delays for setting up a circuit connection can be high, particularly if ordinary
telephone equipment is utilized. Call setup time with conventional equipment is usually
on the order of 5 to 25 seconds after completion of dialing. Newer fast circuit switching techniques can reduce these delays. Trade-offs between circuit switching and other types of switching depend strongly on switching times.

7.3.2 Packet Switching
A packet switched data network divides data into message units, called packets, at the source host before transmitting them to the destination host. The packets have varying length and include the source and destination addresses and the necessary control information. In a switched network, the switching nodes receive the packets and store them briefly before forwarding them to the next node. The switching node examines the destination address contained in each arriving packet.
Each switching node maintains a routing directory, in the form of a table, to determine the outgoing links based on the destination addresses of the received packets. The packets finally reach the destination node and are forwarded to the destination device. The destination device collects all the packets of the same data reaching it from different routes and arranges them in sequence according to the sequence number contained in each packet.

(In the figure, the message is broken into smaller packets at the sending side, and the packets are reassembled into the message at the receiving side.)

Figure 7.5: Packet Switched Network


Unlike circuit switching, packet switching does not involve a dedicated channel for the transfer of information; hence it is prone to errors and to damaged or lost packets on the route from the source to the destination device. Therefore, error and flow control procedures are applied on each link by the switching nodes. The advantages of packet switching are channel efficiency, no busy conditions and priority data transmission.
You must understand that datagram packet switching treats each packet independently, and the packets may take any route to the destination irrespective of their sequence numbers. The destination device reassembles the packets to reproduce the message and recovers any missing packets. Packet switching enables the same information to be transmitted to more than one receiver at the same time, and it enables communication between terminals that have different transfer rates and different types of interface.
The following example shows packet-switched communication.
Example: Consider communication between two points A and D in a network. The connection between A and D is provided by circuits which are shared by means of packet switching.

Figure 7.6: Communication between A and D using Circuits which are Shared using Packet Switching


(The message in this case has been broken into three parts labeled 1-3)

Figure 7.7: Packet-Switched Communication between Systems A and D


The advantages of packet switching are discussed below.
 The first and most significant advantage is that, since packets are short, the communication links between the nodes are allocated to a single message only for the short period needed to transmit each packet. Longer messages require a series of packets to be sent, but do not require the link to be dedicated between the transmissions of successive packets. The implication is that packets belonging to other messages may be sent between the packets of the message being sent from A to D. This offers a much fairer sharing of the resources of each of the links.
 Another advantage of packet switching is called "pipelining", visible in figure 7.7. While packet 1 is sent from B to C, packet 2 is sent from A to B; packet 1 is sent from C to D while packet 2 is sent from B to C and packet 3 is sent from A to B, and so forth. This simultaneous use of communications links represents a gain in efficiency: the total delay for transmission across a packet network may be considerably less than for message switching, in spite of the inclusion of a header in each packet rather than in each message.
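The pipelining gain can be sketched with a short delay calculation. This is a simplified model that ignores propagation and queuing delays; the link rate and message size below are arbitrary illustrative numbers.

```python
# Store-and-forward delay with and without packet pipelining,
# over identical links of equal rate (propagation/queuing ignored).

def message_switching_delay(msg_bits, rate_bps, hops):
    # The whole message is received and retransmitted at every node.
    return hops * msg_bits / rate_bps

def packet_switching_delay(msg_bits, rate_bps, hops, n_packets, header_bits=0):
    # Packets are pipelined: while one packet crosses link i,
    # the next packet can already cross link i-1.
    pkt_bits = msg_bits / n_packets + header_bits
    return (hops + n_packets - 1) * pkt_bits / rate_bps

# A -> B -> C -> D: 3 links, message split into 3 packets as in Figure 7.7
m = message_switching_delay(3000, 1000, 3)    # 9.0 s
p = packet_switching_delay(3000, 1000, 3, 3)  # 5.0 s
print(m, p)
```

Even with this tiny example, pipelining cuts the total delay from 9 s to 5 s; adding a realistic per-packet header shrinks but does not erase the gain.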
The packets are handled using datagram, virtual circuit and permanent virtual circuit methods.
Datagram: A datagram is a self-contained packet of data carrying sufficient information to be routed from the source to the destination device over any arbitrary route. The destination device collects and reassembles the packets in order to reconstruct the information. If some packets are lost, the receiving device requests the missing packets.
Virtual circuit and permanent virtual circuit: These create a virtual connection between the source and destination devices, in contrast to circuit switching, which creates a dedicated physical connection. The virtual connection so created may be either connection oriented or connectionless.
Example: Datagram is an example of connectionless communication.

Packet switching uses two types of virtual connections. They are Switched Virtual
Circuit (SVC) or Virtual Circuit (VC) and Permanent Virtual Circuit (PVC).
Virtual circuit (VC) Connection: As in circuit switching, the route to be used for sending data to the destination device is selected before communication begins; the links are released as soon as the packets have been transferred.
Permanent Virtual Circuit (PVC): Like a leased line, a PVC is a virtual circuit used to establish a long-term connection between the sending and receiving ends, for users who permanently wish to be connected over a logical channel. It eliminates the need for repeated connection set-up and termination. This is provided by the switching nodes, which permanently store the information needed to transfer packets between two or more devices.

Connectionless and Connection Oriented Communication


It is important to note that two types of connection are used to define the interface between the sending or receiving devices and the switching node:
Connectionless: This does not involve any pre-determined route from source to destination. The packets are transmitted independently and take any available route to reach the destination; the sequence of the packets is not guaranteed. Example: A datagram is an example of this kind of interface between the sending or receiving devices and the switching nodes.
Connection-oriented: The source or destination device establishes a logical connection with the switching node on request before transmission of data. Example: A virtual circuit is an example of this kind of communication with the switching node.
In other words, a path is identified on which all packets belonging to that connection
are sent sequentially. The network ensures the delivery of all packets in sequence.

7.3.3 Message Switching


In message switching, there is no need for a connection to be established all the way from source to destination. In figure 7.8, you can see the communication between Tx (the sending or transmitting device) and Rx (the receiving device) via a number of links: Tx to Tx1, Tx1 to Tx2, Tx2 to Tx3, and Tx3 to Rx.



Figure 7.8: A Communication between Two Systems Tx and Rx through 3 Links
The switching nodes like Tx1, Tx2 and so on receive the message, store it and
forward the message to the adjacent message switching node after creating a
connection with the adjacent message switch. Message switching is also known as
store-and-forward switching since the messages are stored at intermediate nodes en
route to their destinations. The difference between packet switching and message switching lies in the size of the units transferred: in packet switching, a packet is very short compared with a message in message switching. Because packets are short, each takes less time to reach the destination, and out-of-order packets can be reassembled without requiring a dedicated connection. Packet switching thus allows packets belonging to other messages to be sent in between the packets of a given message, and it uses pipelining to create a continuous flow of packets from the source to the destination device via intermediate switching nodes.
The following example shows message-switched communication. Example: Consider a connection between the users A and D shown in figure 7.9, which is represented by a series of links (AB, BC, and CD).


Figure 7.9: A Connection between Two Systems A & D Formed from 3 Links
For instance, when a telex (or email) message is sent from A to D, it first passes
over a local connection (AB). It is then passed at some later time to C (via link BC), and
from there to the destination (via link CD). At every message switch, the received
message is stored, and a connection is subsequently made to deliver the message to
the neighboring message switch.

Figure 7.10: The use of Message Switching to Communicate between A and D


Figure 7.10 illustrates message switching; for simplicity, only one message is shown. As the figure indicates, a complete message is sent from node
A to node B when the link interconnecting them becomes available. As the message
may be competing with other messages for access to facilities, a queuing delay may be
incurred while waiting for the link to become available. The message is stored at B until the next link becomes available, with another queuing delay before it can be forwarded. This process repeats until the message reaches its destination.
In packet switching, by contrast, the links between the source, intermediate and destination nodes are used to transmit packets simultaneously. This enhances channel efficiency and reduces the total delay for transmission across a packet network compared with message switching.

7.3.4 Cell Switching (ATM)


You must note that cell switching, associated with Asynchronous Transfer Mode (ATM), is a high speed switching technology designed to overcome the speed problems of real-time applications. Cell switching uses a connection-oriented packet-switched network; in cell switching, setting up a connection is known as signalling. Cell switching uses fixed-length packets of 53 bytes, of which 5 bytes are reserved for the header, whereas the packet switching technique uses variable-length packets. Like packet switching, the cell switching technique also divides the message into smaller packets, but of fixed length. The advantages are high performance, a common LAN/WAN architecture, multimedia support, dynamic bandwidth and scalability. High performance is achieved through the use of hardware switches. Cell switching also possesses the connection-oriented service features of circuit switching. The connection

oriented virtual circuits allocate specified resources to the different streams of traffic for each session.
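The fixed 53-byte cell (5-byte header, 48-byte payload) can be illustrated with a short segmentation sketch. Real ATM adaptation layers handle padding and length fields more carefully; this only shows the fixed cell size.

```python
CELL_SIZE = 53                          # bytes per ATM cell
HEADER_SIZE = 5                         # bytes of header in every cell
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48 bytes of payload per cell

def to_cells(message: bytes):
    """Split a message into fixed-length 48-byte payloads, padding the
    last cell with zero bytes (a simplification of the real AAL rules)."""
    cells = []
    for i in range(0, len(message), PAYLOAD_SIZE):
        chunk = message[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        cells.append(chunk)
    return cells

cells = to_cells(b"x" * 100)   # 100 bytes -> 3 cells (48 + 48 + 4 padded to 48)
print(len(cells))              # 3
```

Because every cell has the same length, hardware switches can process cells at a fixed rate, which is the basis of the high performance claimed above.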
7.4 Routing
In this section, you will understand routing. Routing refers to the process of intelligently selecting the shortest and most reliable path over which to send data to its ultimate destination. The IP routing protocol makes a distinction between hosts and gateways. A host is the end system to which data is ultimately delivered. An IP gateway, on the other hand, is the router that accomplishes the act of routing data between two networks. A router can be a specialized device supporting multiple interfaces, each connected to a different network as shown in Figure 7.11, or a computer with multiple interfaces (commonly called a multi-homed host) with routing services running in that computer.

Figure 7.11: IP Router Providing Services between two Networks


By OSI norms and standards, a gateway is not only a router but also a connectivity device that provides translation services between two completely dissimilar networks.
Example: A gateway (not a router) is needed to connect a TCP/IP network to an
AppleTalk network.
It is important for you to know that both hosts and IP routers (gateways) perform routing functions and, therefore, compatible implementations of the IP protocol are
necessary at both ends. In other words, datagrams are submitted either to a host that
shares the same physical network with the originating host or to a default gateway for
further routing across the network. As such, IP on a host is responsible for routing
packets that originate on this host only, fulfilling local needs for routing. A gateway, on
the other hand, is responsible for routing all traffic regardless of its originator (as long as
the TTL field is valid).
A default gateway is a router that a host is configured to trust for routing traffic to
remote systems across the network. However, the trusted router must be attached to
the same network as the trusting host. A router on a remote network cannot be used for
providing the functionality of the default gateway.
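The local-versus-remote decision described above can be sketched with Python's standard ipaddress module. The addresses below are illustrative, and the sketch only models the default-gateway case, not a full routing table.

```python
import ipaddress

def next_hop(dst, my_addr_with_prefix, default_gateway):
    """Deliver directly if dst is on our own network; otherwise hand the
    packet to the default gateway, which must itself be locally attached."""
    iface = ipaddress.ip_interface(my_addr_with_prefix)
    gw = ipaddress.ip_address(default_gateway)
    if gw not in iface.network:
        raise ValueError("default gateway must be on the same network")
    dst_ip = ipaddress.ip_address(dst)
    return "direct" if dst_ip in iface.network else default_gateway

print(next_hop("192.168.1.7", "192.168.1.20/24", "192.168.1.1"))  # direct
print(next_hop("10.0.0.5", "192.168.1.20/24", "192.168.1.1"))     # 192.168.1.1
```

The check that the gateway lies inside the host's own network mirrors the rule in the text: a router on a remote network cannot serve as the default gateway.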

7.4.1 Routing Algorithms


The routing algorithm that runs on the network layer decides which output line an
incoming packet should be transmitted on. A routing table that is built in every router
tells which outgoing line should be used for each possible destination router. A router
looks up the outgoing communication line to use in the routing table after receiving a
datagram that contains the destination address. Thereafter, it sends the packet on its
way to the destination. Thus, a major role of the network layer is to route packets from the source to the destination machine. The algorithms that choose the possible routes, and the data structures that they use, are a major area of routing
algorithm. The desirable properties of the routing algorithms are correctness, simplicity,
robustness, stability, fairness and optimality.
Hence, the routing algorithm is defined as the part of the network layer software that decides which output line an incoming packet should be transmitted on. If the subnet uses datagrams internally, this decision is made anew for every arriving data packet, since the best route may have changed since the last time. If the subnet uses virtual circuits, the decision is made once per session.
Figure 7.12 shows the routing table for router A (address 138.25.10.1). This table lists destination addresses for each local network, not for each destination host. It also includes the next hop (the address of the next router) to which the packet must be transferred. If no next hop is listed, the destination network is directly connected to the router.
When router A receives a packet, it consults this table to perform routing. Example: If a packet is addressed to a host on network 138.25.40.0, router A sends the packet to router C (138.25.30.1). Router C has a similar routing table with which it can perform routing.
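Router A's lookup can be sketched as follows. The table is reconstructed from the two addresses mentioned in the text, and the lookup is a simple exact match on the destination network; real routers perform longest-prefix matching instead.

```python
# A sketch of router A's table from Figure 7.12; the full contents are
# illustrative reconstructions, keyed by destination network.
routing_table = {
    "138.25.10.0": None,           # directly connected: no next hop
    "138.25.40.0": "138.25.30.1",  # reachable via router C
}

def route(dst_network):
    """Return the forwarding decision for a destination network."""
    next_hop = routing_table[dst_network]
    return "deliver directly" if next_hop is None else f"forward to {next_hop}"

print(route("138.25.40.0"))   # forward to 138.25.30.1
print(route("138.25.10.0"))   # deliver directly
```

An absent next hop (None) encodes the "directly connected" case described in the text.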

Figure 7.12: Routing Table


Routing plays a major role in the forwarding function. Routing algorithms are grouped into two classes: non-adaptive and adaptive algorithms.

7.4.2 Non-adaptive Algorithms


You must be aware that non-adaptive algorithms are independent of the current traffic volume and topology. The route over which a datagram is sent is computed offline, in advance, and downloaded to the routers when the network is booted. Thus, routing information is manually specified, giving each router fixed route information; any change of route must be made manually.

Optimality Principle
The optimality principle states that if router A is on the optimal path from router B to router C, then the optimal path from A to C falls along the same route. Consequently, the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree.

Shortest Path Routing


The shortest path routing is simple and easy to understand. In this method, a graph of
the subnet is built where each node of the graph represents a router and each arc
represents a communication link. The shortest path algorithm chooses a shortest route
between a given pair of routers on the graph. To measure path length, metrics such as the number of hops, the geographical distance, or the mean queuing and transmission delay at the routers are used. The labels on the arcs are computed as a function of the distance, bandwidth, average traffic, communication cost, mean queue length, measured delay, and so on. A number of algorithms exist for computing the shortest path between two nodes of a graph.
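One such algorithm is Dijkstra's. Below is a compact sketch; the graph and its arc costs are made up for illustration, standing in for the metrics (hops, distance, delay) described above.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over a weighted graph given as
    {node: {neighbour: cost}}. Arc costs play the role of the labels
    (distance, delay, etc.) on the subnet graph."""
    dist = {src: 0}
    heap = [(0, src)]                       # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):   # stale heap entry, skip
            continue
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w             # found a shorter route to v
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
print(dijkstra(g, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Note that the direct arc A-C costs 5, yet the algorithm finds the cheaper route A-B-C of cost 3, exactly the behaviour a shortest-path router needs.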

Flooding
You must note that flooding is another static algorithm, in which every incoming packet is forwarded on every outgoing line except the one on which it arrived. Flooding thus generates a potentially infinite number of duplicate packets. To control the number of packets generated, a hop counter is used: a counter in the header of each packet is decremented at each hop, and the packet is discarded when the counter reaches zero. If the source host knows the length of the path from source to destination, it initializes the hop counter to that length; if not, it initializes the counter to the full diameter of the subnet.
An alternative is to keep track of the packets that have already been flooded, so that they are not sent out a second time. The source router puts a sequence number in each packet it receives from its hosts. Each router then keeps a list, per source router, of the sequence numbers originating at that source that it has already seen, and discards any incoming packet that is on the list. Each list is augmented by a counter, k, indicating that all sequence numbers through k have been seen; this prevents the list from growing unnecessarily.
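The hop-counter variant can be sketched as a toy simulation. The three-node topology is hypothetical, and for simplicity each node refuses only to send a packet back out the line it arrived on, as described above.

```python
# A toy flood with a hop counter: each forwarding step consumes one hop,
# and the packet is discarded when the counter reaches zero.

def flood(topology, node, packet, hops, came_from=None, log=None):
    log = [] if log is None else log
    if hops == 0:                        # counter exhausted: discard
        return log
    for neighbour in topology[node]:
        if neighbour != came_from:       # never send back out the arrival line
            log.append((node, neighbour))
            flood(topology, neighbour, packet, hops - 1,
                  came_from=node, log=log)
    return log

net = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
sent = flood(net, "A", "hello", hops=2)
print(sent)   # [('A', 'B'), ('B', 'C'), ('A', 'C'), ('C', 'B')]
```

With hops=2 the flood stops after 4 transmissions; without the counter, the A-B-C triangle would circulate duplicates forever.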

Selective Flooding
Selective flooding, which is slightly more practical, is a variation of flooding. Every incoming packet is not forwarded on every line; instead, incoming packets are forwarded only on those lines going approximately in the right direction.

Flow-based Routing
Unlike the algorithms discussed above, which are based on topology only, flow-based routing takes into account both the topology and the load. In networks where the mean data flow between each pair of nodes is relatively stable and predictable, the flows can be analysed mathematically to optimize the routing. Given the capacity and average flow on a line, queuing theory makes it possible to compute the mean packet delay on that line; a flow-weighted average then yields the mean packet delay for the whole subnet. However, this technique requires this information to be known in advance.

7.4.3 Adaptive Algorithms


Adaptive algorithms are capable of changing their routing decisions to reflect changes
in the topology and the traffic. Routers automatically update routing information when
changes are made to the network configuration. It is convenient, as it does not involve
human intervention in case of changes to the network configuration. Its disadvantage,
however, is that the overhead required to send configuration change information can be
a heavy burden.

Distance Vector Routing


It is essential to know that distance vector routing comes under the category of dynamic routing; modern computer networks prefer dynamic routing algorithms to static ones. This algorithm, along with link state routing, is among the most popular. Distance vector protocols include RIP and the Interior Gateway Routing Protocol (IGRP).
In the distance vector algorithm, each router maintains a routing table and exchanges it with each of its neighbours so that their routing tables are updated. Each router will then merge the received routing tables with its own table, and then transmit
the merged table to its neighbours. This is shown in Figure 7.13. The exchange occurs dynamically after a fixed time interval by default, thus incurring significant link overhead.

Figure 7.13: Routing Method – Distance Vector Type


There are problems, however, such as:
 If routers exchange data every 90 seconds, for example, it can take on the order of 90 × 10 seconds for a router to detect a problem in a router several hops ahead, and the route cannot be changed during this period.
 Traffic increases, since routing information is continually exchanged.
 There is a limit on the maximum hop count (15 for RIP), and routing is not possible on networks where the number of hops exceeds this maximum.
 The cost metric is only the number of hops, so selecting the best path is difficult.
However, routing processing is simple, and it is used in small-scale networks in
which the points mentioned above are not a problem. Distance vector routing was used
in the ARPANET routing algorithm and was also used in the Internet under the name
RIP. It also found its uses in early versions of DECnet and Novell's IPX. AppleTalk and
Cisco routers use improved versions of distance vector protocols. In the improved
version, each router has a routing table indexed by, and containing one entry for, each
router in the subnet. This entry has two parts: the preferred outgoing line to use for the
destination, and an estimate of the time or distance to the destination. The metric used
may be the number of hops, the time delay in milliseconds, the total number of packets
queued along the path, or something similar.
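The table-merge step above can be sketched in a few lines. This is a minimal illustration of the Bellman-Ford relaxation at the heart of distance vector routing, not any particular protocol implementation; the table layout, router names, and costs are invented for the example.

```python
# Sketch of one distance-vector update step (illustrative table layout).
# Each entry maps destination -> (cost, next_hop); INF marks "unreachable".
INF = float("inf")

def merge(own_table, neighbour_table, neighbour, link_cost):
    """Merge a neighbour's advertised table into our own (Bellman-Ford relaxation)."""
    updated = dict(own_table)
    for dest, (cost, _) in neighbour_table.items():
        new_cost = cost + link_cost          # cost of reaching dest via this neighbour
        old_cost = updated.get(dest, (INF, None))[0]
        if new_cost < old_cost:              # better path found: adopt it
            updated[dest] = (new_cost, neighbour)
    return updated

# Router A knows B directly (cost 1); B advertises a route to C (cost 2).
table_a = {"B": (1, "B")}
table_b = {"C": (2, "C")}
table_a = merge(table_a, table_b, "B", 1)
print(table_a["C"])   # destination C becomes reachable via B at cost 3
```

A real protocol such as RIP must also handle route withdrawals, timeouts, and the counting-to-infinity problem, which this sketch omits.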

Link State Routing


You must understand the idea behind link state routing. Examples of link state
protocols are OSPF and IS-IS (Intermediate System to Intermediate System
Intra-Domain Routing Exchange Protocol). In a link state routing algorithm, each router
in the network learns the network topology and then creates a routing table based on
this topology. Each router sends information about its links (link state) to its
neighbours, who in turn propagate the information to their neighbours, and so on. This
occurs until all routers have built a topology of the network. Each router then prunes
the topology, with itself as the root, choosing the least-cost path to each router, and
builds a routing table based on the pruned topology, as shown in Figure 7.14.


Figure 7.14: Routing Method – Link State Type


The entire topology and the delays are measured and distributed to every router.
Then Dijkstra's algorithm is used to find the shortest path to every other router. In
link-state protocols, there is no restriction on the number of hops as in distance-vector
protocols, and they are aimed at relatively large networks. Example: Internet backbones.
The load on routers will be larger, however, since the processing is complex.
Briefly, link state routing involves each router:
 Discovering its neighbours and learning their network addresses,
 Measuring the delay or cost to each of its neighbours,
 Constructing a packet conveying all it has just learned,
 Sending this packet to all other routers, and
 Computing the shortest path to every other router.
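The final step, computing the shortest path to every other router, is typically done with Dijkstra's algorithm. Below is a minimal sketch with an invented three-router topology as input; real routers run this over the link-state database they have flooded to one another.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted graph.
    graph: {node: {neighbour: link_cost}} is an illustrative adjacency map."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry; a shorter path was found
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd            # relax the edge and revisit the neighbour
                heapq.heappush(heap, (nd, nbr))
    return dist

net = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
print(dijkstra(net, "A"))   # A reaches C via B at total cost 3, not directly at cost 4
```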

Hierarchical Routing
It is important to note that because of the global nature of the Internet and the
ever-growing size of networks, it becomes more difficult to centralize system
management and operation. For this reason, the system must be hierarchical,
organised into multiple levels with several groups connected with one another at each
level. The routers are divided into regions, with each router knowing all the details
about how to route packets within its own region but knowing nothing about the
internal structure of other regions. Therefore, hierarchical routing is commonly used for
such a system, as shown in Figure 7.15.

Figure 7.15: Hierarchical Routing


A set of networks interconnected by routers within a specific area using the same
routing protocol is called a domain. Two or more domains may be further combined to
form a higher-order domain. A router within a specific domain is called an intra-domain
router. A router connecting domains is called an inter-domain router. A network
composed of inter-domain routers is called a backbone. Each domain, which is also
called an operation domain, is a point where the system operation is divided among
plural organizations in charge of operation. Domains are determined according to the
territory occupied by each organisation.
You must understand that routing protocol in such an Internet system can be
broadly divided into two types:
1. Intra-domain routing
2. Inter-domain routing.
Each of these protocols is hierarchically organized. For communication within a
domain, only the former routing is used. However, both of them are used for
communication between two or more domains.
Two algorithms, Distance-Vector Protocol and Link-State Protocol, are available to
update contents of routing tables.

Routing for Mobile Hosts


With the growth of wireless communication, the number of laptops with mobile Internet
connections keeps increasing. Mobile hosts impose a different set of requirements for
routing packets to them. Generally, these requirements are met through two agents: a
foreign agent and a home agent. When a mobile host connects to a network, it picks up
a foreign agent advertisement packet or broadcasts a request for a foreign agent.
Consequently, a connection is set up between them, and the mobile host provides the
foreign agent with its home address and some other details relating to security.
Subsequently, the foreign agent contacts the mobile host's home agent and delivers the
message about the mobile host.
The home location agent maintains user information in both static and dynamic
form. The static information includes the International Mobile Subscriber Identity
(IMSI), account status, service subscription information, authentication key, options,
etc. The dynamic information is the current location area of the mobile subscriber, that
is, the identity of the currently serving foreign agent, which enables the routing of
mobile-terminated calls. As soon as a mobile user leaves its current location area, the
information in the home location agent is updated so that the mobile user can be
localized in the GSM network.
You must note that when a foreign agent contacts the home agent, the home
agent verifies the received information. If the verification succeeds, it permits the
foreign agent to proceed. The foreign agent then enters the mobile host into its routing
table. When packets for the mobile host arrive at its home network, the home agent
encapsulates them and redirects them to the foreign agent where the mobile host is
residing. Thereafter, the home agent returns the encapsulation data to the sending
router so that all subsequent packets are sent directly to the corresponding router (the
foreign agent).

Broadcast Routing
In broadcast routing, the source machine intends to send messages to many or all other
hosts. Example: Stock exchange reports, sports news like cricket match score, flights
schedules, etc.
Hence, sending the same message to many recipients is broadcasting, and an
algorithm doing so is called a broadcast algorithm. To accomplish the task, the
following methods have been proposed:
The simplest method is for the source machine to send a separate packet to each
destination machine. In doing so, the source machine needs to maintain the complete
list of the addresses of the destination machines. It also involves wastage of
bandwidth, and is one of the most undesirable methods.
Flooding: This method has the problem of generating duplicate packets and
therefore consumes too much bandwidth.
Multi-destination routing: Each packet carries either a complete list of
destinations or a bitmap indicating the desired destinations. At each router, the packet
is checked against every destination to determine the set of output lines needed. A new
copy of the packet is generated for each of those output lines, and each copy includes
only those destinations that are to use that line. As a result, the destination set is
partitioned among the lines, and after a sufficient number of hops every packet carries
only one destination address and can be treated as a normal packet.
The fourth method makes use of a spanning tree of the subnet. A spanning tree is a
subset of the subnet that includes all the routers but contains no loops. A router copies
an incoming broadcast packet onto all the spanning tree lines except the one it arrived
on. However, each router must know which of its lines belong to the spanning tree. With
link state routing this information is available, but with the distance vector method it is
not. This method makes excellent use of bandwidth, generating the minimum number
of packets necessary to complete the task.
The reverse-path-forwarding algorithm has the router check whether the line the
packet arrived on is the same one the router uses to send packets to the source
machine. If so, it forwards the packet on all other lines; otherwise it discards the packet
as a likely duplicate. In this case, the router does not need to know about spanning
trees.
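The reverse-path check described above can be sketched as follows. This is an illustration only; the line identifiers and the helper name are invented.

```python
def reverse_path_forward(arrival_line, unicast_line_to_source, all_lines):
    """Reverse-path forwarding check: flood a broadcast packet only if it arrived
    on the line this router itself uses to reach the source; otherwise discard."""
    if arrival_line != unicast_line_to_source:
        return []                                         # likely a duplicate: drop it
    return [l for l in all_lines if l != arrival_line]    # copy to every other line

# The packet arrives on line 0, which is also our unicast route back to the source.
print(reverse_path_forward(0, 0, [0, 1, 2]))   # forwarded on lines 1 and 2
print(reverse_path_forward(2, 0, [0, 1, 2]))   # wrong arrival line: discarded
```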

Multicast Routing
It is important to note that sometimes there are groups whose members work together
and exchange information. Sending information to such well-defined groups, large in
membership but small compared with the network as a whole, is called multicasting,
and the routing algorithm making it possible is called multicast routing. If the group is
small, sending messages point to point will suffice, but if the group is large, point-to-point
transfer is inefficient. Broadcasting will not be efficient either, because the messages
may not interest all recipients and may also be confidential.
Multicast routing requires an up-to-date list of all the processes belonging to a
group, held by the source or host machine. This is accomplished either by routers
periodically querying their hosts to learn about new group members, or by the hosts
informing their routers about changes in group membership.
In multicast routing, each router computes a spanning tree covering all other
routers in the subnet. On receipt of a multicast packet for a group, a router examines
its spanning tree and prunes it by removing all lines that do not lead to hosts that are
members of the group.
With link state routing, pruning is straightforward because each router is aware of
the complete subnet topology, including which hosts belong to which groups. The
spanning tree is pruned starting at the end of each path and working towards the root,
removing all routers that do not lead to members of the group.
In distance vector routing, the reverse path forwarding algorithm is followed. In
this case, when a router that has neither a host in a group nor a connection to other
routers receives a multicast message for that group, it responds with a PRUNE
message, telling the sender not to forward it any more multicasts for that group. When
a router with no group members among its own hosts has received such messages on
all its lines, it also responds with a PRUNE message. In this way, the subnet is
recursively pruned.
The major disadvantage of this algorithm is that it scales inefficiently to large
networks. To overcome this problem, another method, a core-base tree is used. This
computes a single spanning tree per group with the root (the core) near the middle of
the group. A host sends a multicast message to the core, which then performs the
multicast along the spanning tree. The advantage of this method is reduction in storage
costs from multiple trees to one tree per group.

7.5 Concept of Congestion Control


In this section, you will learn about the concept of congestion control. Congestion
chokes the communication channel. When too many packets are present in a part of
the subnet, the performance of the subnet degrades. Hence, a communication channel
of a network is called congested if packets traversing the path experience delays
largely in excess of the path's propagation delay. It is called heavily congested when
packets never reach the destination, indicating that the delay approaches infinity.
There is not one reason for congestion but many. When the input traffic rate exceeds
the capacity of the output lines, that part of the subnet gets choked and congestion
sets in.
Congestion also occurs when routers are too slow to perform bookkeeping tasks
such as queuing packets in buffers and updating tables. Insufficient router buffer
capacity is another factor; adding memory to a router may help up to a point, but
beyond that point congestion gets worse because timeouts and retransmissions create
more traffic load. Briefly, the apparent causes of congestion are jamming by several
input lines, slow processors, low bandwidth, a finite number of buffers, etc.
Congestion control and flow control are two different phenomena. Congestion is a
global phenomenon involving all hosts, all routers, the store-and-forward processing
within the routers, etc., whereas, flow control is concerned with point-to-point traffic
between a given source host and a given destination host.
Example: Congestion control applies in a situation where a store-and-forward
network with 1-Mbps lines connects 1000 large minicomputers, half of which are trying
to transfer files at 100 kbps to the other half.
Example: Flow control applies when a fiber optic network with a capacity of
1000 gigabits/sec carries a supercomputer trying to transfer a file to a personal
computer at 1 Gbps.

7.5.1 General Principles of Congestion Control


It is essential to note that, following control theory, solutions to the congestion
problem in a computer network are divided into two groups: open loop and closed loop
solutions.
The open loop solutions: Provide good design to ensure that the problem does
not occur in the first place. The designing tools include decision for accepting new
traffic, discarding packets and scheduling of the packets at various points in the
network. The open loop solution’s decisions are independent of the current state of the
network.
Closed loop solutions: Make decisions based on the concept of a feedback loop.
The feedback loop enables the closed loop system to monitor itself to detect when and
where congestion occurs, and then pass the information to places where action can be
taken, so that system operation can be adjusted to correct the problem.
Monitoring of the system depends on the percentage of all packets discarded for
lack of buffer space, the average queue lengths, the number of packets that time out
and are retransmitted, the average packet delay, and the standard deviation of packet
delay. The monitored congestion information is provided to all places of action. When a
router detects congestion, it sends a separate warning packet to the traffic source
immediately. This is done by reserving a bit or field in each packet, which is filled in
every outgoing packet when a router encounters a congested state, to caution the
neighbours. Secondly, hosts or routers send packets periodically to explicitly learn
about congestion so that traffic around congested areas may be routed along alternate
paths.
Congestion may be controlled as given below:
 Increase the bandwidth in the network. Adding a line temporarily increases the
bandwidth between certain points.
 Split traffic so that it follows multiple routes.
 Increase the resources.
 Decrease the load by denying service to some users or degrading service to some
or all users.
 Make users' schedules and demands more predictable.

7.5.2 Traffic Management


You must understand that the traffic management facility allows maximising available
network resources and ensures efficient use of resources that have not been explicitly
allocated. Traffic management depends mainly on transmit priority and bandwidth
availability. Under transmit priority, delay-sensitive traffic is assigned a higher transmit
priority. Support for bandwidth availability covers bandwidth allocation for each VCC,
Connection Admission Control (CAC) to prevent network users from allocating more
bandwidth than the network can provide, traffic policing to ensure that a VCC, once
established, does not attempt to use more bandwidth than the network currently has
available, and selective cell discard to deal with momentary oversubscription of the
buffer capacity of an output port.

Average Packet Delay


Suppose:
Average arrival rate of packets at a router for processing = λ packets per second
Average processing rate of packets, one at a time, at the router = μ packets per second
Utilization of the channel = ρ = λ/μ
From queuing theory, the average delay a packet experiences at a router before
being forwarded is given by:
T = 1/(μ − λ) = (1/μ) × 1/(1 − ρ)
From the above, it is clear that the average delay approaches infinity as the utilization
ρ approaches unity.
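A quick numerical illustration of this formula, using invented rates for a router that can process 100 packets per second:

```python
def avg_delay(arrival_rate, service_rate):
    """Average time a packet spends at the router (M/M/1 queue): T = 1/(mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")            # utilization >= 1: delay grows without bound
    return 1.0 / (service_rate - arrival_rate)

# As utilization rho = lambda/mu approaches 1, the delay explodes.
for lam in (50, 90, 99):               # packets/s arriving; router serves 100 packets/s
    print(lam / 100, round(avg_delay(lam, 100), 3))
```

Running this prints utilizations 0.5, 0.9, and 0.99 against delays of 0.02 s, 0.1 s, and 1.0 s, showing the sharp growth near full utilization.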

7.5.3 Congestion Prevention Policies


Open loop systems are designed to minimize congestion at the place of its origin.
Applying congestion prevention policies at different layers solves the problem in the
case of open loop systems. The policies at the data link, network, and transport layers
that affect congestion are given below:
Data link layer: Issues such as retransmission of packets, out-of-order caching,
acknowledgement of received packets from the destination machine, and flow control
affect congestion at this layer.
Network layer: Setting up of virtual circuits and datagrams inside the subnet,
packet queuing and forwarding at the router, dropping of packets at the router, routing
algorithms, packet lifetime management, etc. are the factors affecting congestion at this
layer.
Transport layer: Retransmission of packets, out of order caching,
acknowledgement of the received packets from destination machine, flow control
mechanism, time out packets, etc. affect congestion at this layer.

7.5.4 Traffic Shaping


You may already be aware that one of the main causes of congestion is bursty traffic;
another is the transmission of packets at an unpredictable rate. Hence, the traffic
shaping approach, an open loop method, forces packets to be transmitted at a uniform,
more predictable rate. Traffic shaping thus attempts to regularise the average rate of
data transmission. Example: ATM networks exploit this method to a great extent.
To reduce congestion, the user and the subnet agree on a certain traffic pattern for
a virtual circuit. Such agreements are of great importance for real-time audio and video
connections, which do not tolerate congestion. Monitoring a traffic pattern against such
an agreement is called traffic policing.

7.5.5 Leaky Bucket


You must note that the leaky bucket algorithm finds its use in network traffic shaping
and rate limiting. The algorithm makes it possible to control the rate at which data is
injected into the network and thus to handle burstiness in the data rate. The leaky
bucket and the token bucket are the two implementations predominantly used for
traffic shaping. The leaky-bucket algorithm is used to control the rate at which traffic is
sent to the network, shaping bursty traffic into a steady stream. You can see the leaky
bucket algorithm in Figure 7.16.

Figure 7.16: Leaky Bucket Algorithm (incoming packets enter at the top; outgoing packets leave at a constant rate through the hole at the bottom)


In the leaky bucket algorithm, consider a bucket with a volume of, say, b bytes and
a hole in the bottom. If the bucket is empty, b bytes of storage are available. When a
packet arrives and it fits within the remaining space, it is admitted; if admitting the
packet would push the contents above b bytes, the packet is either discarded or
queued. The bucket leaks through the hole in its bottom at a constant rate of r bytes
per second: the outflow is at this constant rate whenever there is any data in the
bucket, and zero when the bucket is empty. If data flows into the bucket faster than it
flows out through the hole, the bucket overflows, causing further incoming data to be
discarded until enough volume again exists in the bucket to accept new data.
The disadvantage associated with the leaky-bucket algorithm is inefficient use of
available network resources. The leak rate is a fixed parameter, so when the traffic
volume is very low, large portions of network resources such as bandwidth are not
used efficiently. The leaky-bucket algorithm does not enable individual flows to burst up
to port speed to effectively consume network resources at times when there would be
no resource contention in the network.
It is important to note that the leaky bucket algorithm uses average rate and burst
rate parameters to control traffic flow. Average rate is defined as the average number of
packets per second that leak from the hole in the bottom of the bucket and enter the
network. The burst rate is the rate of accumulation of packets in the bucket and
expressed in packets per second. Example: If the average burst rate is 10 packets per
second, a burst of 10 seconds allows 100 packets to accumulate in the bucket.
The leaky bucket algorithm also uses two state variables, the current time and the
virtual time. The current time comes from the computer's clock, while the virtual time
measures how much data has accumulated in the bucket, expressed in seconds.
Example: If the average rate is 10 packets per second and 100 packets have
accumulated in the bucket, then the virtual time is 10 seconds ahead of the current
time.
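The behaviour described above can be sketched as follows. This is a simplified discrete-time model, assuming one packet arrival per tick and a bucket measured in bytes; it is an illustration of the overflow rule, not a production traffic shaper.

```python
def leaky_bucket(packet_sizes, capacity, leak_rate):
    """Process packets arriving one per tick into a bucket of `capacity` bytes
    that drains `leak_rate` bytes per tick. Returns the list of dropped packets."""
    level, dropped = 0, []
    for size in packet_sizes:
        level = max(0, level - leak_rate)     # the bucket leaks at a constant rate
        if level + size <= capacity:
            level += size                     # packet accepted and queued in the bucket
        else:
            dropped.append(size)              # bucket would overflow: discard the packet
    return dropped

# Bursty arrivals against a 500-byte bucket leaking 200 bytes per tick:
# the second 400-byte packet overflows the bucket and is dropped.
print(leaky_bucket([400, 400, 400, 100], capacity=500, leak_rate=200))
```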

7.5.6 Token Bucket Algorithm


You must note that the leaky bucket algorithm enforces a rigid output pattern at the
average rate, independent of how bursty the traffic is. Many applications prefer the
output to speed up when large bursts arrive, which calls for a more flexible algorithm,
preferably one that never loses data. Hence, the token bucket algorithm is used for
network traffic shaping and rate limiting. The token bucket is a control algorithm that
dictates when traffic may be transmitted, based on the presence of tokens in the
bucket. Each token represents the ability to send a unit of data of a predetermined
size. Tokens are removed from the bucket in exchange for sending packets: when
tokens are present, a flow may transmit traffic; when there are no tokens, it may not.
Hence, a flow can transmit up to its peak burst rate if adequate tokens are present in
the bucket.
Thus, the token bucket algorithm adds a token to the bucket every 1/r seconds. The
capacity of the bucket is b tokens; when a token arrives and the bucket is full, the token
is discarded. If a packet of n bytes arrives and at least n tokens are in the bucket, n
tokens are removed and the packet is forwarded to the network. If a packet of n bytes
arrives but fewer than n tokens are available, no tokens are removed and the packet is
considered non-conformant. Non-conformant packets may be dropped, queued for
subsequent transmission once sufficient tokens have accumulated in the bucket, or
transmitted but marked as non-conformant, possibly to be dropped later if the network
is overloaded.
The advantage of this algorithm is that unused sending capacity can be saved, up to
the maximum size of the bucket, and spent later on a burst.
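A comparable sketch of the token bucket, under the same simplifying discrete-time assumption (one packet per tick, one token per byte); the parameter names are invented for the example.

```python
def token_bucket(packet_sizes, capacity, fill_rate):
    """One token per byte; `fill_rate` tokens are added per tick, capped at
    `capacity`. A packet is conformant only if enough tokens are present."""
    tokens, conformant = capacity, []       # the bucket starts full, allowing a burst
    for size in packet_sizes:
        tokens = min(capacity, tokens + fill_rate)
        if size <= tokens:
            tokens -= size                  # spend tokens and forward the packet
            conformant.append(True)
        else:
            conformant.append(False)        # non-conformant: drop, queue, or mark it
    return conformant

# An initial burst passes on saved-up tokens; sustained oversending does not.
print(token_bucket([400, 400, 400], capacity=500, fill_rate=100))
```

Contrast with the leaky bucket: here the first large packet is admitted immediately because tokens accumulated while the flow was idle.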

7.6 Queuing Disciplines


In this section, you will study queuing disciplines. Irrespective of how simple or how
sophisticated the rest of the resource allocation mechanism is, every router must
implement some queuing discipline that governs how packets are buffered while
waiting to be transmitted. The queuing algorithm can be thought of as allocating both
bandwidth (which packets get transmitted) and buffer space (which packets get
discarded). It also directly affects the latency experienced by a packet, by determining
how long a packet waits to be transmitted. This section introduces two common
queuing algorithms, first-in-first-out (FIFO) and fair queuing (FQ), of which numerous
variations have been proposed.

7.6.1 FIFO

It is important to note that the idea of FIFO (first-come, first-served) queuing is simple:
the first packet that arrives at a router is the first packet to be transmitted. This is
shown in Figure 7.17(a), which depicts a FIFO with "slots" to hold up to eight packets.
Since the amount of buffer space at every router is finite, if a packet arrives and the
queue (buffer space) is full, then the router discards that packet, as shown in Figure
7.17(b). This is done without regard to which flow the packet belongs to or how
significant the packet is. This is known as tail drop, since packets that arrive at the tail
end of the FIFO are dropped.
You must observe that as FIFO and tail drop are the simplest instances of
scheduling discipline and drop policy, respectively, they are sometimes viewed as a
bundle—the vanilla queuing implementation.

Figure 7.17: (a) FIFO Queuing; (b) Tail Drop at a FIFO Queue
FIFO with tail drop, the simplest of all queuing algorithms, was the most widely used
discipline in Internet routers at the time of writing. This simple approach to queuing
pushes all responsibility for congestion control and resource allocation out to the
edges of the network. Therefore, the prevalent form of congestion control in the
Internet currently assumes no help from the routers: TCP takes responsibility for
detecting and responding to congestion.
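FIFO with tail drop is simple enough to sketch directly (an illustration; the class name and slot count are invented):

```python
from collections import deque

class FifoTailDrop:
    """FIFO queue with tail drop: arrivals are rejected once the buffer is full."""
    def __init__(self, slots):
        self.queue = deque()
        self.slots = slots

    def enqueue(self, packet):
        if len(self.queue) >= self.slots:
            return False            # tail drop: a packet arriving at a full queue is lost
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = FifoTailDrop(slots=2)
print([q.enqueue(p) for p in ("p1", "p2", "p3")])   # the third arrival is dropped
print(q.dequeue())                                  # first in, first out: p1 leaves first
```

Note that the drop decision ignores which flow "p3" belongs to, which is exactly the weakness fair queuing addresses.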

7.6.2 Fair Queuing


You must understand that the main problem with FIFO queuing is that it does not
distinguish between different traffic sources, and it does not separate packets as per the
flow to which they belong. This is a problem at two different levels. At one level, it is not
clear that any congestion-control algorithm implemented entirely at the source will be
able to adequately control congestion with so little help from the routers.


Figure 7.18: Round-robin Service of Four flows at a Router


Source: http://www.sahyadri.edu.in/SMCA/students_guide/3semnotes/network/Computer_Networks_
Peterson__A_ Systems_Approach__Fourth_Edition.pdf

At another level, because the entire congestion-control mechanism is implemented at
the sources, and FIFO queuing does not provide a means to police how well the sources
follow this mechanism, it is possible for an ill-mannered source (flow) to capture an
arbitrarily large fraction of the network capacity. Considering the Internet again, it is
certainly possible for a given application not to use TCP, and as a consequence, to
bypass its end-to-end congestion-control mechanism. (Applications such as Internet
telephony do this today.) Such an application is able to flood the Internet’s routers with
its own packets, thereby causing other applications’ packets to be discarded.
Fair queuing (FQ) is an algorithm that has been proposed to address this problem.
The idea of FQ is to maintain a separate queue for each flow currently being handled by
the router. The router then services these queues in a sort of round-robin, as illustrated
in Figure 7.18. When a flow sends packets too rapidly, then its queue fills up. When a
queue reaches a particular length, additional packets belonging to that flow’s queue are
discarded. In this way, a given source cannot arbitrarily increase its share of the
network’s capacity at the expense of other flows.
You must understand that FQ is still designed to be used in conjunction with an
end-to-end congestion-control mechanism. It simply segregates traffic so that ill-
behaved traffic sources do not interfere with those that are faithfully implementing the
end-to-end algorithm. FQ also enforces fairness among a collection of flows managed
by a well-behaved congestion-control algorithm.
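The round-robin service of Figure 7.18 can be sketched as follows. This simplified version ignores packet lengths, which a real fair queuing implementation accounts for by computing per-packet finish times; the flow contents are invented.

```python
from collections import deque

def fair_queue(per_flow_packets):
    """Round-robin service among per-flow queues (simplified FQ: equal packets,
    not equal bytes). Returns the order in which packets are transmitted."""
    queues = [deque(pkts) for pkts in per_flow_packets]
    order = []
    while any(queues):
        for q in queues:                 # visit each flow's queue in turn
            if q:
                order.append(q.popleft())
    return order

# Flow 0 is greedy, but service interleaves: no flow can monopolise the link.
flows = [["a1", "a2", "a3", "a4"], ["b1"], ["c1", "c2"]]
print(fair_queue(flows))   # a1, b1, c1, a2, c2, a3, a4
```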

7.7 Congestion Avoidance


This section discusses congestion avoidance. TCP makes use of a feedback
mechanism known as additive increase, multiplicative decrease (AIMD) to react to
congestion. Every connection maintains a congestion window, which represents the
maximum number of bytes that can be outstanding in the network at once. To make
things easier to interpret, we will use the number of packets as the unit of the
congestion window. After every successful transmission round, TCP increases its
congestion window by 1; this is the additive increase part. If a packet is lost due to a
timeout, TCP halves its congestion window; this is the multiplicative decrease. The
intuition is that when there is congestion, packets are dropped at the router because it
can no longer handle the current traffic, and TCP rapidly reduces the number of
packets it transmits in order to recover from the congestion quickly.


When multiple hosts detect that there is congestion in the network, it is possible that
they all halve their congestion windows and back off at the same time, resulting in
underutilization of the channel. This is called global synchronization, and it is
unwanted behaviour in the network. UDP, on the other hand, has no built-in congestion
control mechanism. However, since UDP does not guarantee delivery of data, it does
not retransmit packets when a timeout occurs.
Generally, congestion avoidance algorithms use the additive increase multiplicative
decrease mechanism of TCP in order to avoid congestion by attempting to force the
sender hosts to reduce their congestion windows. Now, you will study the methods to
accomplish this.

7.7.1 DECbit
You must note that in the DECbit method, routers set a binary congestion bit in the
packet when the network is about to experience congestion. The destination host then
copies this bit into its acknowledgement and sends it back to the sender. When the
sender receives acknowledgements indicating congestion, it concludes that the
network is overloaded and reduces its congestion window.
A single congestion bit is added to the packet header. A router sets this bit in a
packet if its average queue length is greater than or equal to 1 at the time the packet
arrives. This average queue length is measured over a time interval that spans the last
busy + idle cycle, plus the current busy cycle. This is to note that the router is busy
when it is transmitting and idle when it is not. In Figure 7.19, you can see the queue
length at a router as a function of time. Basically, the router calculates the area under
the curve and divides this value by the time interval to compute the average queue
length. Using a queue length of 1 as the trigger for setting the congestion bit is a trade-
off between significant queuing (and therefore higher throughput) and increased idle
time (and therefore lower delay). In other words, a queue length of 1 seems to optimise
the power function.

Source: http://www.sahyadri.edu.in/SMCA/students_guide/3semnotes/network/Computer_Networks_
Peterson__A_ Systems_Approach__Fourth_Edition.pdf

Figure 7.19: Computing Average Queue Length at a Router


Now turning our attention to the host half of the mechanism, the source records how
many of its packets resulted in some router setting the congestion bit. In particular, the
source maintains a congestion window, just as in TCP, and watches to see what
fraction of the last window’s worth of packets resulted in the bit being set. If less than
50% of the packets had the bit set, then the source increases its congestion window by
one packet. If 50% or more of the last window’s worth of packets had the congestion bit
set, then the source decreases its congestion window to 0.875 times the previous value.
The value 50% was chosen as the threshold based on analysis that showed it to
correspond to the peak of the power curve.
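The source-side policy just described (additive increase below the 50% threshold, multiplicative decrease by the factor 0.875 at or above it) can be sketched as follows; the function name and parameters are illustrative, not part of any standard API:

```python
def adjust_window(cwnd, congestion_bits):
    """DECbit source-side window adjustment (illustrative sketch).

    cwnd            -- current congestion window, in packets
    congestion_bits -- one boolean per packet in the last window's worth,
                       True if some router set the congestion bit
    """
    set_fraction = sum(congestion_bits) / len(congestion_bits)
    if set_fraction < 0.5:
        # additive increase: grow the window by one packet
        return cwnd + 1
    # multiplicative decrease: shrink to 0.875 of the previous value
    return cwnd * 0.875

# Fewer than 50% of the bits set -> window grows by one packet
print(adjust_window(8, [False, False, False, True]))   # 9
# 50% or more set -> window shrinks to 0.875 of its old value
print(adjust_window(8, [True, True, False, False]))    # 7.0
```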

Amity Directorate of Distance & Online Education


7.7.2 Random Early Detection (RED)
It is important to note that a mechanism known as random early detection (RED) is
similar to the DECbit scheme in that every router is programmed to monitor its own
queue length and, when it discovers that congestion is imminent, to inform the source to
adjust its congestion window. RED, developed by Sally Floyd and Van Jacobson in the
early 1990s, differs from the DECbit scheme in two major ways.
The first is that instead of explicitly sending a congestion notification message to the
source, RED is most generally implemented such that it implicitly notifies the source of
congestion by dropping one of its packets. Thus, the source is effectively notified by the
subsequent timeout or duplicate ACK. In case you haven’t already guessed, RED is
designed to be used in conjunction with TCP, which currently detects congestion by
means of timeouts (or some other means of detecting packet loss such as duplicate
ACKs). As the “early” part of the RED acronym suggests, the gateway drops the packet
earlier than it would have to, so as to notify the source that it should decrease its
congestion window sooner than it would normally have. In other words, the router drops a
few packets before it has exhausted its buffer space completely, so as to cause the
source to slow down, with the hope that this means it does not have to drop lots of
packets later on.
You must note that the second difference between RED and DECbit is in the details
of how RED decides when to drop a packet and what packet it decides to drop. To
understand the basic idea, consider a simple FIFO queue. Rather than waiting for the
queue to become completely full and then be forced to drop each arriving packet, we
could decide to drop each arriving packet with some drop probability whenever the
queue length exceeds some drop level. This idea is called early random drop. The RED
algorithm includes the details of how to monitor the queue length and when to drop a
packet.
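A minimal sketch of the drop decision, assuming the common RED parameterisation with two thresholds and a maximum drop probability (the full algorithm also biases the probability by the count of packets since the last drop, which is omitted here, and the averaging weight below is only a typical illustrative value):

```python
import random

def update_avg(avg, sample, weight=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - weight) * avg + weight * sample

def red_should_drop(avg_qlen, min_th, max_th, max_p):
    """Simplified RED drop decision.

    avg_qlen -- average queue length (e.g. maintained via update_avg)
    min_th   -- below this threshold, never drop
    max_th   -- at or above this threshold, always drop
    max_p    -- maximum drop probability as avg_qlen approaches max_th
    """
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    # drop probability grows linearly from 0 at min_th to max_p at max_th
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p
```

With a short average queue the router forwards everything; between the thresholds it drops packets probabilistically, signalling TCP senders to slow down before the buffer overflows.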

7.8 Internetworking Concepts


In this section, you will learn about the internetworking concepts. The availability of
different operating systems, hardware platforms and the geographical dispersion of
computing resources necessitated the need of networking in such a manner that
computers of all sizes could communicate with each other, regardless of the vendor, the
operating system, the hardware platform, or geographical proximity. Thus, you may say
that internetworking is a scheme for interconnecting multiple networks of dissimilar
technologies. Additional hardware and software are required to interconnect
multiple networks of dissimilar technologies: the hardware is positioned
between networks, and the software runs on each attached computer. This system of
interconnected networks is called an internetwork or an internet.

Figure 7.20: Internetworking of Different Homogeneous Networks


To develop standards for internetworking, ARPANET, a project of DARPA,
introduced the world of networking to protocol suite concepts such as layering, well
before ISO's initiative. It evolved from the NCP (Network Control Program) host-to-host
protocol to the TCP/IP protocol suite. ARPANET was basically a network based on leased lines
connected by special switching nodes, known as Interface Message Processors (IMPs).
Many researchers were involved in TCP/IP research by 1979. This motivated DARPA to
form an informal committee to coordinate and guide the design of the communication
protocols and architecture. The committee was called the Internet Control and
Configuration Board (ICCB).
It is important to note that the first real implementation of the Internet was when
DARPA converted the machines of its research network ARPANET to use the new
TCP/IP protocols. After this transition, DARPA demanded that all computers willing to
connect to its ARPANET must use TCP/IP. The success of ARPANET was more than
the expectations of its own founders and TCP/IP internetworking became widespread.
As a result, new Wide Area Networks (WAN) were created and connected to ARPANET
using TCP/IP protocol. In turn, other networks in the rest of the world, not necessarily
based on the TCP/IP protocols, were added to the set of interconnected networks.
Computing facilities are currently connected to the Internet via their own sub-networks,
constituting the world’s largest network. In 1990, ARPANET was decommissioned, and
the Internet took its place as the formal global network.
DARPA also funded a project to develop TCP/IP protocols for Berkeley UNIX on the
VAX and to distribute the developed codes free of charge with their UNIX operating
system. The first release of the Berkeley Software Distribution (BSD) to include the
TCP/IP protocol set was made available in 1983 (4.2BSD). This led to the spread of
TCP/IP among universities and research centers and has become the standard
communications subsystem for all UNIX connectivity.

7.8.1 History of Internetworking


 Time-sharing networks were the first networks that used mainframes and attached
terminals. These types of environments were implemented by both IBM’s System
Network Architecture (SNA) as well as Digital’s network architecture.
 Local area networks (LANs) evolved around the PC revolution. LANs enabled
numerous users in a relatively small geographical area to exchange files
and messages, as well as access shared resources such as file servers.
 Wide area networks (WANs) interconnect LANs across normal telephone lines (and
other media), thus interconnecting geographically dispersed users.
 Nowadays, high-speed LANs and switched internetworks are used extensively. This
is because they function at very high speeds and provide support to such high-
bandwidth applications as voice and videoconferencing.
 Internetworking evolved as a solution to three key problems:
 isolated LANs
 duplication of resources
 a lack of network management
 Isolated LANs made electronic communication among different offices or
departments impossible. Duplication of resources meant that the same hardware
and software had to be provided to each department, as did a separate support
staff. The lack of network management meant that no centralized method of
managing and troubleshooting networks existed.

7.8.2 Internetworking Challenges


The task of implementing a functional internetwork is not simple. Many challenges must
be faced, particularly in the areas of reliability, connectivity, network management, and
flexibility. Each area is key to establishing an efficient and effective internetwork.
The challenge when connecting various systems is to support communication among
disparate technologies. For example, different sites may use different types of media, or
they might operate at varying speeds.

Another necessary consideration, reliable service, must be maintained in any
internetwork. Individual users and entire organizations rely on consistent, reliable
access to network resources. Moreover, network management must offer centralized
support and troubleshooting capabilities in an internetwork. Configuration, security,
performance, and other issues must be adequately addressed for the internetwork to
function smoothly.
The final concern, flexibility, is necessary for network expansion and new
applications and services, among other factors.

7.8.3 Internet
You must understand that the word Internet is short for internetwork, or
interconnected network. Therefore, it can be said that the Internet is not
a single network, but a collection of networks. The Internet is a form of resource sharing.
What the constituent networks have in common, enabling them to communicate with each other, is TCP/IP.
The Internet consists of the following groups of networks:
Backbones: These are large networks that exist primarily to interconnect other
networks. Example: Some examples of backbones are NSFNET, EBONE and large
commercial backbones.
Regional networks: These connect, for example, universities and colleges.
Example: ERNET (Education and Research Network) is an example in the Indian
context.
Commercial networks: These provide subscribers with access to the backbones; they
also include networks owned by commercial organizations for internal use that have
connections to the Internet. Internet Service Providers mainly come into this category.
Local networks: These are campus-wide university networks.
The networks connect users to the Internet using special devices that are called
gateways or routers. These devices provide connection and protocol conversion of
dissimilar networks to the Internet. Gateways or routers are responsible for routing data
around the global network until they reach their ultimate destination as shown in Figure
7.21. The delivery of data to its final destination takes place based on routing
tables maintained by routers or gateways. These devices are mentioned at various places in this
book, as they are the fundamental devices that connect similar or dissimilar networks
together.
Over time, TCP/IP defined several protocol sets for the exchange of routing
information. Each set pertains to a different historic phase in the evolution of
architecture of the Internet backbone.

Figure 7.21: Local Area Networks Connected to the Internet via Gateways or Routers

7.8.4 Routing in the Internetwork

Routing is defined as the act of moving information across an internetwork from a
source to a destination. Along the way, at least one intermediate node usually is
encountered. Routing is often contrasted with bridging, which might seem to achieve
precisely the same thing to the casual observer. Routing takes place at Layer 3 (the
network layer).
You must understand that interconnecting networks requires different hardware at
different layers. Repeaters or hubs, which amplify and forward the signal from one network
to another, are required at the physical layer. The data link layer uses bridges and switches
that read the packet to decide whether the data is to be forwarded or belongs to
the same network from which it originated. Routers or multiprotocol routers connect two
networks at the network layer and decide the best possible path for delivery of packets.
Routing includes two basic activities: finding out optimal routing paths and
transporting information groups (usually known as packets) through an internetwork. In
the context of the routing process, the latter of these is known as packet switching.
Even though packet switching is comparatively straightforward, path determination can
be very complex.
Routing algorithms can be differentiated based on various key characteristics. First,
the particular goals of the algorithm designer affect the operation of the resulting routing
protocol. Second, several types of routing algorithms exist, and every algorithm has a
different impact on network and router resources. Finally, routing algorithms use a
variety of metrics that affect calculation of optimal routes.
Routing algorithms have the following design goals:
 Optimality: It refers to the capability of the routing algorithm to choose the best
route, which relies on the metrics and metric weightings used to make the
calculation. For example, one routing algorithm may use both hop count and
delay, but it may weigh delay more heavily in the calculation. Naturally, routing
protocols must define their metric calculation algorithms strictly.
 Simplicity and low overhead: Routing algorithms are designed to be as simple as
possible. Alternatively, the routing algorithm must provide its functionality efficiently,
with a minimum of software and utilization overhead. Efficiency is particularly
significant when the software implementing the routing algorithm must run on a
computer with limited physical resources.
 Robustness and stability: Routing algorithms must be robust. It signifies that they
should perform correctly in the face of unusual or unforeseen circumstances, such
as hardware failures, high load conditions, and incorrect implementations. As
routers are located at network junction points, they can cause considerable
problems when they fail. The best routing algorithms are often those that have
withstood the test of time and that have proven stable under a variety of network
conditions.
 Rapid convergence: Routing algorithms must converge rapidly. Convergence is
considered as the process of agreement, by all routers, on optimal routes. When a
network event causes routes to either go down or become available, routers
distribute routing update messages that permeate networks, stimulating
recalculation of optimal routes and eventually causing all routers to agree on these
routes. Routing algorithms that converge slowly can cause routing loops or network
outages.
 Flexibility: Routing algorithms should also be flexible, which signifies that they
should quickly and accurately adapt to various network circumstances. Example:
Assume that a network segment has gone down. When routing algorithms
become aware of the problem, they will quickly select the next-best path for all
routes normally using that segment. Routing algorithms can be programmed to
adapt to changes in network bandwidth, router queue size, and network delay,
among other variables. Routing algorithms used within homogeneous networks are
known as Interior Gateway Protocols (IGPs). Example: Open Shortest Path
First (OSPF). Routing algorithms used between dissimilar networks are called
Exterior Gateway Protocols (EGPs). Example: Border Gateway Protocol (BGP).
 The transport layer uses transport gateways to connect dissimilar networks. For
application-level conversions, such as from Internet email to X.400 email, the
application layer uses application gateways.

7.8.5 Virtual Circuits


It is essential to understand that connecting a number of dissimilar computer networks
presents a seamless communication channel to which many systems are attached. The
internal details of the many interconnecting networks are hidden from the users, who
perceive this internetwork as a single seamless computer network. This creates
the illusion of a virtual network comprising virtual circuits.
A virtual circuit offers a connection between points in a network, in both
telecommunications and computer networks. The virtual circuit allows packets of
information to pass between the two endpoints. Typically, these circuits are used in
networks with fast transfer speeds, such as Asynchronous Transfer Mode (ATM)
connections. While the virtual circuit may appear to be a physical path that connects
two points in the network, it actually switches back and forth among various circuits to
create different paths as needed.
The basic idea behind the virtual circuit is to build up an internetwork connection
by concatenating a series of intra-network and gateway-to-gateway virtual circuits. The
gateways are different from routers. Figure 7.22 shows internetworking using
concatenated virtual circuits.
The source machine requests the subnet to set up a virtual circuit connection to a
destination machine. When the subnet finds that the destination is remote, it builds a
virtual circuit to the router nearest the destination machine's network. That router creates
a virtual circuit to an external gateway. An external gateway is a multiprotocol router.
Multi-protocol routers connect networks of different types, which use different routing
protocols. Figure 7.22 shows a multiprotocol router connecting networks of dissimilar
network protocols.
You will find it interesting to know that like routers, the gateway registers the
existence of the virtual circuit in its table and proceeds to build another virtual circuit to a
router in the next subnet till the destination host has been reached.
There are two types of virtual circuits: permanent virtual circuits (PVCs) and switched
virtual circuits (SVCs). A PVC stays connected at all times, while an SVC connects only
when in use and disconnects afterward. Usually, PVCs are used on frame relay
networks, which connect local networks with wider area networks. An SVC can also be used
on frame relay networks but must maintain a constant connection during the transfer.

Figure 7.22: Multiprotocol Routers


Virtual circuits are also known as logical circuits, and it is significant to keep in mind
that while the circuit can change its path and connect to different networks or points, it
still only connects two points at one time. It determines what two connections it needs to
make and sets up the best path for a smooth and fast transfer. For this reason it
appears to be a normal circuit connection that stays in place. The difference lies in how
the circuit can choose two different points to create a new connection when necessary.
This allows for fast transfers among various networks using fewer resources.

7.8.6 Connectionless Internetworking


As shown in figure 7.23, the datagram model is an alternative internetworking model. In
this model, the service that the network layer provides to the transport layer is the capability
to insert datagrams into the subnet and hope for the best. There is no view of a virtual
circuit at all in the network layer, let alone a concatenation of them. This model does not
need every packet belonging to one connection to pass through the same sequence of
gateways.
As you can see in figure 7.23, datagrams from network 1 to network 2 are taking
different routes via the internetwork. A routing decision is made individually for every
packet, possibly relying on the traffic at the moment the packet is sent. This strategy
can make use of multiple routes and therefore attain a higher bandwidth as compared
to the concatenated virtual-circuit model. On the other hand, there is no assurance that the
packets arrive at the destination in order, assuming that they arrive at all.
The model of Figure 7.23 is not quite as simple as it appears. For one thing, if every
network has its own network layer protocol, it is not possible for a packet from
one network to transit another one. One could visualize the multiprotocol routers
actually attempting to translate from one format to another, but unless the two formats
are close relatives with the same information fields, such conversions will always be
incomplete and frequently doomed to failure. Consequently, conversion is rarely attempted.

Source: http://www-users.aston.ac.uk/~blowkj/internetworks/lecture10/img2.html

Figure 7.23: A Connectionless Internetworking


You need to pay attention to the fact that another problem is addressing. Consider a
simple case: a host on the Internet is attempting to send an IP packet to a host on an
adjacent SNA network. The IP and SNA addresses are dissimilar. One would require a
mapping among IP and SNA addresses in both directions. Moreover, the concept of
what is addressable is different. In IP, hosts have addresses. In SNA, entities excluding
hosts (e.g., hardware devices) can also have addresses. Another idea is to design a
universal "internet" packet and have all routers recognise it. This method is, in fact,
what IP is—a packet intended to be carried via many networks. Obviously, it may turn
out that IPv4 drives every other format out of the market, IPv6 (the future Internet
protocol) does not catch on, and nothing new is ever invented, but history advises
otherwise. Getting everyone to agree to a single format is problematic when companies
perceive it to their commercial benefit to have a proprietary format that they control.

7.8.7 Fragmentation
It is important to note that each autonomous system places limits on the maximum size
of a packet. These limits depend on the hardware (e.g., the width of a TDM
transmission slot), the operating system (e.g., all buffers are 512 bytes), protocols
(e.g., the number of bits in the packet length field), compliance with international
standards, retransmissions, congestion, and so on. If an IP datagram becomes
larger than the maximum size allowed by a networking technology for a hardware
frame, the original IP datagram is fragmented into more than one IP datagram fragment.
Each of these fragments carries its own header.
Hence, routing through an internetwork must consider whether a packet's size will be
acceptable to every autonomous system of networks along the path. A packet entering
a new network may need to be fragmented into packets of acceptable size, which are
then reassembled when they reach the next gateway. This is known as transparent
fragmentation. In the other case, a packet is fragmented at the first gateway and then
reassembled only at the destination host. This is known as non-transparent
fragmentation. ATM network hardware provides transparent fragmentation of packets
into cells and then reassembly of cells into packets.
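The fragmentation step can be illustrated with a simplified sketch. Note that real IP stores the offset in 8-byte units in the header field, whereas this illustration reports byte offsets directly; the function name and return shape are assumptions for the example:

```python
def fragment(payload_len, mtu, header_len=20):
    """Split an IP payload into fragments that fit a given MTU (sketch).

    Returns a list of (offset_in_bytes, fragment_payload_len, more_fragments)
    tuples.  Fragment offsets must be multiples of 8 bytes, so the usable
    payload per fragment is rounded down to a multiple of 8.
    """
    max_data = (mtu - header_len) // 8 * 8   # largest multiple of 8 that fits
    fragments = []
    offset = 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = (offset + size) < payload_len   # MF bit: more fragments follow
        fragments.append((offset, size, more))
        offset += size
    return fragments

# A 4000-byte payload sent over a link with a 1500-byte MTU:
print(fragment(4000, 1500))
# [(0, 1480, True), (1480, 1480, True), (2960, 1040, False)]
```

Each fragment carries its own 20-byte header; only the last has the More Fragments flag clear, which is how the destination knows reassembly is complete.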

7.9 IP (Internet Protocol)


In this section, you will learn about the Internet Protocol. IP is a connectionless
service and operates at the third layer of the OSI reference model; that is, no logical
connection is set up prior to the transmission of data. This type of protocol is suitable
for the sporadic transmission of data to a number of destinations. It has no functions
such as sequence control, error recovery and control, or flow control; it simply
identifies, via its protocol field, the upper-layer protocol to which each datagram belongs.

7.9.1 Unreliable Connection Delivery


The Internet Protocol (IP) offers unreliable, connectionless packet delivery for the Internet.
IP is connectionless because it treats every packet of information independently. It is
unreliable because it does not guarantee delivery; it requires no
acknowledgements from the receiving host, the sending host, or intermediate hosts.

7.9.2 Datagrams
The term 'datagram' or 'packet' is utilized to describe a chunk of IP data.
You must remember that the IP datagram has a 20-byte fixed-size header and a
variable-length optional part. The header format of the IP datagram is depicted in
Figure 7.24. The header is transmitted from left to right, with the high-order bit of
the Version field transmitted first.

32 Bits
Version | IHL | Type of Service | Total Length
Identification | DF | MF | Fragment Offset
Time to Live | Protocol | Header Checksum
Source Address
Destination Address
Options (0 or more words)

Figure 7.24: IP (Internet Protocol) Header

Amity Directorate of Distance & Online Education


156 Computer Communication Network

Data encapsulation adds the IP header to the data. The IP header consists of five
or six 32-bit words; the sixth word is the IP options field. The different fields
of the IP header are given below:
Version keeps track of the version of the protocol to which the datagram belongs. The
current version of IP is 4.
Internet Header Length (IHL) indicates the length of the header in 32-bit words.
The minimum value is 5, which applies when no options are present. The
maximum value of this 4-bit field is 15, which restricts the header to 60 bytes and thus
the Options field to 40 bytes.
Type of service enables the host to tell the subnet what kind of service (e.g.,
reliability and speed) it wants. It refers to any of the types of service that IP supports.
The desired service type is normally specified by user-level applications. Example: Service
types include low delay and high throughput, requested by applications such as the
File Transfer Protocol (FTP) and Simple Mail Transfer Protocol (SMTP).
Total length covers everything in the datagram, both header and data (max. 64 KB).
Subtracting the header length (given by the IHL field) from it yields the actual length of
the data field.
Identification enables the destination host to determine which datagram a newly
arrived fragment belongs to.
DF means Do not Fragment.
MF means More Fragments.
Fragment offset indicates where in the original datagram this fragment belongs. The
elementary fragment unit size is 8 bytes.
Time to live is a counter used to limit packet lifetimes; nominally it counts in seconds,
but in practice it is decremented on each hop. When the count reaches zero, the packet
is discarded. TTL is employed by IP to prevent a lost datagram from endlessly looping
around the network. IP achieves this objective by initializing the TTL field to the
maximum number of routers that the packet may traverse on the network.
Protocol tells the destination which transport process to give the datagram to
(TCP, UDP, or others).
Header checksum verifies the header only. The algorithm is to add up all the 16-bit
halfwords as they arrive, using one's complement arithmetic.
Source/Destination address tells the network number and host number.
It is important to note that Options provides an escape to allow subsequent versions
of the protocol to carry information not present in the original design, to allow
experimenters to try out new ideas, and to avoid allocating header bits to information
that is rarely needed. When present, it includes optional control information. Example:
One option is the route record, which keeps a record of every router that the datagram
traversed during its trip across the network.
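The header checksum algorithm mentioned above (a one's complement sum of 16-bit halfwords, as specified in RFC 791) can be sketched as follows; the checksum field is assumed to be zero in the input, and the function name is illustrative:

```python
def ip_checksum(header: bytes) -> int:
    """One's complement sum of 16-bit halfwords over the IP header,
    with the checksum field zeroed before computing (RFC 791 sketch)."""
    if len(header) % 2:
        header += b"\x00"                         # pad to a whole halfword
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF                        # one's complement of the sum

# A 20-byte header with the checksum field (bytes 10-11) set to zero:
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(ip_checksum(hdr)))  # 0xb861
```

A receiver runs the same sum over the header as received (checksum included); a result of zero after complementing indicates the header arrived intact.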

7.9.3 IP Addresses
Using the Internet has become common. You will now understand how the Internet
interprets an Internet address. Internet addresses are written in forms such as
www.hotmail.com, or more generally server.institution.domain. The address
www.hotmail.com is not the actual address; it is a text version of the Internet address,
which is fundamentally a binary representation. Comparing www.hotmail.com with
server.institution.domain: www is the name of the server owned by the institution (in
this case, hotmail), and this server is connected through the Internet to a domain server
(com in this case), which maintains a database of the addresses of different
servers using the same domain com. The domain name has no geographical relevance,
and two sites with the same domain name may exist at two ends of the world.

The above is the simplest case. A larger organization may have several other servers
for different purposes, such as a web server, email server, print server, etc.
Example: Let us now consider www.sun.planet.universe.in. This address has five
parts separated by four dots. It indicates that a group of planets (planet) comes under a
universe subdomain, which is part of the India (in) domain, and that one server, sun,
out of many servers is linked to the Internet through its web server. Likewise, any
organization with several departments may create addresses for its subdomains, with
different servers maintained in each.
You must keep in mind that the Internet is a collection of several independent
networks interconnected with one another. Each independent network
may have several hosts. Keeping this in mind, consider the address of your
house. Your house has a unique house number, which is not assigned to any other
house in your locality. In this case, your house can be considered as a host, your
locality as a network, and your city as a domain. You can write your
address in Internet addressing notation as houseno.locality.city. Suppose you want to
tell your address to a foreigner; then you will have to add your country name, making it
houseno.locality.city.country. Now if anybody wishes to send you a letter or visit your
house, he will first have to come to your country and then to your city; after that he will
reach your locality and finally your house by its number. The same analogy applies to
Internet addressing.
An Internet host address has two parts: an identification of the network and an
identification of the host on that network. The address of a host therefore comprises
two parts, namely the network address and the host address. Together these make the
32-bit IP address of a particular host on the Internet. The IP address, as you will see in
the subsequent discussion, is written as four octets separated by dots; it may have a
form like 197.23.207.10.

IPv4 Addressing
IPv4 addresses are unique identifiers that work at the network layer to identify
the source or destination of IP packets. The version of IP presently in use is
called IPv4. In this version, every node on the Internet may have one or more interfaces,
and each of these interfaces must be identified by a unique address. This means that
each node is assigned one or more IP addresses in order to use TCP/IP. These are
logical addresses, 32 bits long.
Technically, IP addresses are expressed in binary notation as 32-bit strings.
To make these strings easier to remember, dotted decimal notation is
used, in which periods or dots separate four decimal numbers from 0 to 255
representing the 32 bits. Since there are 32 bits, each decimal number represents 8 bits
and is called an octet.
Example: The IPv4 address 11000000101010000000101000011001 is expressed
as 192.168.10.25 in dotted decimal notation.
Below are the steps to convert an IPv4 address from binary notation to dotted
decimal notation:
1. Break 32 bit long address into segments of 8-bit blocks: 11000000 10101000
00001010 00011001
2. Write decimal equivalent of each segment: 192 168 10 25
3. Separate the blocks with periods: 192.168.10.25
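The three steps above can be expressed directly in code (the function name is an illustrative assumption):

```python
def binary_to_dotted(bits: str) -> str:
    """Convert a 32-bit binary string to dotted decimal notation."""
    assert len(bits) == 32
    octets = [bits[i:i + 8] for i in range(0, 32, 8)]   # step 1: 8-bit blocks
    decimals = [str(int(o, 2)) for o in octets]         # step 2: decimal values
    return ".".join(decimals)                           # step 3: join with dots

print(binary_to_dotted("11000000101010000000101000011001"))  # 192.168.10.25
```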


Figure 7.25 shows the IP address structure.

11000000 10101000 00001010 00011001


192 168 10 25

Figure 7.25: IP address in Dotted Decimal Notation

Dotted Decimal Notation


You have seen that an IPv4 address is expressed as a 32-bit number in dotted decimal
notation. IP addresses have a fixed (network) part and a variable (host) part, depending
upon the allocation of addresses to you or your organization. The fixed part of the
address may be from one octet to three octets, and the remaining octets are then
available for the variable part. To express the split, set all bits in the fixed octet(s)
to 1 and all bits in the variable octet(s) to 0, then convert the result into dotted
decimal notation. Example: Take the IP address 192.168.10.25 with the first two octets
fixed. Setting all fixed bits to 1 and all variable bits to 0 gives 11111111 11111111
00000000 00000000, which in dotted decimal notation is 255.255.0.0. This value is used
as an address prefix for 192.168.10.25 and is expressed as 192.168.10.25, 255.255.0.0.
This way of expressing the prefix length as a dotted decimal number is known as
network mask or subnet mask notation.
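Applying such a mask to an address, to recover the fixed (network) part, can be sketched as follows; the function name is an illustrative assumption:

```python
def network_prefix(address: str, mask: str) -> str:
    """Bitwise-AND an IPv4 address with its subnet mask, octet by octet,
    to obtain the network (subnet prefix) portion."""
    addr_octets = [int(o) for o in address.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    return ".".join(str(a & m) for a, m in zip(addr_octets, mask_octets))

# With mask 255.255.0.0, the first two octets are the fixed (network) part:
print(network_prefix("192.168.10.25", "255.255.0.0"))  # 192.168.0.0
```

A router performs essentially this operation on a destination address to decide which network, and hence which routing table entry, the address belongs to.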

Classification of IPv4 Addresses


You must keep in mind that Internet standards allow the following address types:
Unicast: It is assigned to a single network interface located on a specific subnet and
facilitates one-to-one communication. This is a globally unique address for the
identification of a device on the network. It may be understood as the house number in
a particular locality. It includes a subnet prefix and a host ID portion.
Subnet prefix: The subnet prefix is basically network identifier or network address
portion of an IP unicast address. It should be noted that all nodes on the same physical
or logical subnet must use the same subnet prefix, which eventually becomes unique
within the entire TCP/IP network.
Host ID: The host ID, which is a host address portion of an IP unicast address,
identifies a network node to which some devices are interfaced. It is also unique within
the network segment.
Multicast: It is used for one or more network interfaces located on various subnets.
It allows one-to-many communication. It delivers single packets from one source to
many destinations. These addresses are part of Class D addressing scheme.
Broadcast: It is allocated to all network interfaces located on a subnet and is used
for one-to-everyone on a subnet communication. It delivers packets from one source to
all interfaces on the subnet. Broadcast addresses may be further classified as network
broadcast, subnet broadcast, all subnets directed broadcast and limited broadcast.
Internet Addresses are further classified into different classes. It is based on the
number bits are used for the address prefix of a single subnet and the number of bits
are used for the host ID. It therefore allocates the number of networks and the number
of hosts per network.
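As a sketch of the classful scheme referred to above, the classification can be read off the first octet. The first-octet ranges used here are the standard classful ones, stated as an assumption rather than taken from the text:

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address by its first octet (classful addressing)."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"   # 0.x.x.x   - 127.x.x.x
    if first < 192:
        return "B"   # 128.x.x.x - 191.x.x.x
    if first < 224:
        return "C"   # 192.x.x.x - 223.x.x.x
    if first < 240:
        return "D"   # 224.x.x.x - 239.x.x.x, multicast
    return "E"       # 240.x.x.x - 255.x.x.x, reserved

print(address_class("192.168.10.25"))  # C
print(address_class("224.0.0.1"))      # D
```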

Subnetting for IP Addresses


It must be remembered that over the past several years, the Internet has grown to
enormous volume in terms of hosts connected to it, and the IPv4 addresses still
available are becoming scarce. You may be confused here, since 32 bits give 2^32 unique
addresses, which comes to around 4.3 billion different addresses. But this capacity is
not achieved in practice because of the different classes of IPv4 addresses. Suppose a
medium-sized organization gets a Class B address based on its current user population
of, say, 1000. It uses 1000 different addresses, but the organization's management has
the ability to assign 2^16 = 65,536 different identifiers. This means 64,536 addresses are
wasted, and since they all belong to the same Class B network number, they cannot be
reclaimed by any other organization.
A network administrator may suggest using Class C network addresses instead, which
would require at least four Class C networks. Later on, if the number of users increases
and the organization applies for another Class C network, it might not get one, or if it
does, only after considerable paperwork and delay. In addition, there is another angle
to this problem with regard to routing: with many Class C networks, there are more
network numbers for routers to track, and consequently the performance of the network
deteriorates. The solutions to these problems lie either in increasing the number of bits
in the IP address or in Classless Inter-Domain Routing (CIDR).
You may also use a technique called subnetting to efficiently divide the address
space allocated to an organization among the different users spread across the different
subnets of the organization's network. Subnetting is therefore a process through which
the address space of a unicast address prefix is efficiently divided for allocation among
the subnets of an organization network. As you know, a unicast address prefix has fixed
and variable portions. The fixed portion of a unicast address prefix has a defined value;
the variable portion consists of the bits beyond the prefix length, which are set to 0.
In order to implement subnetting, you need to follow some guidelines:
 Assess the number of subnets required.
 Assess the number of host IDs required for each subnet.
After this, a set of subnetted address prefixes with a range of valid IP addresses may
be defined. The following steps are followed for subnetting:
1. Estimate the number of host bits for the subnetting.
2. Determine the new subnetted address prefixes.
3. Determine the range of IP addresses for each new subnetted address prefix.
You may now learn how the subnet prefix of an IP address is determined. The
following steps give you a way to determine it without the use of binary numbers:
1. Write the number n (the prefix length) as the sum of 4 numbers by successively
   subtracting 8 from n. Example: 22 is 8+8+6+0.
2. In a table with four columns and three rows, place the decimal octets of the IP
   address in the first row. The second row then contains the four numbers of the sum
   determined in step 1.
3. For each column having 8 in the second row, copy the corresponding octet from the
   first row to the third row. For a column having 0 in the second row, place 0 in the
   third row.
4. For a column having a number between 0 and 8 in the second row, convert the
   decimal number in the first row to binary. Select the high-order bits for the number
   of bits indicated in the second row, set the remaining bits to zero, and convert back
   to a decimal number. This is the entry for that column.
For our example, the entry in the third column of the first row is 10, whose binary
equivalent is 00001010. The third column of the second row has 6, which means you
take the 6 high-order bits as they are and set the remaining two bits to 00. This gives
the binary number 00001000, which is the decimal equivalent of 8. Therefore, the
entry 8 goes in that column.


192   168   10   25
  8     8    6    0
192   168    8    0

This gives the subnet prefix for the IPv4 address configuration 192.168.10.25/22 as
192.168.8.0/22.
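The four-step procedure above can be sketched in code. This is an illustrative helper, not code from the text:

```python
def subnet_prefix(address: str, prefix_len: int) -> str:
    """Compute the subnet prefix of `address` for a `prefix_len`-bit prefix,
    octet by octet (mirroring the 8+8+6+0 decomposition above)."""
    result = []
    remaining = prefix_len
    for octet in (int(o) for o in address.split(".")):
        bits = min(8, max(0, remaining))   # step 1: e.g. 22 -> 8, 8, 6, 0
        remaining -= bits
        # steps 3-4: keep the high-order `bits` bits, zero the rest
        mask = (0xFF << (8 - bits)) & 0xFF
        result.append(str(octet & mask))
    return ".".join(result)

print(subnet_prefix("192.168.10.25", 22))  # 192.168.8.0
```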
Now, you have to extract the subnet prefix from an arbitrary IPv4 address using an
arbitrary subnet mask. For this purpose the mathematical operation logical AND is used:
a bit-by-bit logical comparison between the 32-bit IP address and the 32-bit subnet mask
yields the subnet prefix. Example: Consider the following possible addresses for Class C.

Class C Network   Bit Representation                     Address Range
210.195.8.0       11010010-11000011-00001000-xxxxxxxx    210.195.8.0-210.195.8.255
210.195.9.0       11010010-11000011-00001001-xxxxxxxx    210.195.9.0-210.195.9.255
210.195.10.0      11010010-11000011-00001010-xxxxxxxx    210.195.10.0-210.195.10.255
210.195.11.0      11010010-11000011-00001011-xxxxxxxx    210.195.11.0-210.195.11.255

Figure 7.26: Possible Addresses for Class C

These Class C networks define the contiguous set of addresses from 210.195.8.0
to 210.195.11.255. On examining these addresses, you can observe that the first 22 bits
are the same for each address. This means that each of these Class C networks has a
22-bit network number followed by a 10-bit local identifier for hosts. A router can then
extract the network number using a logical AND operation between a 22-bit subnet
mask and an IP address. For this example, a router can represent the four networks
using the single entry 210.195.8.0/22, where /22 indicates that the network number is
22 bits long. Likewise, a 210.195.8.0/20 entry would match the first 20 bits, and so on.
This indicates that you are grouping different smaller networks together and treating
them the same for routing purposes.
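The logical AND extraction described above can be sketched directly. A /22 prefix corresponds to the mask 255.255.252.0; the example hosts are my own, not from the text:

```python
def apply_mask(ip: str, mask: str) -> str:
    """Extract the subnet prefix: bitwise AND of address and mask, per octet."""
    return ".".join(str(int(a) & int(m))
                    for a, m in zip(ip.split("."), mask.split(".")))

# All four Class C networks above share their first 22 bits, so a router
# can represent them with the single entry 210.195.8.0/22.
for host in ("210.195.8.1", "210.195.9.77", "210.195.11.254"):
    print(apply_mask(host, "255.255.252.0"))  # 210.195.8.0 each time
```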

7.9.4 Routing IP Datagrams


It is important for you to know that the critical functions of the IP datagram, that is,
encapsulation and addressing, are often compared to putting a letter in an envelope
and then writing the recipient's address on it. Once our IP datagram "envelope" is
filled and labelled, it is ready to go, but it is still sitting on our desk. The last of the
major functions of IP is to get the envelope from us to our intended recipient. This is
the process of datagram delivery. When the recipient is not on our local network, this
delivery requires that the datagram be routed from our network to the one where the
destination exists.

IP Datagram Direct Delivery and Indirect Delivery (Routing)


The general job of IP is to transfer messages from higher-layer protocols over an
internetwork of devices.
The delivery process can be either simple or complex, depending on the proximity of
the source and destination devices.
Theoretically, all IP datagram deliveries can be divided into two general types:
Direct Datagram Deliveries: When datagrams are sent between two devices on the
same physical network, it is possible for datagrams to be delivered directly from the
source to the destination. Example: Suppose that you want to deliver a letter to a
neighbour on your street. You probably would not bother mailing it through the post
office; you would just put the name of the neighbour on the envelope and stick it right
into his or her mailbox.

Indirect Datagram Deliveries: When two devices are not on the same physical
network, the delivery of datagrams from one to the other is indirect. As the source
device cannot see the destination on its local network, it must send the datagram via
one or more intermediate devices in order to deliver it. Example: Indirect delivery is
similar to mailing a letter to a friend in a different city. You do not deliver it yourself;
you put it into the postal system. The letter travels through the postal system, perhaps
taking numerous intermediate steps, and ends up in your friend's neighbourhood,
where a postal carrier puts it into his or her mailbox.

IP Routes and Routing Tables


You must understand that routers are responsible for forwarding traffic on an IP
internetwork. Every router accepts datagrams from various sources, examines the IP
address of the destination and decides what next hop the datagram requires to get
that much closer to its final destination. A question then naturally arises: how does a
router know where to send different datagrams?
Every router holds a set of information that provides a mapping between different
network IDs and the other routers to which it is connected. This information is
contained in a data structure usually known as a routing table. Every entry in the table,
typically known as a routing entry, provides information about one network. Whenever
a datagram is received, the router checks its destination IP address against the routing
entries in its table to decide where to send the datagram, and then sends it on its next
hop.
Clearly, the fewer the entries in this table, the faster the router can decide what to
do with datagrams. Some routers have connections to just two other devices, so they
do not have much of a decision to make: usually, the router will simply take datagrams
coming from one of its interfaces and, if required, send them out on the other one.
Example: Consider the router of a small company acting as the interface between
a network of three hosts and the Internet. Any datagrams sent to the router from a host
on this network will need to go over the router's connection to the router at the ISP.
You must keep in mind that when a router has connections to more than two
devices, things become significantly more complex. Some distant networks may be
more easily reachable if datagrams are sent via one of the routers rather than another.
The routing table includes information not only about the networks directly connected
to the router, but also information that the router has "learned" about more distant
networks. Example: Routing Tables in an Internetwork.
Let us suppose an example with routers R1, R2 and R3 connected in a "triangle",
so that every router can send directly to the others, as well as to its own local network.
Suppose the local network of R1 is 11.0.0.0/8, R2's is 12.0.0.0/8 and R3's is 13.0.0.0/8.
R1 knows that any datagram it sees with 11 as the first octet is on its local network.
As you can see in Figure 7.27, a small, simple internetwork is shown which consists
of four LANs, each served by a router. The routing table for each lists the router to which
datagrams for each destination network should be sent, and is colour coded to match
the colours of the networks. Notice that because of the "triangle", each of R1, R2 and
R3 can send to the others. However, R2 and R3 must send via R1 to deliver to R4, and
R4 must make use of R1 to reach either of the others.
Now assume that R1 also connects to another router, R4, which has 14.0.0.0/8 as
its local network. R1 will have an entry for this local network. However, R2 and R3 also
need to know how to reach 14.0.0.0/8, even though they do not connect to its network
directly. Most probably, they will have an entry that says that any datagrams intended
for 14.0.0.0/8 should be sent to R1. R1 will then forward them to R4. Likewise, R4 will
send any traffic intended for 12.0.0.0/8 or 13.0.0.0/8 through R1.


Source: http://www.tcpipguide.com/free/t_IPRoutesandRoutingTables-2.htm

Figure 7.27: IP Routing and Routing Tables
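A routing-table lookup like the one described above can be sketched as follows. The table contents mirror the R2 perspective in the example (R4's 14.0.0.0/8 network is reached via R1); the code itself is illustrative:

```python
import ipaddress

# Hypothetical routing table for router R2 from the example above
ROUTING_TABLE = [
    ("12.0.0.0/8", "direct"),  # R2's own local network
    ("11.0.0.0/8", "R1"),
    ("13.0.0.0/8", "R3"),
    ("14.0.0.0/8", "R1"),      # learned route: R4's network is reached via R1
]

def next_hop(destination: str) -> str:
    """Check the destination address against each routing entry in turn
    and return the next hop for the first matching network."""
    addr = ipaddress.ip_address(destination)
    for prefix, hop in ROUTING_TABLE:
        if addr in ipaddress.ip_network(prefix):
            return hop
    return "default"

print(next_hop("14.3.2.1"))  # R1
print(next_hop("12.0.0.9"))  # direct
```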

7.10 ICMP
This section emphasises the Internet Control Message Protocol (ICMP). ICMP, you
can say, is an error reporting protocol that is an integral part of the IP protocol. ICMP
communicates control data, information data, and error recovery data across the
network. Problems that are less severe than transmission errors result in error conditions
that can be reported. Example: Suppose some of the physical paths in the Internet fail,
causing the Internet to be partitioned into two sets of networks with no path between the
sets; a datagram sent from a host in one set to a host in the other cannot be delivered.
The TCP/IP suite includes a protocol called ICMP that IP uses to send error
messages when a condition such as the one described above arises. The protocol is
required for a standard implementation of IP. You will see that the two protocols are
co-dependent: IP uses ICMP when it sends an error message, and ICMP uses IP to
transport messages.
You must keep in mind the following brief descriptions of some of the error
messages defined by the ICMP protocol:
Source Quench: A router or host whose receive communication buffers are nearly
full normally triggers this message. A source quench message is sent to the sending
host; the receiver is simply requesting that the sending host reduce the rate at which it
is transmitting until advised otherwise.
Time Exceeded: A time-exceeded message is sent in two cases. Whenever a
router reduces the TTL field in a datagram to zero, the router discards the datagram
and sends a time-exceeded message. In addition, a time-exceeded message is sent by
a host if the reassembly timer expires before all fragments from a given datagram
arrive.
Route Redirect: A router sends this message to a computer system that is
requesting its routing services. When a computer system creates a datagram destined
for a network, it sends that datagram to a router, which then forwards it towards its
destination. If a router finds out that a computer system has incorrectly sent it a
datagram that should have been sent to a different router, the router uses a route
redirect message to tell the computer system to change its route. In this way, a route
redirect message improves the efficiency of the routing process by informing the
requesting host of a shorter path to the desired destination.
Host Unreachable: Whenever a gateway or a router determines that a datagram
cannot be delivered to its final destination (due to link failure or bandwidth congestion),
an ICMP host unreachable message is sent to the originating node on the network.
Normally the message includes the reason the host cannot be reached.
Fragmentation and Reassembly: The largest datagram the IP protocol can handle
is 64 Kbytes. The maximum datagram size is dictated by the width of the total length
field in the IP header. Realistically, most underlying data link technologies cannot
accommodate this data size. Example: The maximum size of the data frame supported
by Ethernet is 1,514 bytes.
Unless something is done about situations like this, IP would have to discard data
delivered to it from upper-layer protocols with sizes exceeding the maximum size
tolerable by the data link layer. To circumvent this difficulty, IP is built to provide
data fragmentation and reassembly.
Whenever an upper-layer protocol delivers data segments whose sizes exceed the
limit allowed by the underlying network, IP breaks the data into smaller pieces that are
manageable within the allowed limit. The small datagrams are then sent to the target
host, which reassembles them for subsequent delivery to an upper-layer protocol.
Data fragments usually take the same route, but there are instances when they may
take alternate routes too. Fragments traversing different routes may reach their
destination out of the order in which they were sent. To allow for recovery from such
behaviour, IP employs the fragmentation-offset field in its header. The fragmentation-
offset field includes sequencing information that the remote IP host uses to recover the
sequence in which the datagrams were sent; it also contains information that IP uses
for detecting missing fragments. Data is passed to the protocol described in the protocol
field only when all related fragments are duly received and reordered; this is referred to
as data reassembly.
Fragments belonging to two or more independent large data units are differentiated
by IP using the identification field: fragments of the same datagram are assigned the
same unique value in the identification field. The receiving end uses this number to
assign the IP fragments to their respective datagrams.
A host that creates a datagram can set a bit in the flag field to indicate that more
fragments follow. This bit is set to 1 in all fragments belonging to a datagram except for
the final fragment, which ensures that the receiver can tell when all fragments of a
datagram have been received.
Echo Request/Echo Reply: These two ICMP messages are exchanged between the
ICMP software on any two hosts to check connectivity between them. Example:
The ping command is a diagnostic command commonly used by network users to check
the reachability of a certain host. On invoking this command, an ICMP echo request
message is sent to the target host. The target host, provided it is operational and
connected to the network, responds with an echo reply as proof of reachability; the
reply carries the same data as the request.
Address Mask Request/Reply: A host broadcasts an address mask request when
it boots, and routers that receive the request send an address mask reply that contains
the correct 32-bit subnet mask being used on the network.
ICMP uses IP to transport each error message. When a router has an ICMP
message to send, it creates an IP datagram and encapsulates the ICMP message in
the datagram; that is, the ICMP message is placed in the data area of the IP
datagram. The datagram is then forwarded as usual, with the complete datagram being
encapsulated in a frame for transmission. Figure 7.28 illustrates the two levels of data
encapsulation.

              ICMP Header | ICMP Data Area
       IP Header | IP Data Area (carries the entire ICMP message)
Frame Header | Frame Data (carries the entire IP datagram)

Figure 7.28: Two Levels of Encapsulation in case of ICMP Datagram Transmission
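As a sketch of what the inner ICMP layer of Figure 7.28 looks like on the wire, the fragment below builds an ICMP echo request (type 8, code 0) with the standard Internet checksum. The field layout follows RFC 792, but the helper names are my own:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by IP and ICMP."""
    if len(data) % 2:
        data += b"\x00"                    # pad to a 16-bit boundary
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                     # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP echo request: type 8, code 0, checksum, identifier, sequence.
    This message would sit in the data area of an IP datagram."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field zeroed
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = echo_request(1, 1, b"ping")
print(internet_checksum(msg))  # 0: a correctly checksummed message sums to zero
```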

7.11 Summary
The network layer transports packets from sending to receiving hosts via the internet. A switch is
used to interconnect different hosts via its several inputs and outputs. A circuit
switching session comprises three phases: circuit establishment, data transfer and
circuit disconnect. Packet switching makes it possible to transmit the same information to more
than one receiver at the same time. In message switching, there is no need for a
connection to be established all the way from source to destination. Cell switching is
considered to be a high speed switching technology to overcome the speed problems
for real time applications. IP routing protocol makes the distinction between hosts and
gateways. A host is the end system to which data is ultimately delivered. An IP gateway
is the router that accomplishes the act of routing data between two networks. The
routing algorithms decide which output line an incoming packet should be transmitted
on. Non-adaptive algorithms are independent of the volume of the current traffic and
topology. It includes Shortest Path Routing, Flooding, etc. Adaptive algorithms are
capable of changing their routing decisions to reflect changes in the topology and the
traffic. It includes Distance Vector Routing, link state routing, etc. Internetworking is a
scheme for interconnecting multiple networks of dissimilar technologies. Internet is not a
single network, but a collection of networks. The networks connect users to the Internet
using special devices that are called gateways or routers. To interconnect networks
different hardware is required at different layers. The basic idea behind the virtual circuit
is to building up an internetwork connection by concatenating a series of internetwork
and gateway to gateway virtual circuits. IP protocol is suitable for the sporadic
transmission of data to a number of destinations. Internet Protocol (IP) provides
unreliable, connectionless packet delivery for the Internet. IPv4 addresses are uniquely
used as identifiers, which work at network layer to identify the source or destination of
IP packets. ICMP communicates control data, information data, and error recovery data
across the network. Congestion is a global phenomenon involving all hosts, all routers,
the store-and-forward processing within the routers, etc. The traffic management facility
allows maximizing available network resources and ensures efficient use of resources
that have not been explicitly allocated. The traffic shaping approach includes
transmission of packets at uniform rate and in more predictable rate in case of open
loop method. The leaky bucket algorithm finds its use in the context of network traffic
shaping or rate limiting. A token bucket algorithm finds its uses in the context of network
traffic shaping or rate limiting. In FIFO queuing, the first packet that arrives at a router is
the first packet which is to be transmitted. The purpose of Fair Queuing is to maintain a
separate queue for every flow presently being managed by the router. Congestion
avoidance algorithms exploit the additive increase multiplicative decrease method of
TCP to avoid congestion by attempting to force the sender hosts to reduce their
congestion windows. In the DECbit method, routers set a binary congestion bit in the
packet when the network is about to experience congestion. Random early detection
(RED) is designed to be utilised in conjunction with TCP, which presently detects
congestion by using timeouts.

7.12 Check Your Progress
Multiple Choice Questions
1. The network layer concerns with
(a) Bits
(b) Frames
(c) Packets
(d) None of the mentioned
2. Which one of the following is not a function of network layer?
(a) Routing
(b) Inter-networking
(c) Congestion control
(d) None of the mentioned
3. The 4 byte IP address consists of
(a) Network address
(b) Host address
(c) Both (a) and (b)
(d) None of the mentioned
4. In virtual circuit network each packet contains
(a) Full source and destination address
(b) A short VC number
(c) Both (a) and (b)
(d) None of the mentioned
5. Which one of the following routing algorithm can be used for network layer design?
(a) Shortest path algorithm
(b) Distance vector routing
(c) Link state routing
(d) All of the mentioned
6. Multi destination routing
(a) Is same as broadcast routing
(b) Contains the list of all destinations
(c) Data is not sent by packets
(d) None of the mentioned
7. A subset of a network that includes all the routers but contains no loops is called
(a) Spanning tree
(b) Spider structure
(c) Spider tree
(d) None of the mentioned
8. Which one of the following algorithm is not used for congestion control?
(a) Traffic aware routing
(b) Admission control
(c) Load shedding
(d) None of the mentioned


9. The network layer protocol of internet is


(a) Ethernet
(b) Internet protocol
(c) Hypertext transfer protocol
(d) None of the mentioned
10. ICMP is primarily used for
(a) Error and diagnostic functions
(b) Addressing
(c) Forwarding
(d) None of the mentioned

7.13 Questions and Exercises


1. Discuss when a network is said to be congested.
2. Describe various reasons for congestion.
3. Differentiate between congestion control and flow control.
4. Discuss the functions of network layer.
5. What are the functions of a switch? Discuss.
6. Discuss the issues related with Distance Vector Routing.
7. Discuss the concept of routing in the internetwork.
8. Explain internetworking using concatenated virtual circuits.
9. Discuss the features of Internet Protocol.

7.14 Key Terms


 Backbones: These are large networks that exist primarily to interconnect other
networks.
 Datagram: The term 'datagram' is used to describe a chunk of IP data.
 Exterior Gateway Protocols (EGP): The routing algorithms used between
dissimilar networks are called Exterior Gateway Protocols (EGP).
 Congestion: Congestion is a global phenomenon involving all hosts, all routers, the
store-and-forward processing within the routers.
 DECbit: It is a method in which routers set a binary congestion bit in the packet
when the network is about to experience congestion.
 FIFO: It is a scheduling discipline which determines the order in which packets are
transmitted: the first packet to arrive at the router is the first packet to be transmitted.

Check Your Progress: Answers


1. (c) Packets
2. (d) None of the mentioned
3. (c) Both (a) and (b)
4. (b) A short VC number
5. (d) All of the mentioned
6. (b) Contains the list of all destinations
7. (a) Spanning tree
8. (d) None of the mentioned
9. (b) Internet protocol
10. (a) Error and diagnostic functions

7.15 Further Readings
 Prakash C. Gupta, Data Communications and Computer Networks, 2nd edition, PHI
Learning Pvt. Ltd., 2014
 Sanjay Sharma, Digital communication, S.K. Kataria & Sons, 2010
 Anurag Kumar, D. Manjunath, Joy Kuri, Communication Networking: An Analytical
Approach, Academic Press, 2004
 Sanjay Sharma, Communication System: Analog and Digital, S.K. Kataria & Sons,
2012
 V.S. Bagad, I.A. Dhotre, Computer Networks – II, Technical Publications, 2009


Unit 8: Transport Protocol


Structure
8.1 Introduction
8.2 Transport Layer Functions
8.2.1 Services Provided to the Upper Layers
8.2.2 Transport Service Primitives
8.3 The Internet Transmission Protocol
8.3.1 User Datagram Protocol (UDP)
8.3.2 Transmission Control Protocol
8.4 Elements of Transport Protocol
8.4.1 Reliable Delivery Service
8.4.2 Connection Establishment/Release
8.4.3 Flow Control
8.4.4 Error Control
8.4.5 Multiplexing/Demultiplexing
8.4.6 Addressing
8.5 Performance Issues
8.6 Summary
8.7 Check Your Progress
8.8 Questions and Exercises
8.9 Key Terms
8.10 Further Readings

Objectives
After studying this unit, you should be able to:
 Identify the functions of transport layer
 Describe the key protocols of the transport Layer
 Compare Transmission Control Protocol (TCP) and User Datagram Protocol
(UDP)
 Describe elements of transport protocol

8.1 Introduction
In this lesson, you will study the functions of the transport layer. The transport layer
relieves the upper layers from any concern with providing reliable and cost-effective data
transfer. It facilitates end-to-end control and information exchange with the quality of
service required by the application program. Thus, layer four of the OSI reference
model is the transport layer that provides transparent transfers of data between the
source and destination machines using the services of the network layer such as IP. It
enables reliable internetworking data transport services that are transparent to upper
layers. The transport layer protocol administers end-to-end control and error checking to
ensure complete data transfer. The transport layer’s job also includes breaking the
messages from the session layer into segments. This lesson will cover the concept of
the user datagram protocol and transmission control protocol. You will also learn about
elements of transport protocol.
It offers transparent transfer of data among end systems by using the services of
the network layer below to move PDUs of data among the two communicating systems.
The transport layer relieves the upper layers from any concern with providing reliable
and cost effective data transfer. It offers end-to-end control and information transfer with
the quality of service required by the application program.

8.2 Transport Layer Functions


Now let us begin the lesson with the transport layer functions. The basic function of the
transport layer is to respond to service requests from the session layer and issue
service requests to the network layer. To accomplish this task, it accepts data from the
session layer, splits it up into smaller units if required, passes these smaller units to the
network layer and ensures that the packets of data are reassembled correctly at the
destination machine. The transport layer intends to perform all these functions efficiently
and keep the session layer isolated from the necessary changes in the hardware
technology. The transport layer provides the following services:

8.2.1 Services Provided to the Upper Layers


It is important to understand that the transport layer, in conjunction with the network
layer, intends to provide efficient, reliable and cost-effective services to its users through
processes in the application layer. The software and hardware used in the transport
layer are referred to as the transport entity. The transport entity is located either in the
kernel of the operating system, in the network interface card, in a user process, or in a
library package meant for network applications.
Like the network layer, the transport layer also provides connection-oriented and
connectionless services. Under normal conditions, in both cases, a connection is
established to transfer data and after successful transfer of data the connection is
released. The transport layer establishes a distinct network connection for each transport
connection required by the session layer. When a transport connection needs high
throughput, the transport layer can establish multiple network connections and divide
the data among them to improve throughput. It can also manage bandwidth, and thus
reduce the cost of establishing a connection, by multiplexing several transport
connections onto the same network connection. The multiplexing always remains
transparent to the session layer.

Quality of Service
You must note that transport layer bridges the gap of the services provided by the
network and therefore enhances the quality of service provided to the users. The
possible parameters for the quality of service as offered by the transport layer are
connection establishment delay, connection establishment failure probability,
throughput, transit delay, residual error ratio, protection priority, resilience, etc.
 Connection establishment delay: It is the amount of time when an
acknowledgement is received from the destination machine to which a connection is
requested. Obviously, lesser is the delay, better is the service.
 Connection establishment failure probability: The chance that a connection is not
established within the maximum establishment delay, due to network congestion,
lack of table space, internal problems, etc.
 Throughput: The number of bytes of user data transferred per second, measured
over a defined time interval. It is measured separately for each communication link.
 Transit delay: The time between the transmission of data by the source machine
and the reception of the same data by the destination machine. Like throughput, it
is measured separately for each communication link.

 Residual error ratio: The fraction of data lost, relative to the total data sent over
the network by the source machine.
 Protection: The capability of the transport layer to guard against third parties who
try to interfere with the data.
 Priority: Specifies the relative importance of connections, so that high-priority
connections are served before low-priority connections in the event of congestion.
 Resilience: The probability of the transport layer itself spontaneously terminating a
connection, for example in the case of congestion.
The transport layer cannot always fulfil all of the parameters mentioned above. It
tries to implement a trade-off among the quality-of-service parameters; this process is
called option negotiation.
8.2.2 Transport Service Primitives
Transport service primitives are used by the application layer, or by users, to access
transport services. Each transport service is invoked through a distinct primitive. The
network layer provides an unreliable service, whereas the transport layer attempts to
provide a reliable service on top of that unreliable service. Example: Some examples of
transport primitives are listed below along with their functions:
 LISTEN: Broadcast willingness to accept connections and provide queue size.
 ACCEPT: Block the caller unless a communication attempt arrives.
 CONNECT: Actively try to establish a connection.
 SEND: Send data over the connection.
 RECEIVE: Receive data from the connection.
 CLOSE: Release the connection.
In the client-server architecture, one machine (the client) requests another machine (the
server) to create a connection for providing some service. The services running on the
server run on ports; ports are application identifiers. To obtain a service, the client must
know the address of the server machine and the port of the desired service in order to
connect to the server. The server machine, however, need not know the
address or the port of the client machine at the time of connection initiation: the first
packet transmitted by the client machine as a request to the server contains
details about the client, which the server then uses to send any information back.
The client machine acts as the active device which makes the first move to establish the
connection, whereas the server machine passively waits for such requests from some
client.
The transport layer uses the network layer primitives to send and receive TPDUs.
Transport Protocol Data Unit (TPDU) is the term used for the units of data exchanged
between transport entities.
The transport entity resides in:
 the host operating system kernel,
 a separate user process,
 a package of library routines running within the user’s address space, or
 a coprocessor chip or network board plugged into the host’s backplane.
The interface to the network layer is given as below:
 to_net(int cid, int q, int m, pkt_type pt, unsigned char *p, int bytes);
 from_net(int *cid, int *q, int *m, pkt_type *pt, unsigned char *p, int *bytes);
The network layer packets that are used are given below:
 CALL REQUEST: Sent to establish a connection
 CALL ACCEPTED: Response to CALL REQUEST
 CLEAR REQUEST: Sent to release a connection
 CLEAR CONFIRMATION: Response to CLEAR REQUEST
 DATA: Used to transport data
 CREDIT: Control packet for managing the window
When information is passed as procedure parameters rather than as the actual outgoing or
incoming packet itself, the transport layer is shielded from the details of the network
layer protocol. The transport entity suspends transparently within to_net until there is
room in the window. Apart from this transparent suspension mechanism, the explicit
procedures called by the transport entity to block/unblock itself are given below:
 sleep(): This procedure is called when the transport entity logically needs to
wait for an external event to happen. After calling the sleep procedure, the
(main stream of the) transport entity is blocked.
 wakeup(): This procedure is called by the event handling procedure (i.e.,
packet_arrival() ) to unblock the sleeping (main stream of the) transport entity.
 User programs call most of the procedures in the transport entity directly.
However, two procedures are effectively (software) interrupt routines and
are called only when the main stream of the transport entity is sleeping. They
are given as below:
 packet_arrival(): This is triggered by the packet arrival event; it is invoked by
the underlying network layer.
 clock(): The clock ticking event triggers this procedure.
A flow control mechanism based on credit is used in the example transport entity:
 When an application calls RECEIVE, a special credit message is sent to the
transport entity on the source machine and is recorded in the conn array.
 When SEND is called, the transport entity checks whether a credit has been
received on the specified connection.
If so, the message is transmitted (in multiple packets, if needed) and the credit
decremented;
if not, the transport entity puts itself to sleep until a credit arrives.
In the transport entity, each connection is in one of the following seven
states:
 Idle: Connection not established yet.
 Waiting: CONNECT has been executed, CALL REQUEST sent.
 Queued: A CALL REQUEST has arrived; no LISTEN yet.
 Established: The connection has been established.
 Sending: The user is waiting for permission to send a packet.
 Receiving: A RECEIVE has been done.

 Disconnecting: A DISCONNECT has been done locally.
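The seven states and the primitives that move a connection between them can be modeled as a small table-driven state machine. The sketch below is a simplified, assumed transition set built from the primitives and network-layer packet types listed earlier in this section, not an exact specification of the example transport entity:

```python
# Simplified connection state machine for the example transport entity.
# The transition table is an illustrative assumption, not the book's
# exact specification.
TRANSITIONS = {
    ("IDLE", "CONNECT"): "WAITING",           # CALL REQUEST sent
    ("IDLE", "CALL_REQUEST"): "QUEUED",       # request arrived, no LISTEN yet
    ("WAITING", "CALL_ACCEPTED"): "ESTABLISHED",
    ("QUEUED", "LISTEN"): "ESTABLISHED",
    ("ESTABLISHED", "SEND"): "SENDING",       # waiting for permission to send
    ("ESTABLISHED", "RECEIVE"): "RECEIVING",
    ("SENDING", "CREDIT"): "ESTABLISHED",     # credit arrived, data went out
    ("RECEIVING", "DATA"): "ESTABLISHED",     # data arrived for the RECEIVE
    ("ESTABLISHED", "DISCONNECT"): "DISCONNECTING",
    ("DISCONNECTING", "CLEAR_CONFIRMATION"): "IDLE",
}

def step(state: str, event: str) -> str:
    """Return the next state, or raise on an event illegal in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} illegal in state {state!r}")

# A connection's life: establish, exchange data, release.
s = "IDLE"
for ev in ("CONNECT", "CALL_ACCEPTED", "SEND", "CREDIT",
           "DISCONNECT", "CLEAR_CONFIRMATION"):
    s = step(s, ev)
print(s)  # back to IDLE
```

Driving every state change through one table keeps the entity's logic in one place, which is why real protocol implementations are often written this way.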


The packets are contained in the frames exchanged by the data link layer. At the
destination machine, when a frame arrives, the data link layer processes the frame
header and passes the contents of the frame payload field up to the network entity; a
similar process takes place at the network layer. Example: The above situation may be
understood from an example in which a remote machine, say a client, requests
another machine, say a server, for a connection. The client machine issues a
CONNECT TPDU to the server. The server has already issued LISTEN, blocking
itself until a client machine turns up. On receiving the CONNECT TPDU, the server is
unblocked and a CONNECTION ACCEPTED TPDU is sent back to the client machine,
which establishes the connection and unblocks the client machine too. After this, the
SEND and RECEIVE primitives enable the exchange of data.
You must keep in mind the following steps implemented by the client machine to
establish the connection:
 Create a socket
 Connect the socket to the address of the server machine
 Send/Receive data
 Close the socket
Following are the steps implemented by the server machine to establish the connection:
 Create a socket
 Bind the socket to the port number known to all clients
 Listen for the connection request
 Accept connection request
 Send/Receive data
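Both step lists map directly onto the Berkeley sockets API. The sketch below runs the server end in a background thread of the same process so that the whole exchange is self-contained; binding to port 0 (letting the OS pick a free port) is an implementation convenience, not part of the steps above:

```python
import socket
import threading

def run_server(srv: socket.socket) -> None:
    conn, _addr = srv.accept()        # accept connection request
    with conn:
        data = conn.recv(1024)        # receive data
        conn.sendall(data.upper())    # send a reply

# Server side: create a socket, bind, listen.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
srv.listen(1)                         # listen for the connection request
port = srv.getsockname()[1]
threading.Thread(target=run_server, args=(srv,), daemon=True).start()

# Client side: create a socket, connect, send/receive, close.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
srv.close()
print(reply)  # b'HELLO'
```

In a real deployment the server would bind to a well-known port and loop around accept(), serving many clients; here one exchange is enough to illustrate the primitives.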
Other Implementation Details
Quiet Time
It might happen that a host currently in communication crashes and reboots. At startup
time, all the data structures and timers are reset to their initial values. To make sure that
packets from earlier connections are gracefully rejected, the local host is not allowed to
make any new connection for a small period after startup. This period is set in
accordance with the reboot time of the operating system.
Initial Sequence Number
The initial sequence number used in TCP communication is initialized at boot time to a
random value, rather than to 0. This ensures that packets from an old connection do not
interfere with a new connection. The recommended method is to:
 Initialize the ISN at boot time by a random number
 For every 500 ms, increment ISN by 64K
 With every SYN received, increment ISN by 64K
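The three rules above can be sketched as a tiny ISN generator. The class name and the wrap-around at 2^32 (the size of TCP's sequence-number space) are illustrative assumptions:

```python
import random

ISN_INCREMENT = 64 * 1024    # 64K, per the rules above

class ISNGenerator:
    """Illustrative ISN source: random at boot, bumped by 64K every
    500 ms and on every SYN, modulo the 32-bit sequence space."""

    def __init__(self, seed=None):
        self.isn = random.Random(seed).getrandbits(32)  # random at boot time

    def clock_tick(self):        # to be called every 500 ms
        self.isn = (self.isn + ISN_INCREMENT) % 2**32

    def on_syn(self):            # to be called for every SYN received
        self.isn = (self.isn + ISN_INCREMENT) % 2**32

gen = ISNGenerator(seed=1)
before = gen.isn
gen.clock_tick()
gen.on_syn()
print((gen.isn - before) % 2**32)   # 131072, i.e. two 64K increments
```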
Maximum Request Backlog at Server
As we have seen in Unix network programming, listen(sd, n) sets a maximum on the
number of pending connection requests the server will hold at any time. So if there are
already n requests for connection and an (n+1)th request comes, two things can be done:
 Drop the packet silently
 Ask the peer to send the request later.
The first option is recommended. The assumption is that the queue overflow is
transient, and that some time later the server will be free to process new requests;
if we drop the packet, the client will go through time-out and retransmission, by which
time the server may be free to process it.
Also, standard TCP does not define any strategy or option for knowing who requested
the connection; only Solaris 2.2 supports such an option.
Delayed Acknowledgment
TCP piggybacks acknowledgments on its data. But if the peer does not have any
data to send at that moment, the acknowledgment should not be delayed too long.
Hence a 200 ms timer is used: every 200 ms, TCP checks for any
acknowledgments waiting to be sent and sends them as individual packets.
Small Packets
TCP implementations discourage small packets. In particular, if a relatively large
packet has been sent and no acknowledgment has been received so far, a subsequent
small packet is stored in the buffer until the situation improves.
But there are some applications for which delayed data is worse than bad data. For
example, in telnet, each keystroke is processed by the server, and hence no delay
should be introduced. As we have seen in Unix network programming, the TCP_NODELAY
socket option can be set so that small packets are not held back.
ICMP Source Quench
We have seen in ICMP that the ICMP Source Quench message is sent to ask the peer to
slow down. Some implementations discard this message, while a few set the current
window size to 1.
The latter, however, is not a very good idea.

Retransmission Timeout
In some implementations (e.g., Linux), RTO = RTT + 4 * (delay variance) is used
instead of the constant factor 2.
Also, instead of calculating RTT(est) from scratch, a cache is used to store the
history from which new values are calculated, as discussed in the previous classes.
Standard values for Maximum Segment Life (MSL) are between 0.5 and 2 minutes, and
the time-wait interval is a function of the MSL: Time wait state = f(MSL).
Keep Alive Timer
Another important timer in TCP is the keep-alive timer. It is basically used by a TCP peer
to check whether the other end is up or down; it periodically probes the connection. If the
other end does not respond, the connection is closed.
Persist Timer
As we saw in TCP window management, when the source has filled the receiver's
advertised window it must stop sending and wait for an ACK from the remote TCP that
opens the window again. Suppose such an ACK has been sent and is lost. The source
then still sees a window size of 0 and cannot send, while the destination is expecting the
next byte: a deadlock. To avoid such a deadlock, a persist timer is used. When this timer
goes off, the source sends a one-byte probe, in the hope that the situation has improved
and an ACK opening the window will be received.
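The deadlock and its resolution can be traced in a few lines. This is a toy model of the bookkeeping, not a real TCP implementation:

```python
# Toy trace of the zero-window deadlock and the persist-timer probe.
receiver_window = 4096   # space freed after the receiving app read data
sender_view = 0          # sender still believes the window is zero

window_update_lost = True    # the ACK re-opening the window is lost
if not window_update_lost:
    sender_view = receiver_window

# Deadlock: the sender cannot send, the receiver waits for data.
assert sender_view == 0

def persist_probe() -> int:
    """The persist timer fires and a 1-byte probe is sent; the ACK to
    the probe re-advertises the receiver's current window."""
    return receiver_window

sender_view = persist_probe()
print(sender_view)   # 4096: the deadlock is broken, sending can resume
```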

8.3 The Internet Transport Protocols

In this section, you will learn about the Internet transport protocols. The key protocols
of the transport layer are the Transmission Control Protocol (TCP) and the User Datagram
Protocol (UDP). TCP provides a reliable data delivery service with end-to-end error
detection and correction. UDP provides a low-overhead, connectionless datagram
delivery service. Both protocols deliver data between the session
layer and the network layer. Now, you will study UDP and TCP in turn.
8.3.1 User Datagram Protocol (UDP)
You must note that the User Datagram Protocol gives application programs direct
access to a datagram delivery service, like the delivery service that IP provides.
This enables applications to exchange messages over the network with a minimum of
protocol overhead. UDP is a connectionless, unreliable datagram protocol: the
sending terminal does not check whether data has been received by the receiving
terminal. "Unreliable" means there is no guarantee that the data reaches the
receiving end of the network correctly. You can see this more clearly in Figure
1.1.
However, this simplicity makes it possible to omit a variety of processes, thus reducing
the load on the CPU. UDP has 16-bit Source Port and Destination Port numbers. Figure
1.2 shows the data structure of the UDP header. The simplicity of the UDP header
stems from the unsophisticated nature of the services it provides.

Figure 1.1: Comparison between TCP and UDP


 0               16               31
 +----------------+----------------+
 |  Source Port   |Destination Port|
 +----------------+----------------+
 |     Length     |  UDP Checksum  |
 +----------------+----------------+
 |              Data               |
 +---------------------------------+
Figure 1.2: Format of the UDP Datagram

Figure 1.3: The correspondence between the UDP and IP Datagrams


The following is a brief description of each field:
Source Port: Specifies the port number of the application that generated the user
data.
Destination Port: As its name indicates, this identifies the destination application.
Length: It describes the total length of the UDP datagram, including both data and
header information.
UDP Checksum: It provides optional integrity checking.
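The four 16-bit fields can be packed with Python's struct module; the format string "!4H" means four unsigned 16-bit values in network (big-endian) byte order. The function name and the example port numbers are illustrative:

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes,
                     checksum: int = 0) -> bytes:
    """Build the 8-byte UDP header. Length counts header plus data;
    a checksum of 0 means 'checksum not used'."""
    length = 8 + len(payload)
    return struct.pack("!4H", src_port, dst_port, length, checksum)

hdr = build_udp_header(5000, 53, b"query")   # e.g. a DNS-style query
print(len(hdr))                   # 8: the UDP header is always 8 bytes
print(struct.unpack("!4H", hdr))  # (5000, 53, 13, 0)
```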
At this point, it is important to understand the layering concept along with the need for
headers. You can see the relationship between IP and UDP in Figure 1.3.
You must understand that there are a number of good reasons for choosing UDP as a
data transport service. When the amount of data being transmitted is small, UDP is
the most efficient choice for a transport layer protocol, because the
overhead of establishing connections and ensuring reliable delivery may be greater
than the work of re-transmitting the entire data. Applications that follow a query-response
model are also excellent candidates for UDP: the response serves as a positive
acknowledgment to the query. Example: Some examples of the usage of UDP are
remote file service (NFS), name translation (DNS), intra-domain routing (RIP), network
management (SNMP), multimedia applications and telephony.
See the example given below: Example: Voice Phone Call (VoIP). Various VoIP
(Voice over IP) systems use UDP for communication, because voice packets are
notoriously sensitive to slowdowns on a network. VoIP packets also do not need to
ensure that every packet reaches its destination. Each VoIP packet carries just a
little bit of audio from one phone to another, so if a tiny bit of audio goes missing during
a conversation it does not really matter. Nor would we want the phone to re-send that
audio: this would either delay the conversation for no good reason or play a
little bit of audio out of order, which would be more confusing than just dropping that bit
of audio and moving on with the conversation.
So UDP, like its cousin the Transmission Control Protocol (TCP), sits directly on top of
the base Internet Protocol (IP). In general, UDP implements a fairly "lightweight" layer
above the Internet Protocol. At first sight it seems that UDP and IP provide a similar
service, namely the transfer of data; but we need UDP for the multiplexing/demultiplexing
of addresses (ports).
UDP's main purpose is to abstract network traffic in the form of datagrams. A datagram
comprises one single "unit" of binary data; the first eight (8) bytes of a datagram contain
the header information and the remaining bytes contain the data itself.
UDP Headers
The UDP header consists of four (4) fields of two bytes each:
 source port number
 destination port number
 datagram size
 checksum

UDP port numbers allow different applications to maintain their own "channels" for data;
both UDP and TCP use this mechanism to support multiple applications sending and
receiving data concurrently. The sending application (that could be a client or a server)
sends UDP datagrams through the source port, and the recipient of the packet accepts
this datagram through the destination port. Some applications use static port numbers
that are reserved for or registered to the application. Other applications use dynamic
(unregistered) port numbers. Because the UDP port fields are two bytes long, valid


port numbers range from 0 to 65535; by convention, values above 49151 represent
dynamic ports.
The datagram size is a simple count of the number of bytes contained in the header and
data sections. Because the header length is a fixed size, this field essentially refers to
the length of the variable-sized data portion (sometimes called the payload). The
maximum size of a datagram varies depending on the operating environment. With a
two-byte size field, the theoretical maximum size is 65535 bytes. However, some
implementations of UDP restrict the datagram to a smaller number -- sometimes as low
as 8192 bytes.
UDP checksums work as a safety feature. The checksum value represents an encoding
of the datagram data that is calculated first by the sender and later by the receiver.
Should an individual datagram be tampered with (due to a hacker) or get corrupted
during transmission (due to line noise, for example), the calculations of the sender and
receiver will not match, and the UDP protocol will detect this error. The algorithm is not
fool-proof, but it is effective in many cases. In UDP, checksumming is optional --
turning it off squeezes a little extra performance from the system -- as opposed to TCP,
where checksums are mandatory. It should be remembered that checksumming is
optional only for the sender, not the receiver: if the sender has used a checksum, the
receiver is required to verify it.
Usage of the checksum in UDP is optional. If the sender does not use it, it sets
the checksum field to all 0's. If the sender does compute the checksum, the
recipient must also compute it and check the field accordingly. If the computed checksum
turns out to be all 0's, the sender sends all 1's instead of all 0's.
This is because, in the one's-complement arithmetic used by UDP for checksum
computation, a checksum of all 1's is equivalent to a checksum of all 0's. The checksum
field is thus unambiguous for the recipient: if it is all 0's, the checksum has not been
used; in any other case the checksum has to be computed and verified.
8.3.2 Transmission Control Protocol
TCP provides a connection-type service: a logical connection must be established
prior to communication. Because of this, continuous transmission of large amounts of
data is possible. TCP ensures highly reliable data transmission for upper layers using
the IP protocol. This is possible because TCP uses positive acknowledgement to confirm
to the sender the proper reception of data, as shown in Figure 1.4.
The sender retransmits data at intervals until it receives a positive
acknowledgement. A negative acknowledgment implies that the failed data segment
needs to be retransmitted.
You must consider what happens when a packet is lost on the network
and fails to reach its ultimate destination. When host A sends data, it starts a
countdown timer. If the timer expires without an acknowledgment being received, host A
assumes that the data segment was lost. Consequently, the sending computer
retransmits a duplicate of the failing segment.


Figure 1.4: TCP Establishes Virtual Circuits (host A sends Data Segments 1 and 2;
host B returns Acknowledgments 1 and 2 over time)


Its other functions include sequence control, error recovery and control, flow control and
identification of port numbers.
Figure 1.6 shows the format of the TCP data segment. The TCP header includes both
source and destination port fields for identifying the applications for which the
connection is established. The sequence and acknowledgment number fields underlie
the positive acknowledgment and retransmission technique. Integrity checks are
accommodated using the checksum field.

Figure 1.6: Data Segment Format of the TCP Protocol


You must note that TCP, unlike UDP, is therefore a reliable, connection-oriented
byte-stream protocol.
Reliable: TCP provides reliable delivery of data using the Positive Acknowledgment with
Re-transmission (PAR) mechanism: data is retransmitted until the sender hears from
the remote system that it arrived correctly.
The unit of data exchanged between source and destination hosts is called a segment, as
shown in Figure 7.5. Each segment carries a
checksum to verify that the data arrives at the destination undamaged. When a
data segment is received undamaged, the receiver sends a positive acknowledgment
back to the source end. When a data segment is damaged, the destination machine
simply discards it.
When the source machine does not receive any positive acknowledgement within a
specified time out period, it re-transmits the data segment.
Connection-oriented: You must note that TCP creates a logical end-to-end connection
between the source and destination hosts. Handshake control information is
exchanged between the source and destination hosts to set up a dialogue before data is
sent. TCP indicates the control function of a segment by setting flags in the Flags field
of the segment header. TCP uses a three-way handshake, meaning that three
segments are exchanged. Figure 1.6 depicts the simplest form of the three-way
handshake.
Host A initiates the connection by transmitting to host B a segment with the "Synchronize
sequence numbers" (SYN) bit set. This segment tells host B that host A
requests to create a connection, and tells host B the sequence
number host A will use as a starting number for its segments, so that data can be put in
the proper order. Host B replies to host A with a segment that has the
"Acknowledgment" (ACK) and SYN bits set. Host B's segment acknowledges the
receipt of A's segment and tells host A the sequence number host B will begin with.
Finally, host A transmits a segment that acknowledges receipt of host B's segment;
with it, host A may transfer the first actual data.

Figure 1.6: Three-way Handshake
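The three segments of the handshake can be traced with a tiny simulation; the ISNs of 1000 and 5000 are arbitrary example values:

```python
# Trace of the three-way handshake; ISNs are arbitrary examples.
def three_way_handshake(isn_a: int, isn_b: int):
    trace = []
    # 1. A -> B: SYN, carrying A's initial sequence number.
    trace.append(("A->B", "SYN", isn_a, None))
    # 2. B -> A: SYN+ACK; B picks its own ISN and acknowledges isn_a + 1.
    trace.append(("B->A", "SYN+ACK", isn_b, isn_a + 1))
    # 3. A -> B: ACK of B's ISN; connection established, data may follow.
    trace.append(("A->B", "ACK", isn_a + 1, isn_b + 1))
    return trace

for direction, kind, seq, ack in three_way_handshake(1000, 5000):
    print(direction, kind, seq, ack)
# A->B SYN 1000 None
# B->A SYN+ACK 5000 1001
# A->B ACK 1001 5001
```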


This exchange also gives host A's TCP an indication that the remote
TCP is active and ready to receive data. Once the connection is created, data can be
exchanged. As soon as the source and destination machines have completed the data
exchange, they perform a three-way handshake with segments containing the "No more
data from sender" bit (called the FIN bit) to release the connection. Thus, the end-to-end
exchange of data over the logical connection between the source and destination machines
is accomplished.
Continuous stream of bytes: TCP views the data it transmits as a continuous
stream of bytes, not as independent packets. TCP must therefore take care to
maintain the sequence in which bytes are sent and received. The sequence number
and acknowledgment number fields in the TCP segment header keep track of the bytes.
In order to track the data stream correctly, each end is required to know the other end's
initial sequence number. The source and destination ends synchronize their
byte-numbering systems by exchanging SYN segments during
the handshake. The sequence number field of a SYN segment contains the Initial
Sequence Number (ISN), which is the starting point for the byte-numbering
system. Thereafter, each byte of data is numbered sequentially from the ISN, so the
first real byte of data transmitted carries the number ISN+1.
It is important to understand that the acknowledgment segment (ACK) performs both
positive acknowledgment and flow control functions. The acknowledgment tells the
sender how much data has been received and how much more the receiver can accept.
The acknowledgment number is the sequence number of the next byte the receiver
expects to receive.
Figure 1.7 illustrates a TCP data stream that begins with an ISN of 0. The destination
machine has received and acknowledged 2000 bytes, so the current
acknowledgment number is 2001. The destination machine has enough buffer space for
another 6000 bytes. The source machine is currently transmitting a segment of 1000
bytes starting with sequence number 4001. The source machine has received no
acknowledgment for the bytes from 2001 onwards, but continues transmitting data as
long as it stays within the window. If the source machine fills the window and receives
no acknowledgment of the data previously sent, it will, after a time-out, transmit the data
again beginning from the first unacknowledged byte. In Figure 1.7, re-transmission
begins from byte 2001 if no further acknowledgments are received. This mechanism
assures the source machine that the data has been reliably received at the remote end
of the network.
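The numbers in this scenario can be checked with a little bookkeeping:

```python
# Bookkeeping for the Figure 1.7 scenario (ISN = 0).
bytes_acked = 2000                   # received and acknowledged so far
window = 6000                        # buffer space advertised by receiver
next_seq_to_send = 4001              # start of the 1000-byte segment in flight

ack_number = bytes_acked + 1         # next byte the receiver expects
window_limit = bytes_acked + window  # highest byte the sender may send

print(ack_number)                                    # 2001
print(window_limit)                                  # 8000
print(next_seq_to_send + 1000 - 1 <= window_limit)   # True: within window
# On time-out, retransmission restarts at the first unacknowledged byte:
retransmit_from = ack_number
print(retransmit_from)                               # 2001
```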
You may already be aware that TCP also ensures that data received from IP is delivered
to the correct application. The application is identified by a 16-bit port number. The
source and destination ports are included in the first word of the segment
header; thus the transport layer passes data to and from the application layer correctly.

Figure 1.7: TCP Data Stream


Some of the applications of TCP are Electronic mail (SMTP), file transfer (FTP), remote
login (Telnet), web (HTTP), etc.
See the example given below:
Example: Web Traffic (HTTP)
Web traffic uses a protocol known as HTTP. HTTP uses TCP because, when a
computer's web browser asks for a web page from a web server, the server wants to
know that the browser actually received all the packets from the server, i.e., that all the
content of the page has been obtained by the browser. If some of the packets from the
web server were dropped by the network, the web server would need to send those
packets again, or the web page would not be readable by the web browser. Web pages,
particularly ones with numerous images, can be relatively large in size, so a little extra
overhead during the page-loading "conversation" does not actually add much time to
the overall conversation but does add a lot of value.
Topics to be Discussed Relating to TCP
1. Maximum Segment Size: This refers to the maximum segment size (MSS) that
is acceptable to both ends of the connection. TCP negotiates the MSS using
the OPTION field. In the Internet environment the MSS is to be selected
optimally. An arbitrarily small segment size results in poor bandwidth
utilization, since the data-to-overhead ratio remains low. On the other hand, an
extremely large segment size necessitates large IP datagrams, which require
fragmentation. As there is a finite chance of a fragment getting lost, segment
sizes above the fragmentation threshold decrease the throughput.
Theoretically, the optimum segment size is the size that results in the largest
IP datagram which does not require fragmentation anywhere en route from
source to destination. However, it is very difficult to find such an optimum
segment size. In System V a simple technique is used to set the MSS: if H1
and H2 are on the same network, use MSS = 1024; if on different networks,
then MSS = 5000.
2. Flow Control: TCP uses a sliding window mechanism at the octet level. The
window size can vary over time. This is achieved by utilizing the concept
of "window advertisement", based on:
1. Buffer availability at the receiver
2. Network conditions (traffic load etc.)
In the former case, the receiver varies its window size depending upon the space
available in its buffers. This window is referred to as the RECEIVE WINDOW
(Recv_Win). When the receiver's buffers begin to fill, it advertises a small Recv_Win so
that the sender does not send more data than the receiver can accept. If all buffers are
full, the receiver sends a "zero" window advertisement, which stops all transmission.
When buffers become available, the receiver advertises a non-zero window to resume
transmission. The sender also periodically probes the "zero" window, to avoid
deadlock in case the non-zero window advertisement from the receiver is lost. The
variable-size Recv_Win provides efficient end-to-end flow control.
The second case arises when some intermediate node (e.g., a router) makes
the source reduce its transmission rate. Here another window, referred to as the
CONGESTION WINDOW (C_Win), is utilized. Adjusting C_Win helps to
check and avoid congestion.
3. Congestion Control: Congestion is a condition of severe delay caused by an
overload of datagrams at one or more intermediate nodes on the Internet. If
unchecked, it may feed on itself until the nodes start dropping arriving
datagrams. This can further aggravate congestion in the network, resulting in
congestion collapse. TCP uses two techniques to check congestion.
Slow Start: At the time of the start of a connection, no information about network
conditions is available. A Recv_Win size can be agreed upon; however, the
appropriate C_Win size is not known, and an arbitrary C_Win size cannot be used
because it may lead to congestion. TCP acts as if the window size were equal to the
minimum of (Recv_Win, C_Win). So the following algorithm is used:
1. Recv_Win=X
2. SET C_Win=1
3. for every ACK received C_Win++
Multiplicative decrease: This scheme is used when congestion is encountered
(i.e., when a segment is lost). It works as follows: reduce the congestion
window by half when a segment is lost, and exponentially back off the timer (double
it) for the segments within the reduced window. If the next segment also gets
lost, continue the above process. Under successive losses this scheme reduces the
traffic entering the connection exponentially, allowing the intermediate nodes to
clear their queues. Once congestion ends, SLOW START is used to scale up
the transmission.
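A minimal simulation of the two rules above, with C_Win growing by one per ACK (so it doubles every loss-free round trip) and being halved, with the timer doubled, on loss:

```python
def slow_start_round(c_win: int, recv_win: int) -> int:
    """One loss-free round trip of slow start: each of the C_Win
    segments sent is ACKed, and every ACK does C_Win++, so the
    window doubles (capped by the receiver's advertised window)."""
    acks = min(c_win, recv_win)
    return c_win + acks

def on_loss(c_win: int, rto: float):
    """Multiplicative decrease: halve the window and exponentially
    back off (double) the retransmission timer."""
    return max(c_win // 2, 1), rto * 2

c_win, rto = 1, 1.0          # SET C_Win = 1 at connection start
for _ in range(4):           # four loss-free round trips
    c_win = slow_start_round(c_win, recv_win=64)
print(c_win)                 # 16: doubled on each round trip
c_win, rto = on_loss(c_win, rto)
print(c_win, rto)            # 8 2.0
```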
4. Congestion Avoidance: This procedure is used at the onset of congestion to
minimize its effect on the network. When transmission is to be scaled up, it
should be done in such a way that it does not lead to congestion again. The
following algorithm is used:

1. At loss of a segment SET C_Win=1
2. SET SLOW START THRESHOLD (SST) = Send_Win / 2
3. Send segment
4. If ACK Received, C_Win++ till C_Win <= SST
5. else for each ACK C_Win += 1 / C_Win
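The five steps translate almost line for line into a per-ACK window update (C_Win is treated as a float here so that the fractional +1/C_Win increments accumulate):

```python
def on_loss(send_win: float):
    """Steps 1-2: at the loss of a segment, C_Win goes back to 1 and
    the slow start threshold is set to half the send window."""
    return 1.0, send_win / 2

def on_ack(c_win: float, sst: float) -> float:
    """Steps 4-5: grow exponentially (C_Win++) while C_Win <= SST,
    then linearly (+1/C_Win per ACK, about +1 per round trip)."""
    if c_win <= sst:
        return c_win + 1
    return c_win + 1.0 / c_win

c_win, sst = on_loss(send_win=16.0)   # loss: C_Win = 1, SST = 8
for _ in range(8):                    # eight ACKs arrive
    c_win = on_ack(c_win, sst)
print(c_win)                          # 9.0: fast up to SST, then slow
```

The design point is that growth past the threshold is additive, so the connection approaches the congestion point gently instead of overshooting it again.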
5. Time-out and Retransmission: The following two schemes are used:
1. Fast Retransmit
2. Fast Recovery
When a source sends a segment, TCP sets a timer. If the timer value is set too
low, it will result in many unnecessary retransmissions. If set too high, it results
in wastage of bandwidth and hence lower throughput. In the Fast Retransmit
scheme the timer value is set fairly high relative to the RTT, and the sender
can detect a segment loss before the timer expires. The scheme presumes
that the sender will receive repeated (duplicate) ACKs for a lost packet.
6. Round Trip Time (RTT): In the Internet environment, segments may travel
across different intermediate networks and through multiple routers. The
networks and routers may have different delays, which may vary over time;
the RTT is therefore also variable, which makes it difficult to set timers. TCP
allows for varying timers by using an adaptive retransmission algorithm. It
works as follows:
1. Note the time (t1) when a segment is sent and the time (t2) when its
ACK is received.
2. Compute RTT(sample) = (t2 - t1)
3. Again Compute RTT (new) for next segment.
4. Compute Average RTT by weighted average of old and new values of
RTT
5. RTT(est) = a * RTT(old) + (1 - a) * RTT(new), where 0 < a < 1
A high value of 'a' makes the estimated RTT insensitive to changes that
last for a short time and RTT relies on the history of the network. A low
value makes it sensitive to current state of the network. A typical value
of 'a' is 0.75
6. Compute Time Out = b * RTT(est), where b > 1
A low value of 'b' will ensure quick detection of a packet loss; any small
delay will, however, cause unnecessary retransmission. A typical value
of 'b' is 2, consistent with the requirement that b > 1.
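Steps 5 and 6 of the adaptive retransmission algorithm translate directly into code; the sketch below uses the typical values a = 0.75 and b = 2:

```python
def estimate_rtt(rtt_old, rtt_new, a=0.75):
    """Step 5: weighted average of the old estimate and the new sample."""
    return a * rtt_old + (1 - a) * rtt_new

def retransmission_timeout(rtt_est, b=2.0):
    """Step 6: Time Out = b * RTT(est), with b > 1."""
    return b * rtt_est

rtt = 100.0                     # ms, current estimate RTT(old)
rtt = estimate_rtt(rtt, 160.0)  # one slow sample only nudges the estimate
print(rtt)                          # 115.0 ms
print(retransmission_timeout(rtt))  # 230.0 ms
```

Because a is high (0.75), the estimate leans on history: a single 160 ms sample moves a 100 ms estimate only to 115 ms, as the text's discussion of 'a' predicts.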
8.4 Elements of transport protocol
In this section, you will learn the elements of transport protocol. To establish a reliable
service between two machines on a network, transport protocols are implemented,
which somewhat resemble the data link protocols implemented at layer 2. You must
also note that the major difference lies in the fact that the data link layer uses a physical
channel between two routers, while the transport layer operates over the entire subnet.
The following are the issues involved in implementing transport protocols:
8.4.1 Reliable Delivery Service
The transport layer also determines the type of service provided to the users from the
session layer. An error-free point-to-point communication to deliver messages in the
order in which they were transmitted is one of the key functions of the transport layer.
However, the service may be reliable or may be reliable within certain limits or may be

unreliable entirely. The order of the received messages may or may not be the same in
which they were transmitted. When the connection is created between two processes,
the transport layer determines the type of service to be provided to the session layer at
that time.
8.4.2 Connection Establishment/Release
It is essential to understand that the transport layer creates and releases connections
across the network. This includes a naming mechanism so that a process on one
machine can indicate with whom it wishes to communicate. The transport layer can
also multiplex several message streams onto one communication channel.
8.4.3 Flow Control
The underlying rule of flow control is to maintain a synergy between a fast process and
a slow process. The transport layer enables a fast process to keep pace with a slow one.
Acknowledgements are sent back to manage end-to-end flow control. The Go-back-N
algorithm is used to request retransmission of all packets starting from a given packet
number N, while Selective Repeat is used to request only specific packets to be
retransmitted.
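The difference between the two retransmission strategies can be sketched with a toy model (plain integer sequence numbers, no actual sliding window):

```python
def go_back_n_retransmit(outstanding, first_unacked):
    """Go-back-N: resend every outstanding segment from the first
    unacknowledged one onwards, even those that arrived correctly."""
    return [seq for seq in outstanding if seq >= first_unacked]

def selective_repeat_retransmit(outstanding, acked):
    """Selective Repeat: resend only the segments that were not ACKed."""
    return [seq for seq in outstanding if seq not in acked]

outstanding = [1, 2, 3, 4, 5]
print(go_back_n_retransmit(outstanding, 3))                 # [3, 4, 5]
print(selective_repeat_retransmit(outstanding, {1, 2, 4}))  # [3, 5]
```

Go-back-N wastes bandwidth by resending segments 4 and 5 that may have arrived safely, but needs no receiver buffering; Selective Repeat resends less at the cost of buffering out-of-order segments.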
8.4.4 Error Control
Error detection and error recovery are integral parts of a reliable service, and therefore
it is necessary to perform error control on an end-to-end basis. To control errors from
lost or duplicate segments, the transport layer assigns unique sequence numbers to the
different segments of a message, creating virtual circuits and allowing only one virtual
circuit per session. A time-out mechanism is also used to remove from the network
segments that have been misrouted and have remained on the network beyond a
specified time.
End-to-end checksums are also used to handle any corruption in data.
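The checksum TCP actually uses for this end-to-end check is the 16-bit ones' complement Internet checksum. A minimal sketch, treating the data as big-endian 16-bit words:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones' complement sum, as used in TCP, UDP and IP headers."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

checksum = internet_checksum(b"\x00\x01\x00\x02")
# Appending the checksum makes the sum over the whole segment verify to 0:
assert internet_checksum(b"\x00\x01\x00\x02" + checksum.to_bytes(2, "big")) == 0
```

The receiver simply recomputes the sum over data plus checksum; a zero result means no detected corruption.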
8.4.5 Multiplexing/Demultiplexing
You must be aware that the transport layer establishes a separate network connection
for each transport connection required by the session layer. To improve throughput, the
transport layer establishes multiple network connections. When the issue of throughput
is not important, it multiplexes several transport connections onto the same network
connection, thus reducing the cost for establishing and maintaining the network
connections. When several connections are multiplexed, they call for demultiplexing at
the receiving end. In case of transport layer, the communication takes place only
between two processes and not between two machines. Hence, the communication at
transport layer is also known as peer-to-peer or process-to-process communication.
8.4.6 Addressing
Transport Layer deals with addressing or labelling a frame. It also differentiates
between a connection and a transaction. Connection identifiers are ports or sockets that
label each frame so the receiving device knows which process it has been sent from.
This helps in keeping track of multiple-message conversations. Ports or sockets
address multiple conversations in the same location.
Example: The first line of a postal address is analogous to a port and distinguishes
among several occupants of the same house.
Computer applications listen for information on their own ports and therefore more than
one network-based application may be used at the same time. The transaction
identifiers deal with the request or response frames. They are one-time events.
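Delivery by port number can be sketched as a simple lookup table; the port numbers and process names below are illustrative only:

```python
# Processes 'listen' by registering the port they own (ports assumed here).
listeners = {
    80: "web_server",
    25: "mail_server",
}

def demultiplex(dest_port, payload):
    """Hand an arriving segment to the process listening on its port."""
    process = listeners.get(dest_port)
    if process is None:
        return None  # no listener; a real stack would signal an error
    return (process, payload)

print(demultiplex(80, b"GET /"))  # delivered to 'web_server'
print(demultiplex(9999, b"?"))    # no one listening: None
```

Because delivery is keyed on the port and not the machine, several network-based applications can run on one host at the same time, as the paragraph above notes.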
8.5 Performance issues
With thousands of computers in a network, scaling up imposes complex interactions
with unforeseen consequences that lead to poor performance, and no one knows why.
It is difficult to propound any scientific method to
measure network performance. Performance issues arise not only at the transport layer
but also at the network layer, in routing and congestion control. Based on experience
and examples, some rules of thumb have been proposed. They are:
Performance problems in computer networks: Congestion is one example of such
problems: when more traffic suddenly arrives at a router than the router can handle, the
overload leads to congestion and forces performance to deteriorate. Performance also
degrades when there is a resource imbalance; for example, when a high-speed line is
connected to a low-end computer, performance will certainly degrade. Other factors
responsible for degraded performance are broadcast storms of error messages due to
bad parameters in a TPDU; collapse of a RARP server when several machines try to
learn their true identity from it at power restoration, with all machines booting together;
incorrectly set time-outs; the bandwidth-delay product; etc.
Measuring network performance: It includes measuring of the relevant network
parameters and performance, understanding the bottleneck and reasons for it and
changing of some parameters. The most basic kind of measurement is to start a timer
at the beginning of some activity to see how long it takes, e.g. round trip time. Other
measurements are made with counters to record how often some event has happened,
e.g. number of lost TPDUs. Finally, one is often interested in knowing the amount of
something, e.g. the number of bytes processed in a given time interval. To carry out the
measuring of network performance, it should be ensured that the sample size is large
enough, that the samples are representative (behaviour at lunch time, for instance,
may not represent the rest of the day), and that nothing unexpected is going on during
the tests.
System design for better performance: The network performance could be improved
considerably with the help of measuring and tuning. However, they are not substitute for
good design. System design is dependent not just on network design that includes
routers, interface boards, etc, but also on the software and operating system. Improved
CPU speed is one of the factors that enable getting the bits from the user’s buffer out on
the transmission media fast enough and having the receiving CPU process them as fast
as they come in. A reduced packet count (to lower software overhead and improve
processor performance), minimized context switches, and minimized copying of
incoming packets are also important design factors.
Fast TPDU processing: It separates out the normal case (data transfer in the
ESTABLISHED state, no PSH or URG, enough window space) and handles it
separately. Timer management is also optimized for the case of timers rarely expiring.
Protocols for gigabit networks: Some of the problems associated with it are
mentioned. Communication speeds have been improving much faster than computing
speeds. At a rate of 1 Gbps, a 32-bit sequence number space wraps around in only
about 32 sec (the time to send 2^32 bytes), while packets can live in the Internet for
120 sec. The go-back-n protocol works poorly on lines with a large bandwidth-delay
product, so gigabit lines are different from megabit lines. In multimedia applications,
jitter in packet arrival time is as important as the mean delay itself. Old protocols,
however, were often designed to minimize the number of bits on the transmission
media, frequently by using small fields and packing them together into bytes and words.
With gigabit networks, the protocol processing is the problem instead of the bandwidth.
Hence, protocols need to be designed to minimize it.

8.6 Summary
The transport layer responds to service requests from the session layer and issue
service requests to the network layer. Transport layer bridges the gap of the services
provided by the network and therefore enhances the quality of service provided to the
users. Transport Service Primitives are used to access transport services by the

application layer or the users. UDP is a connectionless, unreliable datagram protocol in
which the sending terminal does not check whether data has been received by the
receiving terminal. Transmission Control Protocol ensures highly reliable data
transmission for upper layers using the IP protocol. The transport layer determines the
type of service provided to the users from the session layer. The transport layer creates
and releases the connection across the network. The fundamental rule of flow control is
to maintain a synergy between a fast process and a slow process.

8.7 Check Your Progress


Multiple Choice Questions
1. Transport layer aggregates data from different applications into a single stream
before passing it to
a) network layer
b) data link layer
c) application layer
d) physical layer

2. Which one of the following is a transport layer protocol used in internet?


a) TCP
b) UDP
c) both (a) and (b)
d) none of the mentioned

3. User datagram protocol is called connectionless because


a) all UDP packets are treated independently by transport layer
b) it sends data as a stream of related packets
c) both (a) and (b)
d) none of the mentioned

4. Transmission control protocol is


a) connection oriented protocol
b) uses a three way handshake to establish a connection
c) receives data from application as a single stream
d) all of the mentioned

5. An endpoint of an inter-process communication flow across a computer network is
called
a) socket
b) pipe
c) port
d) none of the mentioned

6. Socket-style API for windows is called


a) wsock
b) winsock
c) wins
d) none of the mentioned

7. Which one of the following is a version of UDP with congestion control?


a) datagram congestion control protocol
b) stream control transmission protocol
c) structured stream transport

d) none of the mentioned

8. A _____ is a TCP name for a transport service access point.
a) port
b) pipe
c) node
d) none of the mentioned

9. Transport layer protocols deals with


a) application to application communication
b) process to process communication
c) node to node communication
d) none of the mentioned

10. Which one of the following is a transport layer protocol?


a) stream control transmission protocol
b) internet control message protocol
c) neighbor discovery protocol
d) dynamic host configuration protocol

8.8 Questions and Exercises


1. Discuss the function of Transport Protocol Data Unit (TPDU).
2. Define the steps implemented by the server machine to establish the
connection.
3. What are the key protocols of the transport layer?
4. Why is UDP used, given that it provides an unreliable connectionless service to
the transport layer?
5. What is the purpose of flow control?
6. Discuss the services provided by the transport layer.
7. What are the different qualities of service parameters at the transport layer?
8. Explain the concept of User Datagram Protocol.

8.9 Key Terms


 Resilience: It is the capability of the transport layer to terminate a connection
itself spontaneously in the case of congestion.
 Throughput: It defines the number of bytes of user data transferred per second
in a defined time interval.
 Transit Delay: It is the time gap between a transmitted data from source
machine to the reception of the same data by the destination machine.
 Transmission Control Protocol (TCP): TCP enables reliable data delivery
service with end-to-end error detection and correction.

 User Datagram Protocol (UDP): UDP is a connectionless, unreliable datagram
protocol in which the sending terminal does not check whether data has been
received by the receiving terminal.

Check Your Progress: Answers

1. a) network layer

2. c) both (a) and (b)
3. a) all UDP packets are treated independently by transport layer
4. d) all of the mentioned
5. a) socket
6. b) winsock
7. a) datagram congestion control protocol
8. a) port
9. b) process to process communication
10. a) stream control transmission protocol

8.10 Further Readings


 Anurag Kumar, D. Manjunath, Joy Kuri, Communication Networking: An Analytical
Approach, Academic Press,Copyright, 2004
 Sanjay Sharma, Communication system; analog and digital, S.K. Kataria & Sons,
2012
 Prakash C. Gupta, Data Communications And Computer Networks, 2nd edition,
PHI Learning Pvt. Ltd., Copyright, 2014.
 Sanjay Sharma, Digital communication, S.K. Kataria & Sons, 2010
 V.S. Bagad, I.A. Dhotre, Computer Networks – II, Technical Publications,
Copyright, 2009
