
NETWORK LAYER

Network layer Design issues and IP Addressing

The network layer, or layer 3 of the OSI (Open Systems Interconnection) model, is concerned
with the delivery of data packets from the source to the destination across multiple hops or
links. It is the lowest layer that deals with end-to-end transmission. The designers of this layer
need to address certain issues, which encompass the services provided to the upper layers as
well as the internal design of the layer.

The design issues can be elaborated under four heads:

Store-and-Forward Packet Switching

Services to Transport Layer

Providing Connection-Oriented Service

Providing Connectionless Service

Store-and-Forward Packet Switching

The network layer operates in an environment that uses store-and-forward packet switching.
A node that has a packet to send delivers it to the nearest router. The packet is stored in
the router until it has fully arrived and its checksum has been verified for error detection. Once
this is done, the packet is forwarded to the next router. Since each router must store the entire
packet before it can forward it to the next hop, the mechanism is called store-and-forward
packet switching.

Services to Transport Layer

The network layer provides services to its immediate upper layer, namely the transport layer,
through the network-transport layer interface. The two types of services provided are:

Connection-Oriented Service: In this service, a path is set up between the source and the
destination, and all the data packets belonging to a message are routed along this path.

Connectionless Service: In this service, each packet of the message is treated as an
independent entity and is individually routed from the source to the destination.

Providing Connectionless Service

In connectionless service, since each packet is transmitted independently, each packet carries
its own routing information and is termed a datagram. A network using datagrams for
transmission is called a datagram network or datagram subnet. No prior setup of routes is
needed before transmitting a message; each datagram belonging to the message follows its
own route from the source to the destination. An example of a connectionless service is the
Internet Protocol (IP).

IP address definition

An IP address is a unique address that identifies a device on the internet or a local network. IP
stands for "Internet Protocol," which is the set of rules governing the format of data sent via
the internet or local network.

In essence, IP addresses are the identifiers that allow information to be sent between devices
on a network: they contain location information and make devices accessible for
communication.

Types of IP addresses

There are different categories of IP addresses, and within each category, different types.

Consumer IP addresses

Every individual or business with an internet service plan will have two types of IP addresses:
their private IP addresses and their public IP address. The terms public and private relate to the
network location — that is, a private IP address is used inside a network, while a public one is
used outside a network.

Private IP addresses
Every device that connects to your internet network has a private IP address. This includes
computers, smartphones, and tablets but also any Bluetooth-enabled devices like speakers,
printers, or smart TVs. With the growing internet of things, the number of private IP addresses
you have at home is probably growing. Your router needs a way to identify these items
separately, and many items need a way to recognize each other. Therefore, your router
generates private IP addresses that are unique identifiers for each device that differentiate
them on the network.

Public IP addresses

A public IP address is the primary address associated with your whole network. While each
connected device has its own IP address, they are also included within the main IP address for
your network. As described above, your public IP address is provided to your router by your ISP.
Typically, ISPs have a large pool of IP addresses that they distribute to their customers. Your
public IP address is the address that all the devices outside your internet network will use to
recognize your network.

Public IP addresses come in two forms – dynamic and static.

Dynamic IP addresses

Dynamic IP addresses change automatically and regularly. ISPs buy a large pool of IP addresses
and assign them automatically to their customers. Periodically, they re-assign them and put the
older IP addresses back into the pool to be used for other customers. The rationale for this
approach is to generate cost savings for the ISP. Automating the regular movement of IP
addresses means they don’t have to carry out specific actions to re-establish a customer's IP
address if they move home, for example. There are security benefits, too, because a changing IP
address makes it harder for criminals to hack into your network interface.

Static IP addresses
In contrast to dynamic IP addresses, static addresses remain consistent. Once the network
assigns an IP address, it remains the same. Most individuals and businesses do not need a static
IP address, but for businesses that plan to host their own server, it is crucial to have one. This is
because a static IP address ensures that websites and email addresses tied to it will have a
consistent IP address — vital if you want other devices to be able to find them consistently on
the web.

This leads to the next point – which is the two types of website IP addresses.

There are two types of website IP addresses

For website owners who don’t host their own server, and instead rely on a web hosting package
– which is the case for most websites – there are two types of website IP addresses. These are
shared and dedicated.

Shared IP addresses

Websites that rely on shared hosting plans from web hosting providers will typically be one of
many websites hosted on the same server. This tends to be the case for individual websites or
SME websites, where traffic volumes are manageable, and the sites themselves are limited in
terms of the number of pages, etc. Websites hosted in this way will have shared IP addresses.

Dedicated IP addresses

Some web hosting plans have the option to purchase a dedicated IP address (or addresses). This
can make obtaining an SSL certificate easier and allows you to run your own File Transfer
Protocol (FTP) server. This makes it easier to share and transfer files with multiple people within
an organization and allows anonymous FTP sharing options. A dedicated IP address also allows
you to access your website using the IP address alone rather than the domain name — useful if
you want to build and test it before registering your domain.

The main functions performed by the network layer are:


Routing: When a packet reaches a router's input link, the router moves the packet to the
appropriate output link. For example, a packet from S1 to R1 must be forwarded to the next
router on the path to S2.

Logical Addressing: The data link layer implements physical addressing, and the network layer
implements logical addressing. Logical addressing is also used to distinguish between the
source and destination systems. The network layer adds a header to the packet which includes
the logical addresses of both the sender and the receiver.

Internetworking: This is the main role of the network layer: it provides a logical
connection between different types of networks.

Fragmentation: Fragmentation is the process of breaking packets into smaller individual data
units so that they can travel through different networks.

Services Provided by the Network Layer

Guaranteed delivery: This layer provides the service which guarantees that the packet will
arrive at its destination.

Guaranteed delivery with bounded delay: This service guarantees that the packet will be
delivered within a specified host-to-host delay bound.

In-Order packets: This service ensures that packets arrive at the destination in the order in
which they were sent.

Guaranteed max jitter: This service ensures that the amount of time taken between two
successive transmissions at the sender is equal to the time between their receipt at the
destination.

Security services: The network layer provides security by using a session key between the
source and destination host. The network layer in the source host encrypts the payloads of
datagrams being sent to the destination host. The network layer in the destination host would
then decrypt the payload. In such a way, the network layer maintains the data integrity and
source authentication services.
Classful addressing

Introduced in 1981, classful addressing divided IPv4 addresses into five classes (A to E).

Classes A-C: unicast addresses

Class D: multicast addresses

Class E: reserved for future use
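The class of an address follows from the leading bits of its first octet. As an illustrative sketch (not part of any standard library), the class can be computed like this in Python:

```python
import ipaddress

def ipv4_class(addr: str) -> str:
    """Return the classful category (A-E) of a dotted-quad IPv4 address."""
    first_octet = int(ipaddress.IPv4Address(addr)) >> 24
    if first_octet < 128:    # 0xxxxxxx
        return "A"
    elif first_octet < 192:  # 10xxxxxx
        return "B"
    elif first_octet < 224:  # 110xxxxx
        return "C"
    elif first_octet < 240:  # 1110xxxx (multicast)
        return "D"
    else:                    # 1111xxxx (reserved)
        return "E"

print(ipv4_class("10.0.0.1"))    # A
print(ipv4_class("172.16.5.4"))  # B
print(ipv4_class("224.0.0.5"))   # D
```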

Classless Inter-Domain Routing (CIDR)

CIDR, or Classless Inter-Domain Routing, was introduced in 1993 to replace classful addressing.
It allows the use of VLSM, or Variable Length Subnet Masks.

CIDR notation:

In CIDR, subnet masks are denoted by /X, where X is the number of one-bits in the mask. For
example, a subnet mask of 255.255.255.0 is denoted by /24. To work out a subnet mask in CIDR,
first convert each octet into its binary value. For 255.255.255.0:

11111111.11111111.11111111.00000000

There are 24 one-bits, so the mask is written /24.
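The same mask-to-prefix conversion can be checked with Python's standard ipaddress module by counting the one-bits in the mask:

```python
import ipaddress

def mask_to_prefix(mask: str) -> int:
    """Count the one-bits in a dotted-quad subnet mask (its /X prefix length)."""
    return bin(int(ipaddress.IPv4Address(mask))).count("1")

print(mask_to_prefix("255.255.255.0"))  # 24
print(mask_to_prefix("255.255.240.0"))  # 20
```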

Classless addressing follows three rules when assigning a block.
Rule 1 – All the IP addresses in the CIDR block must be contiguous.

Rule 2 – The block size must be a power of 2. The number of IP addresses in the block is equal
to the block size.

Rule 3 – The first IP address of the block must be divisible by the block size.

For example, assume that the classless address is 192.168.1.35/27

The number of bits for the network portion is 27, and the number of bits for the host portion is
5 (32 - 27).

The address in binary is:

11000000.10101000.00000001.00100011
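Python's standard ipaddress module can confirm the block that the /27 example above falls in:

```python
import ipaddress

# The worked example: 192.168.1.35 inside a /27 block (block size 2**5 = 32)
iface = ipaddress.ip_interface("192.168.1.35/27")
net = iface.network

print(net)                    # 192.168.1.32/27
print(net.network_address)    # 192.168.1.32 (first address, divisible by 32)
print(net.broadcast_address)  # 192.168.1.63 (last address in the block)
print(net.num_addresses)      # 32
```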

Difference Between Classful and Classless Addressing

Classful addressing is an IP address allocation method that allocates IP addresses according to
five major classes. Classless addressing is an IP address allocation method that is designed to
replace classful addressing to minimize the rapid exhaustion of IP addresses.

Usefulness

Another difference between classful and classless addressing is their usefulness. Classless
addressing is more practical and useful than classful addressing.

Network ID and Host ID

In classful addressing, the network ID and host ID boundaries change depending on the class.
In classless addressing, there is no fixed boundary between network ID and host ID. Hence, this
is another difference between classful and classless addressing.

What is Subnetting?

Subnetting is a technique used to divide a single physical network into smaller logical
networks. These smaller networks are called subnets.
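As a small illustration using Python's standard ipaddress module, borrowing two extra mask bits splits a /24 into four /26 subnets (the 192.168.1.0/24 network here is just an example):

```python
import ipaddress

# Splitting one /24 into four /26 sub-networks (two extra mask bits)
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(prefixlen_diff=2))
for s in subnets:
    print(s)
# 192.168.1.0/26
# 192.168.1.64/26
# 192.168.1.128/26
# 192.168.1.192/26
```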
What is Supernetting?

Supernetting is the process of combining several subnets into a single network; it is the inverse
of subnetting. In supernetting, mask bits are moved towards the left of the default mask, so
network bits are converted into host bits. Supernetting is also called route summarization or
aggregation.
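A quick supernetting sketch using Python's standard ipaddress module; the four contiguous /24 networks are made up for illustration:

```python
import ipaddress

# Four contiguous /24 networks aggregated into one /22 supernet
subnets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('192.168.0.0/22')]
```

Note that the aggregation only succeeds because the blocks are contiguous and align on a power-of-2 boundary, matching the block rules given earlier.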
IP

IP stands for Internet Protocol. An IP address is assigned to each device connected to a
network, and each device uses its IP address for communication.

There are two types of IP addresses:

IPv4

IPv6

The main differences between IPv4 and IPv6 are:

Address length: IPv4 is a 32-bit address. IPv6 is a 128-bit address.

Fields: IPv4 is a numeric address consisting of 4 fields separated by dots (.). IPv6 is an
alphanumeric address consisting of 8 fields separated by colons (:).

Classes: IPv4 has 5 different classes of IP address (Class A, Class B, Class C, Class D, and
Class E). IPv6 does not have classes of IP addresses.

Number of IP addresses: IPv4 has a limited number of IP addresses. IPv6 has a very large
number of IP addresses.

VLSM: IPv4 supports VLSM (Variable Length Subnet Masks), meaning IPv4 addresses can be
divided into subnets of different sizes. IPv6 does not use VLSM.

Address configuration: IPv4 supports manual and DHCP configuration. IPv6 supports manual,
DHCP, auto-configuration, and renumbering.

Address space: IPv4 provides about 4 billion unique addresses. IPv6 provides about 340
undecillion unique addresses.

End-to-end connection integrity: In IPv4, end-to-end connection integrity is unachievable. In
IPv6, end-to-end connection integrity is achievable.

Security features: In IPv4, security depends on the application; the protocol was not designed
with security in mind. In IPv6, IPsec is built in for security purposes.

Address representation: In IPv4, the IP address is represented in decimal. In IPv6, the IP
address is represented in hexadecimal.

Fragmentation: In IPv4, fragmentation is done by the senders and the forwarding routers. In
IPv6, fragmentation is done by the senders only.

Packet flow identification: IPv4 does not provide any mechanism for packet flow
identification. IPv6 uses the flow label field in the header for packet flow identification.

Checksum field: The checksum field is available in IPv4. The checksum field is not available in
IPv6.

Transmission scheme: IPv4 uses broadcasting. IPv6 uses multicasting, which provides more
efficient network operations.

Encryption and authentication: IPv4 does not provide encryption and authentication. IPv6
provides encryption and authentication.

Number of octets: IPv4 consists of 4 octets. IPv6 consists of 8 fields, each containing 2 octets,
for a total of 16 octets.
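A few of these contrasts (address length and hexadecimal representation) can be observed directly with Python's standard ipaddress module; the addresses are arbitrary examples (2001:db8:: is from the IPv6 documentation prefix):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.35")
v6 = ipaddress.ip_address("2001:db8::8a2e:370:7334")

print(v4.version, v4.max_prefixlen)  # 4 32   -> 32-bit address
print(v6.version, v6.max_prefixlen)  # 6 128  -> 128-bit address
# Full 8-field hexadecimal form of the IPv6 address:
print(v6.exploded)  # 2001:0db8:0000:0000:0000:8a2e:0370:7334
```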

Routing algorithm

Routing is the process of forwarding packets from source to destination; the best route along
which to send the packets is determined by the routing algorithm.

The Routing algorithm is divided into two categories:

Adaptive Routing algorithm

Non-adaptive Routing algorithm

Adaptive Routing algorithm

An adaptive routing algorithm is also known as a dynamic routing algorithm.

This algorithm makes the routing decisions based on the topology and network traffic.

The main parameters related to this algorithm are hop count, distance, and estimated transit
time.

An adaptive routing algorithm can be classified into three parts:

Centralized algorithm: It is also known as global routing algorithm as it computes the least-cost
path between source and destination by using complete and global knowledge about the
network. This algorithm takes the connectivity between the nodes and link cost as input, and
this information is obtained before actually performing any calculation. Link state algorithm is
referred to as a centralized algorithm since it is aware of the cost of each link in the network.

Isolation algorithm: It is an algorithm that obtains the routing information by using local
information rather than gathering information from other nodes.

Distributed algorithm: It is also known as a decentralized algorithm, as it computes the
least-cost path between source and destination in an iterative and distributed manner. In the
decentralized algorithm, no node has complete knowledge of the cost of all the network links.
In the beginning, a node knows only the costs of its own directly attached links; through an
iterative process of calculation, it computes the least-cost path to the destination. The
distance vector algorithm is a decentralized algorithm: a node never knows the complete path
from source to destination, only the direction (next hop) through which the packet is to be
forwarded along the least-cost path.

Non-Adaptive Routing algorithm

A non-adaptive routing algorithm is also known as a static routing algorithm.

When the network is booted, the routing information is stored in the routers.

Non-adaptive routing algorithms do not make routing decisions based on the network
topology or network traffic.

The Non-Adaptive Routing algorithm is of two types:

Flooding: In flooding, every incoming packet is sent out on all the outgoing links except the
one on which it arrived. The disadvantage of flooding is that a node may receive several copies
of a particular packet.

Random walks: In a random walk, a packet is sent by the node to one of its neighbors chosen
at random. An advantage of random walks is that they use alternative routes very efficiently.

Distance Vector Routing Algorithm (Dynamic routing algorithm)

It works in the following steps-

Step-01:

Each router prepares its routing table. From its local knowledge, each router knows about:

All the routers present in the network

Distance to its neighboring routers

Step-02:
Each router exchanges its distance vector with its neighboring routers.

Each router prepares a new routing table using the distance vectors it has obtained from its
neighbors.

This step is repeated (n-2) times if there are n routers in the network.

After this, the routing tables converge, i.e. become stable.
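The steps above can be sketched as a small simulation; the four-router topology and link costs are made up for illustration, and the vectors are updated in place rather than exchanged in strict lockstep:

```python
# Minimal distance-vector sketch: repeated Bellman-Ford relaxation using
# neighbours' distance vectors. Topology and costs are hypothetical.
INF = float("inf")

links = {                       # {router: {neighbour: link cost}}
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 7},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 7, "C": 3},
}
nodes = list(links)

# Step 1: each router knows only itself and its directly attached neighbours.
dist = {r: {n: (0 if n == r else links[r].get(n, INF)) for n in nodes}
        for r in nodes}

# Step 2: merge neighbours' distance vectors, repeated (n-2) times.
for _ in range(len(nodes) - 2):
    for r in nodes:
        for nbr, cost in links[r].items():
            for dest in nodes:
                dist[r][dest] = min(dist[r][dest], cost + dist[nbr][dest])

print(dist["A"]["D"])   # 6  (A -> B -> C -> D = 1 + 2 + 3)
```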

Link State Routing

Link state routing is a technique in which each router shares the knowledge of its neighborhood
with every other router in the internetwork.

The three keys to understanding the Link State Routing algorithm are:


Knowledge about the neighborhood: Instead of sending its routing table, a router sends
information about its neighborhood only. A router broadcasts the identities and costs of its
directly attached links to the other routers.

Flooding: Each router sends its information to every other router on the internetwork. This
process is known as flooding: every router that receives the packet sends copies to all its
neighbors except the one from which the packet arrived. Finally, each and every router
receives a copy of the same information.

Information sharing: A router sends the information to every other router only when a
change occurs in the information.

Link State Routing has two phases:

Reliable Flooding

Initial state: Each node knows the cost of its neighbors.

Final state: Each node knows the entire graph.

Route Calculation

Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.

The link state routing algorithm uses Dijkstra's algorithm, which finds the shortest path from
one node to every other node in the network.

Dijkstra's algorithm is iterative, and it has the property that after the kth iteration of the
algorithm, the least-cost paths are known for k destination nodes.
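As a sketch of the route calculation step, here is a standard priority-queue Dijkstra over a small made-up topology (the same graph every router would hold after flooding):

```python
import heapq

def dijkstra(graph, source):
    """Least-cost distances from source to every reachable node."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Hypothetical topology: {node: {neighbour: link cost}}
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 7},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 7, "C": 3},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```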
Link State Routing Protocols

A routing protocol is a routing algorithm that provides the best path from the source to the
destination.

In the link state routing protocol, a router transmits its IP address, MAC address, and signature
to its neighboring routers.

Open Shortest Path First (OSPF) is a routing protocol that uses the link state routing algorithm
to exchange information (about neighboring routers, cost of the route, etc.) among the
internetwork routers.

Optimized Link State Routing Protocol is an optimized link state routing protocol used in mobile
ad hoc networks and wireless ad hoc networks.

Disadvantage:

Heavy traffic is created in link state routing due to flooding. Flooding can cause infinite
looping; this problem can be solved by using a Time-to-live (TTL) field.
Hierarchical Routing

In hierarchical routing, the routers are divided into regions. Each router has complete details
about how to route packets to destinations within its own region. But it does not have any idea
about the internal structure of other regions.

As we know, in both LS and DV algorithms, every router needs to store some information about
other routers. As the network grows, the number of routers in the network increases.
Therefore, the size of the routing tables increases, and routers cannot handle network
traffic as efficiently. To overcome this problem we use hierarchical routing.

In hierarchical routing, routers are classified into groups called regions. Each router has
information about the routers in its own region and no information about routers in other
regions, so it keeps just one record in its table for every other region.

For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the
regions into clusters, the clusters into zones, the zones into groups, and so on.

Example

Consider an example of a two-level hierarchy with five regions.

The full routing table for router 1A has 17 entries, as shown below −

Full Table for 1A

Dest.  Line  Hops

1A - -

1B 1B 1

1C 1C 1

2A 1B 2

2B 1B 3

2C 1B 3

2D 1B 4

3A 1C 3

3B 1C 2

4A 1C 3

4B 1C 4

4C 1C 4

5A 1C 4

5B 1C 5

5C 1B 5

5D 1C 6

5E 1C 5

When routing is done hierarchically then there will be only 7 entries as shown below −

Hierarchical Table for 1A

Dest.  Line  Hops

1A - -

1B 1B 1

1C 1C 1

2 1B 2

3 1C 2

4 1C 3

5 1C 4

Unfortunately, this reduction in table space comes at the cost of increased path length.

Explanation

Step 1 − For example, the best path from 1A to 5C is via region 2, but hierarchical routing of all
traffic to region 5 goes via region 3 as it is better for most of the other destinations of region 5.

Step 2 − Consider a subnet of 720 routers. If no hierarchy is used, each router will have 720
entries in its routing table.

Step 3 − Now if the subnet is partitioned into 24 regions of 30 routers each, then each router
will require 30 local entries and 23 remote entries for a total of 53 entries.
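The arithmetic in steps 2-3 is easy to verify:

```python
# Table-size arithmetic from the 720-router example above
routers, regions = 720, 24
per_region = routers // regions            # 30 routers per region

flat_entries = routers                     # no hierarchy: one entry per router
hier_entries = per_region + (regions - 1)  # 30 local + 23 remote regions
print(flat_entries, hier_entries)          # 720 53
```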

When a device has multiple paths to reach a destination, it selects one path by preferring it
over the others. This selection process is termed routing. Routing is done by special network
devices called routers, or it can be done by means of software processes. Software-based
routers have limited functionality and limited scope.

A router is always configured with some default route. A default route tells the router where to
forward a packet if no route is found for the specific destination. In case there are multiple
paths to the same destination, the router can make its decision based on the following
information:

Hop Count

Bandwidth

Metric

Prefix-length

Delay
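The prefix-length criterion is usually applied as longest-prefix match: among all table entries covering a destination, the most specific one wins. Here is a sketch with Python's standard ipaddress module (the table entries and interface names are hypothetical):

```python
import ipaddress

# Hypothetical routing table: {destination prefix: outgoing interface}
table = {
    ipaddress.ip_network("0.0.0.0/0"): "default route",
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
}

def lookup(dest: str) -> str:
    """Longest-prefix-match lookup: prefer the most specific matching route."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return table[best]

print(lookup("10.1.2.3"))  # eth1  (/16 beats /8 and /0)
print(lookup("10.9.9.9"))  # eth0
print(lookup("8.8.8.8"))   # default route
```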

Unicast routing

Most of the traffic on the internet and intranets, known as unicast data or unicast traffic, is
sent with a specified destination. Routing unicast data over the internet is called unicast
routing. It is the simplest form of routing because the destination is already known; the router
just has to look up the routing table and forward the packet to the next hop.

Broadcast routing

By default, broadcast packets are not routed and forwarded by the routers on any network;
routers create broadcast domains. However, routers can be configured to forward broadcasts
in some special cases. A broadcast message is destined to all network devices.

Broadcast routing can be done in two ways (algorithms):

First, a router creates a data packet and then sends it to each host one by one. In this case, the
router creates multiple copies of a single data packet with different destination addresses. All
packets are sent as unicast, but because they are sent to all hosts, it simulates broadcasting.

This method consumes lots of bandwidth, and the router must know the destination address of
each node.

Secondly, when a router receives a packet that is to be broadcast, it simply floods the packet
out of all interfaces. All routers are configured in the same way.
Multicast Routing

Multicast routing is a special case of broadcast routing, with significant differences and
challenges. In broadcast routing, packets are sent to all nodes even if they do not want them.
In multicast routing, the data is sent only to the nodes that want to receive the packets.

Anycast Routing

Anycast packet forwarding is a mechanism where multiple hosts can have the same logical
address. When a packet destined to this logical address is received, it is sent to the host which
is nearest in the routing topology. Anycast routing is done with the help of the DNS server:
whenever an anycast packet is received, DNS is queried to determine where to send it, and
DNS provides the IP address that is the nearest IP configured on it.

Unicast Routing Protocols

There are two kinds of routing protocols available to route unicast packets:

Distance Vector Routing Protocol

Distance Vector is a simple routing protocol that makes routing decisions based on the number
of hops between source and destination. A route with a smaller number of hops is considered
the best route. Every router advertises its best routes to the other routers. Ultimately, all
routers build up their view of the network topology based on the advertisements of their peer
routers.
For example, Routing Information Protocol (RIP).

Link State Routing Protocol

The Link State protocol is slightly more complicated than Distance Vector. It takes into account
the states of the links of all the routers in a network. This technique helps routers build a
common graph of the entire network. All routers then calculate their best paths for routing
purposes. Examples are Open Shortest Path First (OSPF) and Intermediate System to
Intermediate System (IS-IS).

Multicast Routing Protocols


Unicast routing protocols use graphs, while multicast routing protocols use trees, i.e. spanning
trees, to avoid loops. The optimal tree is called the shortest path spanning tree.

DVMRP - Distance Vector Multicast Routing Protocol

MOSPF - Multicast Open Shortest Path First

CBT - Core Based Tree

PIM - Protocol Independent Multicast

Protocol Independent Multicast is commonly used now. It has two flavors:

PIM Dense Mode

This mode uses source-based trees. It is used in dense environments such as a LAN.

PIM Sparse Mode

This mode uses shared trees. It is used in sparse environments such as a WAN.

Congestion

Simple definition – a comparison of the allowed number of packets versus the actual number
of packets present in the network

Technical definition – when too many packets are present in the network, the condition is
called congestion

Congestion Control vs Flow Control

Congestion Control

Is all about making the subnet capable of carrying the offered traffic

Flow Control

Relates to point-to-point traffic between a sender and a receiver

Makes sure that the sender does not transmit data faster than the receiver can absorb it

Sliding Window Protocol


Example of congestion control

Two or more systems of the same type trying to send more than 100 kb of data over a 100 kbps
transmission medium

Example of flow control

Transmission of data from a supercomputer to a normal computer over a fiber optic medium
with a speed of 1 gigabit/sec

Principles of Congestion Control

Monitoring of the system – metrics where a rising number indicates increasing congestion:

Percentage of packets discarded due to lack of buffer space

Average queue length

Number of packets timed out

Average delay

System Adjustments

Congestion

Congestion is an important issue that can arise in packet switched networks. Congestion is a
situation in communication networks in which too many packets are present in a part of the
subnet, and performance degrades. Congestion in a network may occur when the load on the
network (i.e. the number of packets sent to the network) is greater than the capacity of the
network (i.e. the number of packets a network can handle). Network congestion occurs in case
of traffic overloading.

In other words, when too much traffic is offered, congestion sets in and performance degrades
sharply.
The various causes of congestion in a subnet are:

• The input traffic rate exceeds the capacity of the output lines. If, suddenly, streams of
packets start arriving on three or four input lines and all need the same output line, a queue
builds up. If there is insufficient memory to hold all the packets, packets will be lost.
Increasing the memory to unlimited size does not solve the problem: by the time packets reach
the front of the queue, they have already timed out (while waiting in the queue). When the
timer goes off, the source transmits duplicate packets that are also added to the queue. Thus
the same packets are added again and again, increasing the load all the way to the
destination.

• The routers are too slow to perform bookkeeping tasks (queueing buffers, updating tables,
etc.).
• The routers' buffers are too limited.
• Congestion in a subnet can occur if the processors are slow. A slow CPU at a router performs
routine tasks such as queueing buffers and updating tables slowly. As a result, queues build up
even though there is excess line capacity.
• Congestion is also caused by slow links. This problem would seem to be solved by using high
speed links, but that is not always the case: sometimes an increase in link bandwidth can
further worsen the congestion problem, as higher speed links may make the network more
unbalanced. Congestion can make itself worse. If a router does not have free buffers, it starts
ignoring/discarding newly arriving packets. When these packets are discarded, the sender may
retransmit them after its timer goes off, again and again, until it gets an acknowledgement for
them. These repeated transmissions add to the congestion at the sending end.

How to correct the Congestion Problem:

Congestion Control refers to techniques and mechanisms that can either prevent congestion,
before it happens, or remove congestion, after it has happened. Congestion control
mechanisms are divided into two categories, one category prevents the congestion from
happening and the other category removes congestion after it has taken place.

These two categories are:

1. Open loop

2. Closed loop

Open Loop Congestion Control

• In this method, policies are used to prevent the congestion before it happens.

• Congestion control is handled either by the source or by the destination.

• The various methods used for open loop congestion control are:
Retransmission Policy

• The sender retransmits a packet if it feels that the packet it has sent is lost or corrupted.

• However, retransmission in general may increase congestion in the network, so a good
retransmission policy is needed to prevent congestion.

• The retransmission policy and the retransmission timers need to be designed to optimize
efficiency and at the same time prevent the congestion.

Window Policy

• To implement the window policy, the selective reject window method is used for congestion
control.

• Selective reject is preferred over Go-back-N because in Go-back-N, when the timer for a
packet times out, several packets are resent, although some may have arrived safely at the
receiver. This duplication can make congestion worse.

• The selective reject method resends only the specific lost or damaged packets.

Acknowledgement Policy

• The acknowledgement policy imposed by the receiver may also affect congestion.

• If the receiver does not acknowledge every packet it receives it may slow down the sender
and help prevent congestion.

• Acknowledgments also add to the traffic load on the network. Thus, by sending fewer
acknowledgements we can reduce load on the network.

• To implement it, several approaches can be used:

1. A receiver may send an acknowledgement only if it has a packet to be sent.

2. A receiver may send an acknowledgement when a timer expires.

3. A receiver may also decide to acknowledge only N packets at a time.

Discarding Policy
• A router may discard less sensitive packets when congestion is likely to happen.

• Such a discarding policy may prevent congestion and at the same time may not harm the
integrity of the transmission.

Admission Policy

• An admission policy, which is a quality-of-service mechanism, can also prevent congestion in
virtual circuit networks.

• Switches first check the resource requirements of a flow before admitting it to the network.

• A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a possibility of future congestion.

Closed Loop Congestion Control

• Closed loop congestion control mechanisms try to remove the congestion after it happens.

• The various methods used for closed loop congestion control are:

Backpressure

• Backpressure is a node-to-node congestion control technique that starts with a node and propagates in the opposite direction of the data flow.

• The backpressure technique can be applied only to virtual circuit networks, in which each node knows the upstream node from which a data flow is coming.

• In this method of congestion control, the congested node stops receiving data from the
immediate upstream node or nodes.

• This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes.

• As shown in the figure, node 3 is congested; it stops receiving packets and informs its upstream node 2 to slow down. Node 2, in turn, may become congested and informs node 1 to slow down. Node 1 may then become congested and informs the source node to slow down. In this way the congestion is alleviated: the pressure on node 3 is moved backward to the source to remove the congestion.

Choke Packet

• In this method of congestion control, the congested router or node sends a special packet, called a choke packet, to the source to inform it about the congestion.

• Here, the congested node does not inform its upstream node about the congestion, as is done in the backpressure method.

• In the choke packet method, the congested node sends a warning directly to the source station; the intermediate nodes through which the packet has traveled are not warned.

Implicit Signaling

• In implicit signaling, there is no communication between the congested node or nodes and
the source.

• The source guesses that there is congestion somewhere in the network when it does not
receive any acknowledgment. Therefore, the delay in receiving an acknowledgment is
interpreted as congestion in the network.

• On sensing this congestion, the source slows down.

• This type of congestion control policy is used by TCP.

Explicit Signaling

• In this method, the congested nodes explicitly send a signal to the source or destination to
inform about the congestion.

• Explicit signaling is different from the choke packet method. In the choke packet method, a separate packet is used for this purpose, whereas in explicit signaling, the signal is included in the packets that carry data.

• Explicit signaling can occur in either the forward direction or the backward direction.

• In backward signaling, a bit is set in a packet moving in the direction opposite to the
congestion. This bit warns the source about the congestion and informs the source to slow
down.
• In forward signaling, a bit is set in a packet moving in the direction of congestion. This bit
warns the destination about the congestion. The receiver in this case uses policies such as
slowing down the acknowledgements to remove the congestion

Congestion Control in Virtual Circuit Subnet

Admission Control Technique

Explanation: The idea is that once congestion occurs, no new virtual circuits are set up until the congestion is resolved.

This means the transport layer is put on hold.

Alternative Approach

Explanation: It allows the creation of new virtual circuits, but new circuits must be routed carefully around congested areas.

Another Strategy

Explanation: Set up the virtual circuit only after negotiating an agreement between the host and the network, covering:

Shape and volume of traffic

QoS (quality of service)

Resources reserved for the new virtual circuit

The disadvantage is possible wastage of reserved resources.


Congestion Control in Datagram Subnet (and also in virtual circuit subnets)

Choke packets

Load shedding

Jitter control

Choke Packet Technique:

A choke packet is a specialized packet used for flow control in a network.

The congested router sends a choke packet back to the source host, giving it the destination found in the packet.

When the source gets the choke packet, it is required to reduce the traffic sent to the specified
destination by X percent.

Since more packets headed for the same destination may already be in transit, the host ignores further choke packets referring to that destination for a fixed time interval.

After that period has expired, the host listens for more choke packets for another interval.

If no packets arrive during the listening period, the host may increase the flow again.
Hosts can reduce traffic by adjusting their policy parameters.

Routers can maintain threshold, depending on them the choke packet can contain a mild
warning, a stern warning, or an ultimatum.
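The host-side reaction described above can be sketched as follows; the 50% reduction, the 10% increase, and the class and method names are assumptions for illustration, not values fixed by the technique.

```python
class ChokeReactingHost:
    """Sketch of a source host reacting to choke packets (values illustrative)."""

    def __init__(self, rate_mbps):
        self.rate = rate_mbps
        self.ignoring = False  # True during the fixed "ignore" interval

    def on_choke_packet(self, reduce_pct=50):
        if not self.ignoring:
            self.rate *= 1 - reduce_pct / 100  # cut traffic by X percent
            self.ignoring = True               # ignore further chokes for a while

    def end_listen_interval(self, choke_seen):
        self.ignoring = False
        if not choke_seen:
            self.rate *= 1.1  # no chokes heard: cautiously increase the flow

h = ChokeReactingHost(10.0)
h.on_choke_packet()                      # rate drops to 5.0
h.on_choke_packet()                      # ignored: still inside the ignore interval
h.end_listen_interval(choke_seen=False)  # quiet period: rate creeps back up
print(h.rate)
```

The ignore interval prevents the host from over-reacting to a burst of choke packets that all describe the same congestion event.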

Load Shedding

Load shedding is a technique to remove excess load from the network in order to keep packet processing up with the arrival rate: when routers cannot handle all the traffic offered to them, they simply discard packets.

Admission control, choke packets, and fair queuing are techniques suitable for light congestion.

But if these techniques cannot make the congestion disappear, then the load shedding technique is used.

The principle of load shedding states that when routers are being inundated by packets that they cannot handle, they should just throw packets away.

A router flooded with packets due to congestion can drop packets at random, but which packet is best to drop depends on the type of traffic. The policy for file transfer is called wine (old is better than new), and that for multimedia is called milk (new is better than old). To implement a more intelligent discard policy, cooperation from the sender is essential: applications should mark their packets in priority classes so that, when packets must be discarded, routers can first drop packets from the lowest class.

Jitter Control

The jitter can be bounded by computing the expected transit time for each hop along the path.
When a packet arrives at a router, the router checks to see how much the packet is behind or
ahead of its schedule. This information is stored in the packet and updated at each hop. If the
packet is ahead of schedule, it is held just long enough to get it back on schedule. If it is behind
schedule, the router tries to get it out the door quickly.

In fact, the algorithm for determining which of several packets competing for an output line
should go next can always choose the packet furthest behind in its schedule.
In this way, packets that are ahead of schedule get slowed down and packets that are behind
schedule get speeded up, in both cases reducing the amount of jitter.
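The scheduling rule in the last paragraph ("always choose the packet furthest behind in its schedule") can be sketched as below; the tuple layout and function name are assumptions for illustration.

```python
def pick_next(packets, now):
    """Pick the packet furthest behind its schedule.

    packets: list of (scheduled_departure_time, packet_id) tuples.
    The packet whose scheduled time is earliest relative to 'now' is the
    most overdue, so it should be transmitted first.
    """
    return min(packets, key=lambda p: p[0] - now)

queue = [(12.0, "a"), (9.5, "b"), (10.2, "c")]
print(pick_next(queue, now=10.0))  # (9.5, 'b') is the most behind schedule
```

Packet "a" is ahead of schedule (due at 12.0), so it is held back, while the overdue packet "b" is pushed out first; repeating this choice at every router squeezes arrivals toward their expected times and reduces jitter.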

Traffic Shaping

It is about regulating the average rate of data flow.

It is a method of congestion control that shapes the data flow before packets enter the network.

At connection set-up time, the sender and carrier negotiate a traffic pattern (shape)

There are two types of Traffic shaping algorithm: -

1. Leaky Bucket Algorithm.

2. Token Bucket Algorithm.

Leaky bucket Algorithm

The Leaky Bucket Algorithm is used to control the rate at which traffic enters the network.

It is implemented as a single-server queue with constant service time.

If the bucket (buffer) overflows then packets are discarded.

In this algorithm the input rate can vary but the output rate remains constant.

This algorithm smooths bursty traffic into fixed-rate traffic by averaging the data rate.

Algorithm:

Step - 1 : Initialize the counter to ‘n’ at every tick of clock.

Step - 2 : If n is greater than the size of the packet at the front of the queue, send the packet into the network and decrement the counter by the packet size. Repeat this step until n is less than the size of the packet at the front of the queue.

Step - 3 : Reset the counter and go to Step - 1.
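The three steps above can be sketched as one tick of a byte-counting leaky bucket; the function name and the packet sizes are illustrative assumptions.

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick: n is the byte budget (Step 1), refilled each tick.

    queue: deque of packet sizes (bytes) waiting in the bucket.
    Returns the sizes of the packets sent during this tick.
    """
    sent = []
    # Step 2: send head-of-line packets while the budget covers them.
    while queue and queue[0] <= n:
        size = queue.popleft()
        n -= size
        sent.append(size)
    # Step 3: the caller resets the budget at the next tick.
    return sent

bucket = deque([200, 400, 500, 300])
print(leaky_bucket_tick(bucket, 1000))  # [200, 400]; the 500-byte packet must wait
```

Note that only the head of the queue is examined, so the 300-byte packet waits behind the 500-byte one even though it would fit; this preserves packet order.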


Example: Consider a frame relay network having a bucket capacity of 1 Mb and data input at the rate of 25 Mbps. Calculate:

1. The time needed to fill the bucket.

2. If the output rate is 2 Mbps, the time needed to empty the bucket.

Ans. Here, C (capacity of bucket) = 1 Mb = 1000 kb, data input rate = 25 Mbps, output rate = 2 Mbps.

1. T = C / input rate = 1000/25 = 40 msec

2. T = C / output rate = 1000/2 = 500 msec
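The arithmetic in this example can be checked with a short computation (dividing kb by Mbps yields milliseconds):

```python
capacity_kb = 1000  # C = 1 Mb = 1000 kb
input_mbps = 25
output_mbps = 2

fill_ms = capacity_kb / input_mbps    # time to fill the bucket at the input rate
drain_ms = capacity_kb / output_mbps  # time to empty it at the fixed output rate
print(fill_ms, drain_ms)              # 40.0 500.0
```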

Token Bucket Algorithm


Compared to the Leaky Bucket Algorithm, the Token Bucket Algorithm allows the output rate to vary depending on the size of the burst.

In this algorithm the bucket holds tokens; to transmit a packet, the host must capture and destroy one token.

Tokens are generated by a clock at the rate of one token every ∆t sec.

Idle hosts can capture and save up tokens (up to the max. size of the bucket) in order to send
larger bursts later.

Algorithm:

Step - 1 : A token is added at every ∆t time.

Step - 2 : The bucket can hold at most b tokens. If a token arrives when the bucket is full, it is discarded.

Step - 3 : When a packet of m bytes arrives, m tokens are removed from the bucket and the packet is sent into the network.

Step - 4 : If fewer than m tokens are available, no tokens are removed from the bucket and the packet is considered non-conformant. The non-conformant packet may be enqueued for subsequent transmission once sufficient tokens have accumulated in the bucket. If C is the maximum capacity of the bucket, ρ is the token arrival rate, and M is the maximum output rate, then the burst length S can be calculated from:

C + ρS = MS
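Steps 1–4 can be sketched as a small class; the names and the explicit time parameter are illustrative assumptions (a real implementation might add tokens from a periodic clock tick instead).

```python
class TokenBucket:
    """Minimal token-bucket sketch following Steps 1-4 (names illustrative)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second (Step 1)
        self.capacity = capacity  # b: the bucket holds at most b tokens (Step 2)
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0

    def allow(self, m, now):
        """Try to send a packet needing m tokens at time 'now' (seconds)."""
        # Steps 1-2: tokens accumulate with time, capped at the capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= m:
            self.tokens -= m      # Step 3: remove m tokens and send the packet
            return True
        return False              # Step 4: non-conformant; tokens are untouched

tb = TokenBucket(rate=100, capacity=500)
print(tb.allow(400, now=0.0))  # True: the bucket starts full
print(tb.allow(400, now=1.0))  # False: only 100 + 100 = 200 tokens by t = 1 s
print(tb.allow(400, now=3.0))  # True: 200 + 200 = 400 tokens by t = 3 s
```

An idle host accumulates tokens (up to the capacity), which is exactly what lets it send a burst later at the full output rate.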

Example: Consider a frame relay network in which data arrives at the rate of 25 Mbps for 40 msec (1 Mb in total). The token arrival rate is 2 Mbps and the capacity of the bucket is 500 kb, with maximum output rate 25 Mbps. Using C + ρS = MS, calculate:

1. The Burst Length.

2. Total output time.

Ans. Here, C (capacity of bucket) = 500 kb, M = 25 Mbps, ρ = 2 Mbps.

1. S = C / (M − ρ) = 500/(25 − 2) = 500/23 ≈ 21.7 msec ≈ 22 msec

2. For 22 msec the output rate is 25 Mbps; after that, the output rate drops to 2 Mbps, the token arrival rate. Draining the remaining 500 kb therefore takes 500/2 = 250 msec (T = C / output rate).

Therefore, total output time = 22 + 250 = 272 msec.
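Rearranging C + ρS = MS gives S = C / (M − ρ); the numbers in this example can be checked as follows (dividing kb by Mbps yields milliseconds):

```python
C, M, rho = 500, 25, 2  # bucket capacity (kb), max output rate (Mbps), token rate (Mbps)

S = C / (M - rho)       # burst length at the full output rate, in msec
drain = C / rho         # time to drain the remaining 500 kb at the token rate
print(round(S, 1), round(S + drain))  # 21.7 272
```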

Packet switching

It is a method of transferring data across a network in the form of packets. In order to transfer a file quickly and efficiently over the network and to minimize transmission latency, the data is broken into small pieces of variable length, called packets. At the destination, all the packets belonging to the same file have to be reassembled. Packet switching uses the store-and-forward technique: each hop first stores a packet and then forwards it. This is beneficial because packets may be discarded at any hop for some reason. More than one path is possible between a pair of source and destination. Each packet contains the source and destination addresses, using which it travels independently through the network.

Modes of Packet Switching:

Virtual Circuits:

1. It is connection-oriented, meaning that there is a reservation of resources like buffers, CPU, bandwidth, etc. for the time in which the newly set up VC is going to be used by a data-transfer session.

2. The first packet reserves resources for the subsequent packets, which as a result follow the same path for the whole connection time.

3. Since all the packets are going to follow the same path, a global header is required only for
the first packet of the connection and other packets generally don’t require global headers.

4. Since data follows a particular dedicated path, packets reach the destination in order.

5. From above points, it can be concluded that Virtual Circuits are highly reliable means of
transfer.

6. Since each new connection requires setup with resource reservation and extra information handling at routers, virtual circuits are simply costly to implement.

Datagram Networks:

1. It is a connectionless service. There is no need for reservation of resources as there is no dedicated path for a connection session.

2. All packets are free to go to any path on any intermediate router which is decided on the go
by dynamically changing routing tables on routers.
3. Since every packet is free to choose any path, each packet must carry a header with proper information about the source, the destination, and the upper-layer data.

4. The connectionless property makes data packets reach the destination in any order; they need not arrive in the order in which they were sent.

5. Datagram networks are not as reliable as virtual circuits.

6. But it is always easier and more cost-efficient to implement datagram networks, as there is no extra overhead of reserving resources and establishing a dedicated path each time an application has to communicate.

Tunneling

Tunneling is a protocol that allows for the secure movement of data from one network to
another. Tunneling involves allowing private network communications to be sent across a
public network, such as the Internet, through a process called encapsulation.

Internetworking: routing

Internet routing is the process of transmitting and routing IP packets over the Internet between
two or more nodes.

2 levels of routing:
Within a network:

Intranetwork routing

Interior gateway protocol

Between networks:

Internetwork routing

Exterior gateway protocol

Internetwork routing

Graph construction

Every router can directly access routers on the same network

Packet forwarding + tunneling if necessary

Differences with internetwork routing

Routes cross international boundaries, so national laws apply

Agreements between operators (transit traffic)


Fragmentation

As datagrams move from source to destination, they travel through different networks to reach
their destination.

Each network imposes some maximum size restrictions on its packet. Thus network designers
are not free to choose any maximum packet size they wish.

The maximum packet size varies from network to network.

Problem: Large packet through network with smaller maximum packet size

Solution:

Break large packet into fragments

Send each fragment as a separate packet

Reassembly: transparent vs. nontransparent fragmentation

Transparent fragmentation

Strategy

Gateway breaks large packet into fragments

Each fragment addressed to same exit gateway

Exit gateway does reassembly


Simple, but some problems

Gateway must know when it has all pieces

Performance loss: all fragments through same gateway

Overhead: repeatedly reassemble and refragment

Example: ATM segmentation

Nontransparent fragmentation

Strategy

Gateway breaks large packet into fragments

Each fragment is forwarded to destination


problems

Every host must be able to reassemble

More header overhead

Example: IP fragmentation
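Nontransparent fragmentation, as IP performs it, can be sketched as follows; the dictionary layout, the 20-byte header size, and the function name are illustrative assumptions rather than the actual IP header format.

```python
def fragment(payload, mtu, header=20):
    """Split a payload into fragments that each fit within the MTU.

    Every fragment carries its own header plus an offset and a
    more-fragments flag, so only the final destination reassembles.
    """
    chunk = mtu - header  # bytes of data that fit beside the header
    frags = []
    offset = 0
    while offset < len(payload):
        frags.append({
            "offset": offset,                     # where this piece belongs
            "mf": offset + chunk < len(payload),  # do more fragments follow?
            "data": payload[offset:offset + chunk],
        })
        offset += chunk
    return frags

pieces = fragment(b"x" * 100, mtu=60)
print([(f["offset"], f["mf"], len(f["data"])) for f in pieces])
# [(0, True, 40), (40, True, 40), (80, False, 20)]
```

The offset and more-fragments flag are exactly the extra header information the section mentions: they let the destination reassemble without any gateway on the path having to hold all the pieces.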
