
LECTURE-12

Congestion
►If the load on the network, i.e. the number of packets sent to
the network, is greater than the capacity of the network, i.e. the
number of packets the network can handle, the network becomes congested.
►Too many packets present in (a part of) the network cause
packet delay and loss that degrade performance.
►This situation is called congestion.
►The network and transport layers share the responsibility for
handling congestion.
►Since congestion occurs within the network, it is the network
layer that directly experiences it and must ultimately
determine what to do with the excess packets.
►However, the most effective way to control congestion is to
reduce the load that the transport layer is placing on the
network.
►This requires the network and transport layers to work
together.
Causes of Congestion
• Congestion occurs when a router receives data faster
than it can send it
– Insufficient bandwidth
– Slow hosts
– Data simultaneously arriving from multiple lines
destined for the same outgoing line.
• The system is not balanced
– Correcting the problem at one router will probably
just move the bottleneck to another router.
Congestion Causes More Congestion
– Incoming messages must be placed in queues
• The queues have a finite size
– Overflowing queues will cause packets to be dropped
– Long queue delays will cause packets to be resent
– Dropped packets will cause packets to be resent
• Senders that are trying to transmit to a congested
destination also become congested
– They must continually resend packets that have been
dropped or that have timed-out
– They must continue to hold outgoing/unacknowledged
messages in memory.
Congestion Control
• Congestion control refers to techniques and mechanisms that
can either prevent congestion, before it happens, or remove
congestion, after it has happened.
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent
congestion before it happens. In these mechanisms, congestion
control is handled by either the source or the destination.
Retransmission Policy :
 Retransmission is sometimes unavoidable.
 If the sender believes that a sent packet is lost or corrupted, the
packet needs to be retransmitted. Retransmission in general may
increase congestion in the network.
 A good retransmission policy can therefore help prevent congestion:
the retransmission policy and the retransmission timers must be
designed to optimize efficiency and at the same time prevent
congestion.
Window Policy :
 The type of window at the sender may also affect congestion.
 The Selective Repeat window is better than the Go-Back-N
window for congestion control.
Acknowledgment Policy :
 If the receiver does not acknowledge every packet it receives,
it may slow down the sender and help prevent congestion.
 A receiver may send an acknowledgment only if it has a
packet to be sent or a special timer expires.
 A receiver may decide to acknowledge only N packets at a
time.
Discarding Policy:
 A good discarding policy by the routers may prevent
congestion and at the same time may not harm the integrity
of the transmission.
Admission Policy :
 An admission policy, which is a quality-of-service mechanism,
can also prevent congestion in virtual-circuit networks.
 Switches first check the resource requirements of a flow before
admitting it to the network. A router can deny
establishing a virtual-circuit connection if there is congestion
in the network or if there is a possibility of future congestion.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate
congestion after it happens. Several mechanisms have been
used by different protocols.
Backpressure :
 Backpressure is a technique in which a congested node stops
receiving data from the immediate upstream node or nodes.
 This may cause the upstream node or nodes to become
congested, and they, in turn, reject data from their own
upstream node or nodes.
 Node III in the figure has more input data than it can handle. It drops
some packets in its input buffer and informs node II to slow down.
 Node II, in turn, may be congested because it is slowing down the
output flow of data. If node II is congested, it informs node I to slow
down, which in turn may create congestion.
 If so, node I informs the source of the data to slow down. This, in time,
alleviates the congestion.
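The hop-by-hop propagation described above can be sketched as a chain of nodes with bounded queues, where a node accepts a packet only if its queue has room. This is a minimal illustration, not a real router implementation; all names and capacities are hypothetical.

```python
from collections import deque

class Node:
    """A node with a bounded input queue; refuses packets when full."""
    def __init__(self, name, capacity):
        self.name = name
        self.queue = deque()
        self.capacity = capacity

    def can_accept(self):
        return len(self.queue) < self.capacity

def forward(upstream, downstream):
    """Move one packet downstream only if the downstream node accepts it."""
    if upstream.queue and downstream.can_accept():
        downstream.queue.append(upstream.queue.popleft())
        return True
    return False   # backpressure: upstream must hold its packets

# Chain: source -> node I -> node II -> node III
node_iii = Node("III", capacity=2)
node_ii = Node("II", capacity=2)

# Node III is already full, so it exerts backpressure on node II.
node_iii.queue.extend(["p1", "p2"])
node_ii.queue.extend(["p3", "p4"])

moved = forward(node_ii, node_iii)
print(moved, len(node_ii.queue))  # False 2 -- node II stays congested
```

When node III later drains its queue, `forward` succeeds again and the pressure on node II (and, transitively, node I and the source) is relieved.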
Choke Packet :
 A choke packet is a packet sent by a node to the source to inform it
of congestion.
 In the choke packet method, the warning is from the router, which
has encountered congestion, to the source station directly.
 The intermediate nodes through which the packet has
traveled are not warned.
 When a router in the Internet is overwhelmed with IP
datagrams, it may discard some of them; but it informs the
source host.
 The warning message goes directly to the source station; the
intermediate routers through which it passes do not take any action.
Implicit Signaling :
 In implicit signaling, there is no communication between the congested node
or nodes and the source.
 The source guesses that there is congestion somewhere in the network from
other symptoms.
 For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested
so the source should slow down.
Explicit Signaling :
 The node that experiences congestion can explicitly send a signal to the source
or destination.
 The signal is included in the packets that carry data. Explicit signaling, in Frame
Relay congestion control, can occur in either the forward or the backward
direction.
(i) Backward Signaling
A bit can be set in a packet moving in the direction opposite
to the congestion. This bit can warn the source that there is
congestion and that it needs to slow down to avoid the
discarding of packets. 
(ii) Forward Signaling
A bit can be set in a packet moving in the direction of the
congestion. This bit can warn the destination that there is
congestion. The receiver in this case can use policies, such as
slowing down the acknowledgments, to alleviate the
congestion.
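In Frame Relay, these are the BECN and FECN bits in the frame header. The two directions can be sketched as follows; the frame representation and function name here are hypothetical, for illustration only.

```python
def mark_congestion(frame, toward_destination):
    """Set an explicit congestion-notification bit on a frame passing a
    congested node.

    FECN (forward) is set on frames moving toward the destination;
    BECN (backward) on frames moving toward the source.
    """
    if toward_destination:
        frame["fecn"] = 1   # warn the destination (forward signaling)
    else:
        frame["becn"] = 1   # warn the source (backward signaling)
    return frame

forward_frame = mark_congestion({"fecn": 0, "becn": 0}, toward_destination=True)
backward_frame = mark_congestion({"fecn": 0, "becn": 0}, toward_destination=False)
print(forward_frame)   # {'fecn': 1, 'becn': 0}
print(backward_frame)  # {'fecn': 0, 'becn': 1}
```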
Open-Loop Control
• Network performance is guaranteed to all traffic flows that
have been admitted into the network
• Initially for connection-oriented networks
• Key Mechanisms
– Admission Control
– Policing
– Traffic Shaping
– Traffic Scheduling
Admission Control
• Flows negotiate a contract with the network
• Specify requirements:
– Peak, average, minimum bit rate
– Maximum burst size
– Delay, loss requirement
• Network computes resources needed
– “Effective” bandwidth
• If the flow is accepted, the network allocates resources to
ensure QoS is delivered as long as the source conforms to
the contract

(Figure: typical bit rate demanded by a variable-bit-rate
information source, showing peak rate and average rate over time.)
Policing
• Network monitors traffic flows continuously to ensure they
meet their traffic contract
• When a packet violates the contract, network can discard or
tag the packet giving it lower priority
• If congestion occurs, tagged packets are discarded first
• Leaky Bucket Algorithm is the most commonly used policing
mechanism
– Bucket has specified leak rate for average contracted rate
– Bucket has specified depth to accommodate variations in arrival
rate
– Arriving packet is conforming if it does not result in overflow
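The policing rule above can be sketched as follows, assuming a fluid-style leaky bucket (parameter values hypothetical): the bucket drains at the contracted rate, and an arriving packet conforms only if adding it does not overflow the bucket depth.

```python
class LeakyBucketPolicer:
    """Leaky-bucket policing: bucket drains at `rate` units/sec, depth `depth`."""
    def __init__(self, rate, depth):
        self.rate = rate        # contracted average rate
        self.depth = depth      # bucket depth (tolerated burstiness)
        self.level = 0.0        # current bucket content
        self.last_time = 0.0

    def conforms(self, arrival_time, size=1.0):
        # Drain the bucket for the elapsed time, then try to add the packet.
        elapsed = arrival_time - self.last_time
        self.level = max(0.0, self.level - self.rate * elapsed)
        self.last_time = arrival_time
        if self.level + size <= self.depth:
            self.level += size
            return True         # conforming packet
        return False            # violates contract: discard or tag

policer = LeakyBucketPolicer(rate=1.0, depth=3.0)
# Back-to-back arrivals at t=0: the first three fit the depth, the fourth does not.
results = [policer.conforms(0.0) for _ in range(4)]
print(results)  # [True, True, True, False]
```

A non-conforming packet would then be discarded or tagged with lower priority, as described above.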
Traffic Shaping
• Another method of congestion control is to “shape” the
traffic before it enters the network.
• Traffic shaping controls the rate at which packets are sent
(not just how many). Used in ATM and Integrated Services
networks.
• At connection set-up time, the sender and carrier negotiate
a traffic pattern (shape).
• Two traffic shaping algorithms are:
– Leaky Bucket
– Token Bucket

The Leaky Bucket Algorithm
• The Leaky Bucket Algorithm is used to control the rate in a
network. It is implemented as a single-server queue with
constant service time. If the bucket (buffer) overflows, then
packets are discarded.
• The leaky bucket enforces a constant output rate (average
rate) regardless of the burstiness of the input. Does nothing
when input is idle.
• The host injects one packet per clock tick onto the network.
• This results in a uniform flow of packets, smoothing out
bursts and reducing congestion.

(Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.)
• When packets are the same size (as in ATM cells), sending one
packet per tick is fine. For variable-length packets, though,
it is better to allow a fixed number of bytes per tick. E.g.
1024 bytes per tick will allow one 1024-byte packet, two
512-byte packets, or four 256-byte packets per tick.

A leaky bucket algorithm shapes bursty traffic into fixed-rate
traffic by averaging the data rate. It may drop packets if the
bucket is full.
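A minimal sketch of the byte-counting leaky bucket described above (function name and parameters are illustrative): packets are buffered up to the bucket size, and each tick sends whole packets up to a fixed byte budget.

```python
from collections import deque

def leaky_bucket(arrivals, bucket_size, bytes_per_tick):
    """Byte-counting leaky bucket shaper.

    `arrivals[t]` is a list of packet sizes (bytes) arriving at tick t.
    Packets that would overflow the bucket (buffer) are dropped; each tick
    sends whole packets while a `bytes_per_tick` budget allows.
    """
    queue = deque()
    queued_bytes = 0
    sent, dropped = [], []
    for tick, packets in enumerate(arrivals):
        for size in packets:
            if queued_bytes + size <= bucket_size:
                queue.append(size)
                queued_bytes += size
            else:
                dropped.append((tick, size))   # bucket overflow: discard
        budget = bytes_per_tick
        out = []
        while queue and queue[0] <= budget:
            size = queue.popleft()
            queued_bytes -= size
            budget -= size
            out.append(size)
        sent.append(out)
    return sent, dropped

# A 1024-byte burst arriving at tick 0 leaves at a steady 512 bytes/tick.
sent, dropped = leaky_bucket([[512, 512], [], []],
                             bucket_size=2048, bytes_per_tick=512)
print(sent)     # [[512], [512], []]
print(dropped)  # []
```

The burst is smoothed into a uniform output, which is exactly the behavior (and the restrictiveness) the slides describe.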
Leaky Bucket Traffic Shaper
(Figure: incoming traffic enters a buffer of size N; a server plays
packets out at a constant rate as shaped traffic.)

• Buffer incoming packets
• Play out periodically to conform to parameters
• Surges in arrivals are buffered and smoothed out
• Possible packet loss due to buffer overflow
• Too restrictive, since conforming traffic does not need to be
completely smooth
Token Bucket Algorithm
• In contrast to the LB, the Token Bucket Algorithm allows
the output rate to vary, depending on the size of the burst.
• In the TB algorithm, the bucket holds tokens. To transmit a
packet, the host must capture and destroy one token.
• Tokens are generated by a clock at the rate of one token
every t sec.
• Idle hosts can capture and save up tokens (up to the max.
size of the bucket) in order to send larger bursts later.

(Figure: the token bucket (a) before and (b) after a transmission.)
Token Bucket Traffic Shaper
(Figure: tokens arrive periodically into a token bucket of size K; an
incoming packet must have sufficient tokens before admission into the
network. Incoming traffic is buffered in a queue of size N and served
as shaped traffic.)

• Token rate regulates transfer of packets
• If sufficient tokens are available, packets enter the network
without delay
• K determines how much burstiness is allowed into the network
Leaky Bucket vs Token Bucket
• LB discards packets; TB does not. TB discards tokens.
• With TB, a packet can only be transmitted if there are
enough tokens to cover its length in bytes.
• LB sends packets at an average rate. TB allows for large
bursts to be sent faster by speeding up the output.
• TB allows saving up tokens (permissions) to send large
bursts. LB does not allow saving.
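The contrast above can be sketched with a simple token bucket (class and parameter names are illustrative): tokens accumulate while the host is idle, so a later burst can be sent at once, and a packet needs enough tokens to cover its size.

```python
class TokenBucket:
    """Token bucket: tokens accumulate at `token_rate` per tick, up to
    `capacity`. A packet is sent only if enough tokens cover its size."""
    def __init__(self, token_rate, capacity):
        self.token_rate = token_rate
        self.capacity = capacity
        self.tokens = 0

    def tick(self):
        # Tokens saved while idle, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + self.token_rate)

    def try_send(self, size):
        if size <= self.tokens:
            self.tokens -= size    # capture and destroy the tokens
            return True
        return False               # not enough tokens: packet must wait

tb = TokenBucket(token_rate=100, capacity=500)
for _ in range(5):                 # host stays idle for 5 ticks...
    tb.tick()
burst_ok = tb.try_send(400)        # ...then emits a 400-byte burst at once
small_ok = tb.try_send(200)       # only 100 tokens remain: must wait
print(burst_ok, small_ok)          # True False
```

Note that, unlike the leaky bucket, nothing here discards packets; a packet simply waits until enough tokens have arrived.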

Load Shedding
• When buffers become full, routers simply discard packets.
• Which packet is chosen to be the victim depends on the
application and on the error strategy used in the data link
layer.
• For a file transfer, for example, we cannot discard older
packets, since this would cause a gap in the received data.
• For real-time voice or video, it is probably better to throw
away old data and keep new packets.
• Get the application to mark packets with discard priority.
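The last point can be sketched as priority-based discarding (field names and priority values are hypothetical): when the buffer overflows, the router drops the packets the application has marked as most expendable.

```python
def shed_load(queue, capacity):
    """Keep at most `capacity` packets, dropping those with the highest
    discard priority first (larger value = more willing to be dropped)."""
    kept = sorted(queue, key=lambda p: p["discard_priority"])
    return kept[:capacity], kept[capacity:]

packets = [
    {"id": 1, "discard_priority": 0},   # e.g. a key video frame
    {"id": 2, "discard_priority": 2},   # e.g. redundant/old data
    {"id": 3, "discard_priority": 1},
]
kept, dropped = shed_load(packets, capacity=2)
print([p["id"] for p in kept])     # [1, 3]
print([p["id"] for p in dropped])  # [2]
```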

