REPORT
On
CONGESTION CONTROL
NOIDA (U.P)
ACKNOWLEDGEMENT
The written word has an unfortunate tendency to reduce genuine gratitude to a stiff formality, but I have no other way to record my feelings permanently.
First of all, I, Zafaryab Haider of BT-CS, would like to express my deep sense of gratitude towards my guide Mr. Braham Deo Sah, Lecturer, Computer Science & Engineering Department, for his valuable guidance, constant encouragement and inspiring efforts towards the completion of this dissertation. Without his efforts this dissertation could not have been completed.
I would like to thank Mr. Mohd Haider, Head of the Computer Science & Engineering Department, for the college facilities he provided.
I am also thankful to my friends for their timely advice, moral support and encouragement.
I also acknowledge the co-operation of all the other individuals who directly & indirectly helped me in making this report a success.
ABSTRACT
Congestion is said to occur in the network when the resource demands exceed the
capacity and packets are lost due to too much queuing in the network. During congestion, the
network throughput may drop to zero and the path delay may become very high. A congestion
control scheme helps the network to recover from the congestion state.
A congestion avoidance scheme allows a network to operate in the region of low delay
and high throughput. Such schemes prevent a network from entering the congested state.
Congestion avoidance is a prevention mechanism while congestion control is a recovery
mechanism.
We compare the concept of congestion avoidance with those of flow control and
congestion control. A number of possible alternatives for congestion avoidance have been
identified, and from these a few were selected for study. The criteria for selection and the goals
for these schemes have been described. In particular, we wanted the schemes to be globally
efficient, fair, dynamic, convergent, robust, distributed and configuration independent. These
goals, and the test cases used to verify whether a particular scheme meets them, have been described.
We model the network and the user policies for congestion avoidance as a feedback
control system. The key components of a generic congestion avoidance scheme are: congestion
detection, congestion feedback, feedback selector, signal filter, decision function, and
increase/decrease algorithms. These components have been explained.
The congestion avoidance research was done using a combination of analytical modeling and
simulation techniques. The features of the simulation model used have been described. This is the
first report in a series on congestion avoidance schemes; the other reports in the series describe
the application of these ideas, leading to the development of specific congestion avoidance schemes.
INDEX
Acknowledgement
Abstract
1. Introduction
2. Congestion Control
2.1 What is Congestion?
2.2 Causes of Congestion
3. Principles of Congestion Control
4. Congestion Control Techniques
4.1 Open Loop Techniques
4.2 Closed Loop Techniques
4.3 Load Shedding
5. Example of Congestion Control in TCP
6. Congestion Control at Routers
7. Traffic Shaping
7.1 Leaky Bucket
7.2 Token Bucket
8. Conclusion
9. References
INTRODUCTION
When the number of packets injected into the network is within its carrying capacity,
they are all delivered, except for a few that must be rejected due to transmission errors, and the
number delivered is proportional to the number of packets sent. However, as traffic increases
too far, the routers are no longer able to cope and begin to lose packets. This tends to make
matters worse. At very high traffic, performance collapses completely and almost no packets are
delivered. In the following sections, the causes of congestion, the effects of congestion and
various congestion control techniques are discussed in detail.
CONGESTION
WHAT IS CONGESTION?
Congestion occurs when the source sends more packets than the destination can handle;
when congestion occurs, performance degrades.
Packets are normally stored temporarily in the buffers of the source and the destination
before being forwarded to the upper layers. Congestion occurs when these buffers get filled on the
destination side. At a very high traffic rate, performance collapses completely and almost no packets
are delivered.
This can be demonstrated through the graph given below:-
PRINCIPLES OF CONGESTION CONTROL:-
Congestion control refers to the techniques and mechanisms that can either prevent congestion,
before it happens, or remove the congestion, after it has happened.
The main steps followed in congestion control are:-
1) Closed loop solutions, which are based on a feedback loop:-
• Monitor the system to detect when and where congestion occurs.
• Pass this information to places where action can be taken.
• Adjust system operation to correct the problem.
2) Open loop solutions, which attempt to solve the problem through good design, because once
the system is up and running, midcourse corrections are not made. This includes deciding when to
accept new traffic, deciding when to discard packets and which ones, and making scheduling
decisions at various points in the network.
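The three closed-loop steps above can be sketched as a small feedback loop. This is an illustrative sketch only: the `Router`, `Sender` and `THRESHOLD` names are assumptions invented here, not part of any real protocol stack.

```python
# Hypothetical sketch of closed-loop congestion control:
# (1) monitor, (2) feed back, (3) adjust.

THRESHOLD = 8  # queue depth beyond which we declare congestion (assumed)

class Router:
    def __init__(self):
        self.queue = []

    def enqueue(self, packet):
        self.queue.append(packet)

    def congested(self):
        # Step 1: monitor the system to detect congestion.
        return len(self.queue) > THRESHOLD

class Sender:
    def __init__(self):
        self.rate = 10  # packets per tick (assumed starting rate)

    def on_feedback(self, congested):
        # Steps 2 and 3: receive the feedback and adjust operation.
        if congested:
            self.rate = max(1, self.rate // 2)  # back off
        else:
            self.rate += 1                      # probe for more bandwidth

router, sender = Router(), Sender()
for _ in range(sender.rate):
    router.enqueue("pkt")
sender.on_feedback(router.congested())
print(sender.rate)  # 10 packets exceed THRESHOLD, so the rate halves to 5
```

The halve-on-congestion, increment-otherwise policy mirrors the multiplicative-decrease, additive-increase idea that reappears in the TCP section later in this report.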
CONGESTION CONTROL TECHNIQUES
Congestion control techniques are broadly divided into two categories:-
1. Open loop congestion control (prevention):-
• Retransmission policy
• Window policy
• Acknowledgement policy
• Discarding policy
• Admission policy
Retransmission policy: A sender that retransmits too aggressively can worsen congestion, so the
retransmission policy and retransmission timers should be designed to prevent congestion while
keeping the transfer efficient.
Window policy: The type of window at the sender's end may also affect congestion. A Selective
Repeat window is better than a Go-Back-N window because with Selective Repeat only the lost
packets are resent, so there is no chance of duplication.
Acknowledgement policy: The receiver's acknowledgement policy can also affect congestion.
If the receiver does not acknowledge every packet it receives, it may slow down the sender,
thereby preventing congestion. Several approaches are used for this; sending fewer
acknowledgements means imposing less load on the network.
Discarding policy: A good discarding policy at the routers may prevent congestion and at the same
time not harm the integrity of the transmission, e.g. discarding less sensitive packets in an audio
transmission, thereby preserving the quality of the sound while preventing congestion.
Admission policy: Here the resource requirements of a flow are checked before it is admitted
to the network. If there is congestion, or a possibility of congestion in the future, the router
denies establishing the virtual circuit connection.
2. Closed loop congestion control (removal):-
Explicit Signaling: Here the node experiencing the congestion can explicitly signal the
source or destination. It differs from the choke packet technique in that the signal is included in a
packet that already carries data; no new packet is used. The signal can travel either forward or backward.
• Backward Signaling: The signal warns the source that there is congestion and that it needs
to slow down to avoid the discarding of packets.
• Forward Signaling: The signal warns the destination that there is congestion and that it needs to
slow down in sending the acknowledgements.
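The two signaling directions can be sketched by modeling packets as plain dictionaries; the field names below are invented for illustration and do not correspond to any real header format.

```python
# Illustrative sketch of explicit signaling: the congested node sets a
# flag inside a packet that is already carrying data or acknowledgements,
# rather than generating a new (choke) packet.

def mark_forward(packet):
    # Forward signaling: the flag travels with the data toward the destination.
    packet["congestion_experienced"] = True
    return packet

def mark_backward(ack):
    # Backward signaling: the flag rides an acknowledgement back to the source.
    ack["slow_down"] = True
    return ack

data = mark_forward({"payload": "hello", "congestion_experienced": False})
ack = mark_backward({"ack_no": 42, "slow_down": False})
print(data["congestion_experienced"], ack["slow_down"])  # True True
```

Real networks carry such a flag as a header bit (for example, ECN in IP); the dictionary here merely stands in for that header.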
LOAD SHEDDING:
When none of the above techniques makes the congestion disappear, routers can bring
out the heavy artillery: load shedding. It is one of the simplest and most effective techniques. In
this method, whenever a router finds that there is congestion in the network, it simply starts
dropping packets. There are different methods by which a router can decide which packets to
drop. The simplest is to choose the packets to be dropped at random.
More effective methods exist, but they require some cooperation from the sender. For
many applications, some packets are more important than others, so the sender can mark the
packets with priority classes to indicate how important they are. If such a priority policy is
implemented, then intermediate nodes can drop packets from the lower priority classes and use
the available bandwidth for the more important packets.
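The priority-based dropping just described can be sketched as follows. The `Packet` tuple and the queue limit are illustrative assumptions, not a real router interface.

```python
# A minimal sketch of priority-based load shedding: when the buffer is
# over its limit, drop packets from the lowest priority classes first.

from collections import namedtuple

Packet = namedtuple("Packet", "seq priority")  # higher number = more important
QUEUE_LIMIT = 4  # assumed buffer size

def shed(queue, limit):
    """Drop lowest-priority packets until the queue fits within the limit."""
    survivors = sorted(queue, key=lambda p: p.priority, reverse=True)[:limit]
    kept = {p.seq for p in survivors}
    # Preserve the original arrival order among the survivors.
    return [p for p in queue if p.seq in kept]

queue = [Packet(1, 2), Packet(2, 0), Packet(3, 1), Packet(4, 2), Packet(5, 0)]
queue = shed(queue, QUEUE_LIMIT)
print([p.seq for p in queue])  # [1, 2, 3, 4]: a priority-0 packet was shed
```

Random dropping would simply replace the `sorted` call with a random sample; the priority version keeps the "more important" packets at the cost of requiring the sender to mark them.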
EXAMPLE OF CONGESTION CONTROL
Congestion Control in TCP:-
Nowadays the sender's window is controlled not only by the receiver but also by the congestion in
the network. The sender has two pieces of information: the receiver-advertised window size (rwnd)
and the congestion window size (cwnd). The actual size of the sender's window is the minimum of
these two:
Actual window size = minimum (rwnd, cwnd)
TCP's general policy for handling congestion is based on three phases: slow start, congestion
avoidance and congestion detection.
Slow Start: Exponential Increase: In this algorithm the size of cwnd starts at one maximum
segment size (MSS), i.e. cwnd = 1 MSS, and the sender's window size equals cwnd as long as cwnd
is much smaller than rwnd. After every acknowledgement, cwnd is incremented by 1 MSS, so the
window doubles every round:
Start -- cwnd = 1
After round 1 -- cwnd = 2^1 = 2
After round 2 -- cwnd = 2^2 = 4
After round 3 -- cwnd = 2^3 = 8
In the case of delayed ACKs, the increase in the size of the window is less than a power of 2.
Slow start cannot grow indefinitely; there is a threshold at which this phase stops. Its value is
usually 65,535 bytes.
Congestion Avoidance: Additive Increase: This algorithm slows down the exponential
growth of the previous phase once the threshold has been reached, thereby avoiding congestion. The
window undergoes an additive increase, i.e. the size of the congestion window is increased by one
each time the whole window is acknowledged (each round), until congestion is detected. To
illustrate this, see the previous figure:-
Start -- cwnd = 1
After round 1 -- cwnd = 1+1=2
After round 2 -- cwnd = 2+1=3
After round 3 -- cwnd = 3+1=4
In this case, after the sender has received acknowledgements for a complete window of
segments, the size of the window is increased by one segment.
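The growth pattern of the two phases above can be sketched as a short simulation. This is a simplified illustration: real TCP counts bytes and reacts to losses and timeouts, and the ssthresh and rwnd values here are arbitrary assumptions.

```python
# Sketch of cwnd growth per round: exponential (slow start) below the
# threshold, additive (congestion avoidance) above it. Units are whole
# segments for readability.

def next_cwnd(cwnd, ssthresh):
    """cwnd after one round in which the whole window was acknowledged."""
    if cwnd < ssthresh:
        return cwnd * 2   # slow start: doubles every round
    return cwnd + 1       # congestion avoidance: grows by one segment

cwnd, ssthresh, rwnd = 1, 8, 64  # assumed values
history = []
for _ in range(6):
    history.append(cwnd)
    cwnd = next_cwnd(cwnd, ssthresh)

print(history)                 # [1, 2, 4, 8, 9, 10]
print(min(rwnd, history[-1]))  # actual send window = minimum(rwnd, cwnd) = 10
```

Note how the trace reproduces both tables above: 1, 2, 4, 8 during slow start, then +1 per round once the threshold of 8 segments is reached.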
CONGESTION CONTROL AT ROUTERS
Priority Queuing: Packets are first marked with a priority. The router implements multiple FIFO
queues, one for each priority class, and always transmits from the highest-priority non-empty
queue; it does not serve a lower-priority queue until every higher-priority queue is empty. This is
better than plain FIFO because higher-priority data is transferred first.
Problem: high-priority packets can 'starve' lower-priority packets.
One practical use in the Internet is to protect routing update packets by giving them a higher
priority and a special queue at the router.
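The serving rule above can be sketched with one FIFO per class. The two-class setup and packet labels are illustrative assumptions.

```python
# Sketch of strict priority queuing: one FIFO queue per priority class;
# always serve the highest-priority non-empty queue.

from collections import deque

queues = {0: deque(), 1: deque()}  # 0 = high priority, 1 = low priority

def enqueue(priority, packet):
    queues[priority].append(packet)

def dequeue():
    # Scan classes from highest priority downward; note that class 1
    # traffic starves for as long as class 0 stays non-empty.
    for priority in sorted(queues):
        if queues[priority]:
            return queues[priority].popleft()
    return None

enqueue(1, "bulk-1")
enqueue(0, "routing-update")
enqueue(1, "bulk-2")

first, second = dequeue(), dequeue()
print(first, second)  # routing-update bulk-1
```

The routing update jumps ahead of the earlier-arriving bulk packet, which is exactly the protection described above, and also exactly the starvation risk.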
Fair Queuing: The basic problem with FIFO is that it does not discriminate between different
packet sources; an "ill-behaved" flow can capture an arbitrarily large share of the network's
capacity. The Fair Queuing (FQ) algorithm was therefore introduced: it maintains a separate queue
for each flow and services these queues in a round-robin fashion.
(Figure: Flows 1-4, each with its own queue, receiving round-robin service)
Weighted Fair Queuing (WFQ): Here we assign a weight to each flow (queue); the weight
logically specifies the number of bits to transmit each time the router services that queue. A
higher priority means a higher weight. This controls the percentage of the link capacity that the
flow will receive. If the weights are 3, 2 and 1, then three packets are processed from the first
queue, two from the second and one from the third.
If the system imposed no priorities, the weights would all be the same.
• An issue: how does the router learn of the weight assignments?
– Manual configuration
– Signaling from sources or receivers.
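The 3-2-1 service pattern described above can be sketched as weighted round-robin, a simple packet-level approximation of WFQ; the queue contents and weights are illustrative.

```python
# Sketch of weighted round-robin service: per round, take up to
# weight[i] packets from queue i before moving on to the next queue.

from collections import deque

weights = [3, 2, 1]
queues = [deque(f"q{i+1}p{j}" for j in range(4)) for i in range(3)]

def serve_one_round(queues, weights):
    sent = []
    for queue, weight in zip(queues, weights):
        for _ in range(weight):
            if queue:
                sent.append(queue.popleft())
    return sent

sent = serve_one_round(queues, weights)
print(sent)  # ['q1p0', 'q1p1', 'q1p2', 'q2p0', 'q2p1', 'q3p0']
```

With equal weights this degenerates to plain Fair Queuing; true WFQ additionally accounts for variable packet sizes, which this packet-count sketch ignores.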
TRAFFIC SHAPING
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent into the
network. There are two methods of doing this:
Leaky Bucket: Consider a bucket with a small hole at the bottom: whatever the rate at which
water pours into the bucket, the rate at which water comes out of the small hole is constant.
This scenario is depicted in figure 6(a). Once the bucket is full, any additional water entering it
spills over the sides and is lost (i.e. it does not appear in the output stream through the hole
underneath).
The same idea of the leaky bucket can be applied to packets, as shown in Fig. 6(b). Conceptually,
each network interface contains a leaky bucket, and the following steps are performed:
• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
• Bursty traffic is thus converted to uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.
This arrangement can be simulated in the operating system or built into the hardware. The
implementation of this algorithm is easy and consists of a finite queue: whenever a packet
arrives, if there is room in the queue it is queued up; otherwise the packet is discarded.
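The finite-queue implementation just described can be sketched as follows; the bucket size and leak rate are illustrative assumptions.

```python
# Sketch of a leaky bucket: a finite queue drained at a constant rate,
# so bursty arrivals leave as a smooth stream and overflow is discarded.

from collections import deque

BUCKET_SIZE = 5   # finite queue: arrivals beyond this are discarded
LEAK_RATE = 2     # packets transmitted per tick, regardless of input

bucket = deque()
dropped = 0

def arrive(packets):
    global dropped
    for p in packets:
        if len(bucket) < BUCKET_SIZE:
            bucket.append(p)   # room in the queue: packet is queued up
        else:
            dropped += 1       # bucket full: packet spills over and is lost

def leak():
    # Constant-rate output, independent of how bursty the input was.
    return [bucket.popleft() for _ in range(min(LEAK_RATE, len(bucket)))]

arrive(range(8))   # a burst of 8 packets hits a bucket of size 5
out = leak()
print(out, len(bucket), dropped)  # [0, 1] 3 3
```

Note the rigidity: no matter how large the burst, at most LEAK_RATE packets leave per tick, which is precisely the limitation the token bucket below relaxes.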
Fig.6 (a) A leaky bucket with water. (b) A leaky bucket with packets
Token Bucket: The leaky bucket algorithm described above enforces a rigid pattern on the output
stream, irrespective of the pattern of the input. For many applications it is better to allow the output
to speed up somewhat when a large burst arrives than to lose the data. The token bucket algorithm
provides such a solution: here the bucket holds tokens, generated at regular intervals, and a packet
may be transmitted only by consuming a token.
Figure 7 shows the two scenarios before and after the tokens present in the bucket have been
consumed. In Fig. 7(a) the bucket holds two tokens and three packets are waiting to be sent out of
the interface; in Fig. 7(b) two packets have been sent out by consuming two tokens, and one packet
is still left.
The token bucket algorithm is less restrictive than the leaky bucket algorithm in the sense that it
allows bursty traffic. However, the size of a burst is limited by the number of tokens available in the
bucket at that instant of time.
The implementation of the basic token bucket algorithm is simple: a variable is used to count the
tokens. This counter is incremented every t seconds and decremented whenever a packet is sent.
When the counter reaches zero, no further packets are sent, as shown in Fig. 7.
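The counter-based implementation just described can be sketched as follows; the bucket capacity and tick counts are illustrative assumptions.

```python
# Sketch of a token bucket: a counter incremented once per tick (every
# t seconds) up to the bucket capacity, decremented per packet sent;
# when it reaches zero, transmission stops.

class TokenBucket:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = 0

    def tick(self):
        # One token is generated per interval, up to the bucket size.
        self.tokens = min(self.capacity, self.tokens + 1)

    def try_send(self):
        # A packet may leave only by consuming a token.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=4)
for _ in range(6):
    bucket.tick()   # 6 ticks pass, but only 4 tokens can accumulate

sent = sum(bucket.try_send() for _ in range(6))
print(sent)  # 4: a burst of up to capacity packets, then transmission stops
```

Saved-up tokens are what permit the burst; the capacity caps the burst size, matching the limit stated above.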
CONCLUSION
With the development of the Internet, more and more real-time multimedia services
employing non-congestion-controlled protocols (usually UDP) have come to constitute a major
share of Internet traffic. This may lead to network breakdown.
In order to avoid such situations we take control measures, which are either open loop
(preventive) or closed loop (reactive). If these are unable to overcome the congestion, we can
fall back on the load shedding technique.
Traffic shaping techniques are also very helpful in avoiding congestion. As it is always
preferable to prevent than to detect and cure, traffic shaping should be applied in networks.
REFERENCES