
Lecture 24

Packet Scheduling

Inside a router there are input ports, output ports, and a switching fabric; not every input line can be connected directly to an output at the same time.
Algorithms that allocate router resources among the packets of a flow and between competing
flows are called packet scheduling algorithms.
The simplest scheme is FIFO or FCFS (First-Come First-Serve): until a packet can be serviced it is kept in a buffer (queue), and when the queue is full newly arriving packets are simply dropped (tail drop).
From the slides:
Each router buffers packets in a queue for each output line until they can be sent, and they are
sent in the same order that they arrived. This algorithm is known as FIFO (First-In First-Out), or
equivalently FCFS (First-Come First-Serve).
FIFO routers usually drop newly arriving packets when the queue is full. Since the newly arrived
packet would have been placed at the end of the queue, this behavior is called tail drop.
Any drawbacks to FIFO when there are multiple flows? (SIR Q)
When there are competing flows, how does FIFO divide resources between them? A host that sends more gets more resources than one that sends less, and this should not happen.
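As a rough sketch (my own, not from the lecture), FIFO with tail drop could look like this in Python; the capacity parameter and the packet objects are assumptions, not anything specified in the notes:

from collections import deque

class FifoQueue:
    # FIFO output queue with tail drop; 'capacity' is an assumed parameter
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return False              # queue full: the newly arrived packet is tail-dropped
        self.queue.append(packet)
        return True

    def dequeue(self):
        # packets leave in the same order they arrived (FCFS)
        return self.queue.popleft() if self.queue else None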

Round Robin Scheduling


Many packet scheduling algorithms have been devised that provide stronger isolation between flows and thwart attempts at interference.
Fair queueing: we keep one queue per flow. We service the first packet from flow 1, then flow 2, and so on, and then start over. If a flow does not have a packet queued we skip it; otherwise we serve that packet.
This provides isolation, but there is still an issue. What is it? (SIR Q)
Say 3 flows all send similar packets, each with transmission time t: then each flow takes t out of every 3t, i.e. 1/3 of the bandwidth.
Now say flow 1 has double the packet size (transmission time 2t) of flows 2 and 3 (transmission time t). Since whole packets are served, each round takes 2t + t + t = 4t, so flow 1 takes 2t/4t = half the bandwidth while the other two flows get a quarter each.
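A minimal sketch of this per-flow round robin (my own; the flow contents are just illustrative labels):

from collections import deque

def round_robin(flows):
    # flows: one deque per flow; yields packets in round-robin order
    while any(flows):
        for q in flows:
            if q:                    # skip flows that have no packet queued
                yield q.popleft()    # serve one whole packet, then move to the next flow

# Flow 1 sends 2t-sized packets, flows 2 and 3 send t-sized packets:
flows = [deque(["A1(2t)", "A2(2t)"]),
         deque(["B1(t)", "B2(t)"]),
         deque(["C1(t)", "C2(t)"])]
print(list(round_robin(flows)))  # A1, B1, C1, A2, B2, C2 -> flow 1 still gets half the link time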

Byte-by-Byte Round Robin


Go to each flow and send out 1 byte instead of 1 packet. If there are n flows, each flow gets 1/n of the bandwidth irrespective of how big its packets are.
But we can't send 1 byte at a time; we can only send a whole packet (or quantum).
So we compute the virtual finishing time for the packet at the head of each queue, i.e. when it would finish if we were able to send byte by byte. Packets are then sorted in ascending order of finishing time and sent in that order.
I guess this virtual-finish-time trick is not exactly a literal translation of the byte-by-byte transmission.
What drawback does this have? It gives all hosts the same priority, but in some cases we might want different priorities for different hosts.
Basically: compute the virtual finishing times, then whichever packet has the shortest one is sent first, and so on. This is not round robin (cyclic) in any way; I'm guessing it is trying to emulate byte-by-byte RR (a rough sketch follows below).
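A small sketch of the finish-time bookkeeping for one flow (my own illustration; this is the unweighted case, i.e. W = 1 in the formula of the next section):

def finish_times(packets, prev_finish=0.0):
    # packets: list of (arrival_time, length_in_bytes) for ONE flow, in arrival order
    # returns the virtual finish time of each packet under byte-by-byte emulation
    out = []
    for arrival, length in packets:
        prev_finish = max(arrival, prev_finish) + length
        out.append(prev_finish)
    return out

# The scheduler keeps these per flow and always transmits the whole packet whose virtual
# finish time is smallest, approximating byte-by-byte RR without splitting any packet.
print(finish_times([(0, 100), (0, 100), (250, 60)]))  # [100, 200, 310]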

Weighted Fair Queueing

One shortcoming of the previous algorithm in practice is that it gives all hosts the same priority.
In many situations, it is desirable to give, for example, video servers more bandwidth than, say,
file servers.
The finishing time is now computed in a different manner. Let the number of bytes per round be the weight of the flow, W. Then

Fi = max(Ai, Fi−1) + Li / W

where Ai is the arrival time, Fi is the finish time, and Li is the length of packet i (Fi−1 is the finish time of the previous packet of the same flow).
Again, this is not round robin: at each point the packet with the smallest virtual finishing time is sent out.
Example
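A minimal sketch of how WFQ could be simulated (my own, assuming per-flow state and a min-heap keyed on the virtual finish time; none of these names come from the lecture):

import heapq

class WFQScheduler:
    def __init__(self, weights):
        self.weights = weights                        # weights[flow_id] = W (bytes per round)
        self.last_finish = {f: 0.0 for f in weights}  # previous finish time per flow
        self.heap = []                                # entries: (finish time, seq, flow_id, packet)
        self.seq = 0                                  # tie-breaker for equal finish times

    def enqueue(self, flow_id, arrival, length, packet):
        prev = self.last_finish[flow_id]
        finish = max(arrival, prev) + length / self.weights[flow_id]  # Fi = max(Ai, Fi−1) + Li / W
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow_id, packet))  # O(log n) insert
        self.seq += 1

    def dequeue(self):
        # always transmit the packet with the smallest virtual finish time
        return heapq.heappop(self.heap)[3] if self.heap else None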
Overhead: we have to maintain this data structure, a priority queue, so we can always get the minimum finish time; that costs O(log n) per packet.
