Lecture 12: Routers
Vincent Liu
Spring 2018
Recap: Congestion Control
• "Please slow down!"
• Sliding window at receiver:
RWND = ReceiveBuffer − (LastByteReceived − LastByteRead)
[Figure: window size over time — exponential increase ("slow start") until loss]
Why AIMD?
• Every RTT, we can do one of:
• Multiplicative increase or decrease: CWND → a·CWND
• Additive increase or decrease: CWND → CWND + b
• Four alternatives:
• AIAD: gentle increase, gentle decrease
• AIMD: gentle increase, drastic decrease
• MIAD: drastic increase, gentle decrease
• MIMD: drastic increase and decrease
Simple model of congestion control
• Two users with rates x1 and x2
• Efficiency line: x1 + x2 = 1
• Fairness line: x1 = x2
• e.g., x1 + x2 = 0.7 is inefficient; x1 + x2 = 1.2 is congested; (0.7, 0.3) sits on the efficiency line but is not fair
[Figure: phase plot of x2 vs. x1 showing the efficiency line, the fairness line, and the inefficient/congested regions on either side of the efficiency line]
AIAD
• Increase: x + aI
• Decrease: x − aD
• Does not converge to fairness
[Figure: phase plot — from (x1, x2), additive decrease moves to (x1−aD, x2−aD) and additive increase to (x1−aD+aI, x2−aD+aI); the trajectory stays parallel to the fairness line, so the gap between the two rates never closes]
AIAD Sharing Dynamics
[Figure: rates x1 (flow A→B) and x2 (flow D→E) plotted over time; under AIAD the two rates oscillate in lockstep and the initial gap between them persists — they never equalize]
MIMD
• Increase: x·bI
• Decrease: x·bD
• Does not converge to fairness
[Figure: phase plot — from (x1, x2), multiplicative steps move along the line through the origin, so the ratio x1/x2 never changes]
MIAD
• Increase: x·bI
• Decrease: x − aD
• Does not converge to fairness
• Does not converge to efficiency
• "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks" -- Chiu and Jain
[Figure: phase plot — from (x1h, x2h), additive decrease moves to (x1h−aD, x2h−aD), then multiplicative increase to (bI(x1h−aD), bI(x2h−aD)); the trajectory drifts away from the fairness line]
AIMD
• Increase: x + aI
• Decrease: x·bD
• Converges to fairness and efficiency
[Figure: phase plot — from (x1, x2), multiplicative decrease moves toward the origin along the line through it, shrinking the gap between the rates; additive increase then moves parallel to the fairness line; the trajectory spirals in toward the intersection of the efficiency and fairness lines]
AIMD Sharing Dynamics
• Two flows, A→B (x1) and D→E (x2), share a 50 packets/sec bottleneck
• Rates equalize → fair share
[Figure: x1 and x2 over time; the AIMD sawtooths converge until both flows oscillate around an equal share of the bottleneck]
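The convergence behavior above can be sketched in a few lines of simulation; the step sizes, capacity, and starting rates below are made-up illustrative values, not anything from the lecture.

```python
def simulate_aimd(x1, x2, capacity=1.0, a_i=0.01, b_d=0.5, steps=500):
    """Each step both flows additively increase by a_i; when the link
    is congested (x1 + x2 > capacity), both multiplicatively decrease."""
    for _ in range(steps):
        x1 += a_i
        x2 += a_i
        if x1 + x2 > capacity:
            x1 *= b_d
            x2 *= b_d
    return x1, x2

# Starting far apart, the rates converge: multiplicative decrease shrinks
# the gap between them, while additive increase leaves the gap unchanged.
x1, x2 = simulate_aimd(0.7, 0.1)
print(abs(x1 - x2))  # close to 0
```

Replacing the decrease with `x -= a_d` (AIAD) preserves the gap forever, which is exactly the non-convergence shown in the AIAD dynamics figure.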
Efficiency vs. Fairness
• Cannot always have both!
• Example network with traffic A→B, B→C and A→C
• How much traffic can we carry?
[Topology: A — B — C, with capacity 1 on each of the two links]
Efficiency vs. Fairness (2)
• If we care about fairness:
• Give equal bandwidth to each flow
• A→B: ½ unit, B→C: ½, and A→C: ½
• Total traffic carried is 1½ units
[Topology: A — B — C, with capacity 1 on each link]
Efficiency vs. Fairness (3)
• If we care about efficiency:
• Maximize total traffic in network
• A→B: 1 unit, B→C: 1, and A→C: 0
• Total traffic rises to 2 units!
[Topology: A — B — C, with capacity 1 on each link]
Max-Min fairness
• Given a set of bandwidth demands r_i and total bandwidth C, the max-min bandwidth allocations are:
• a_i = min(f, r_i)
• where f is the unique value such that Sum(a_i) = C
• If you don't get your full demand, no one gets more than you
• This is what round-robin service gives if all packets are the same size
Computing Max-Min Fairness
• To find it given a network, imagine “pouring water
into the network”
1. Start with all flows at rate 0
2. Increase the flows until there is a new bottleneck in
the network
3. Hold fixed the rate of the flows that are bottlenecked
4. Go to step 2 for any remaining flows
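The water-filling procedure above, specialized to a single shared link, can be sketched as follows; the demand and capacity values in the example are hypothetical.

```python
def max_min_allocation(demands, capacity):
    """Water-filling on one link: flows whose demand is below the fair
    share f get their full demand; the freed capacity is redistributed,
    and all remaining (bottlenecked) flows end up with equal shares."""
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        i = remaining[0]
        if demands[i] <= share:
            # Satisfied below the fair share: grant demand, free the rest.
            alloc[i] = demands[i]
            cap -= demands[i]
            remaining.pop(0)
        else:
            # Everyone left is bottlenecked at f = share.
            for j in remaining:
                alloc[j] = share
            break
    return alloc

print(max_min_allocation([0.1, 0.6, 2.0], 1.0))  # → [0.1, 0.45, 0.45]
```

Note how the small flow keeps its full 0.1 and the other two split the remaining 0.9 equally, matching the a_i = min(f, r_i) definition.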
ACK Clocking
ACK Clocking
• Consider what happens when sender injects a burst
of segments into the network
[Figure: the burst queues at the slow link; the segments "spread out" in time as they drain, and the returning ACKs maintain that spread]
ACK Clocking (4)
• Sender clocks new segments with the spread
• Now sending at the bottleneck link without queuing!
ACK clocking mitigates bursts
• Helps the network run with low levels of loss and
delay!
• Notion of TCP-friendliness
This Lecture
• Router design
• Queueing/Scheduling
• Router-assisted congestion control
What an IP router does:
The normal case
1. Receive incoming packet from link input interface
2. Lookup packet destination in forwarding table
(destination, output port(s))
3. Validate checksum, decrement TTL, update checksum
4. Buffer packet in input queue
5. Send packet to output interface (interfaces?)
6. Buffer packet in output queue
7. Send packet to output interface link
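Step 3 above can be sketched on a raw IPv4 header. The 20-byte example header below is made up for illustration; the checksum arithmetic is the standard one's-complement sum over 16-bit words.

```python
def checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words (IPv4 header checksum)."""
    s = 0
    for i in range(0, len(header), 2):
        s += (header[i] << 8) | header[i + 1]
        s = (s & 0xFFFF) + (s >> 16)  # fold the carry back in
    return ~s & 0xFFFF

def forward_step(header: bytearray) -> None:
    # Validate: summed with the checksum field included, a good header gives 0.
    assert checksum(bytes(header)) == 0, "corrupt header"
    header[8] -= 1                        # TTL lives at byte offset 8
    header[10:12] = b"\x00\x00"           # zero the old checksum...
    header[10:12] = checksum(bytes(header)).to_bytes(2, "big")  # ...recompute

# Hypothetical 20-byte header: TTL=64, protocol=TCP, 10.0.0.1 -> 10.0.0.2
hdr = bytearray.fromhex("450000140000000040060000" "0a000001" "0a000002")
hdr[10:12] = checksum(bytes(hdr)).to_bytes(2, "big")  # fill in a valid checksum
forward_step(hdr)
print(hdr[8])  # TTL is now 63
```

Real routers use an incremental checksum update for the TTL decrement rather than recomputing over the whole header, but the result is the same.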
Many types of routers
• Core
• R = 10/40/100 Gbps
• NR = O(100) Tbps (Aggregated)
• Edge
• R = 1/10/40 Gbps
• NR = O(100) Gbps
• Small business
• R = 10/100/1000 Mbps
• NR < 10 Gbps
What’s inside a router?
[Figure: N linecards (ports 1…N) connected through an interconnect (switching) fabric, with a route/control processor in the control plane]
What’s inside a router?
• Linecards
• Input linecards process packets on their way in
• Output linecards process packets on their way out
• Input and output for the same port are on the same
physical linecard
• Interconnect/switching fabric
• Transfers packets from input to output ports
Primary responsibilities
1. Determining the appropriate output port
• IP Prefix Lookup
• Challenge: speed!
• 100B packets @ 40Gbps → new packet every 20 ns!
• Typically implemented with specialized ASICs (network
processors)
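The prefix-lookup step can be illustrated with a simple binary trie. Real linecards use TCAMs or compressed tries in ASICs, but the longest-match rule is the same; the prefixes and port names below are made up.

```python
class TrieNode:
    def __init__(self):
        self.children = [None, None]  # one child per bit
        self.port = None              # output port if a prefix ends here

def insert(root, prefix_bits, port):
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.port = port

def lookup(root, addr_bits):
    """Walk the trie, remembering the last (i.e., longest) matching prefix."""
    best, node = None, root
    for b in addr_bits:
        if node.port is not None:
            best = node.port
        node = node.children[b]
        if node is None:
            return best
    return node.port if node.port is not None else best

root = TrieNode()
insert(root, [1, 0], "port1")        # short prefix 10/2
insert(root, [1, 0, 1, 1], "port2")  # longer prefix 1011/4
print(lookup(root, [1, 0, 1, 1, 0, 0]))  # longest match wins → port2
print(lookup(root, [1, 0, 0, 1]))        # falls back to 10/2 → port1
```

The speed challenge in the slide comes from doing this walk (up to 32 bits for IPv4) within a 20 ns packet budget, which is why it is pushed into specialized hardware.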
This Lecture
• Router design
• Queueing/Scheduling
• Router-assisted congestion control
How do we handle multiple inputs/outputs?
[Figure: the same router diagram — N ports attached to the interconnect (switching) fabric]
Interconnect Fabric
• Where do we need queues?
[Figure: input ports connected through the fabric to output ports]
• If inputs and outputs are the same speed, there can be contention
• Location of queue depends on the speed of the linecards/fabric
Shared memory
• One giant block of memory
• N writes, N reads every ‘tick’
• Pros
• Simple to use
• Ability to dynamically carve up available space
• Cons
• Fast memory is expensive
• Size usually limited
Output queuing
• Output interfaces buffer packets
[Figure: output-queued switch — packets cross the fabric immediately and queue at the output ports]
• Pros
• Faster/cheaper than shared memory
• Single congestion point
• Cons
• N inputs may send to the same output
• Still requires speedup of N
• i.e., output ports must run N times quicker than input ports to handle the worst case
Input queuing
• Input interfaces buffer packets
[Figure: input-queued switch — packets queue at the input ports before crossing the fabric]
• Pros
• Single congestion point
• 1 input, 1 output per ‘tick’
• Cons
• Must implement flow control
• Low utilization due to Head-of-Line (HoL) Blocking
Head-of-Line Blocking
Problem: The packet at the front of the queue experiences contention for
the output queue, blocking all packets behind it.
[Figure: 3×3 switch — two inputs both have head packets destined for Output 1; only one wins the output, and the loser’s whole queue stalls even though its next packet is destined for an idle output]
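A toy tick-level simulation makes the blocking concrete; the 3×3 scenario below is invented to mirror the figure, with packets labeled by their destination output.

```python
from collections import deque

def run(input_queues, ticks):
    """Each tick, every input offers only the packet at the head of its
    FIFO queue; each output accepts at most one packet, and an input that
    loses the contention blocks everything queued behind its head."""
    delivered = 0
    for _ in range(ticks):
        claimed = set()
        for q in input_queues:
            if q and q[0] not in claimed:
                claimed.add(q.popleft())
                delivered += 1
    return delivered

# Inputs 1 and 2 both want Output 1 first; Input 2's packet for the idle
# Output 2 is stuck behind its head packet, so only 2 of 3 packets cross.
queues = [deque([1]), deque([1, 2]), deque([3])]
print(run(queues, 1))  # → 2
```

With virtual output queues (one queue per output at each input) the blocked packet could bypass the stalled head, which is how real input-queued switches recover the lost utilization.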
Typical queue management policy
• Access to the bandwidth: first-in first-out queue
• Packets only differentiated when they arrive
[Figure: arrival traffic for two flows over time; an idealized fluid-flow system serves both flows simultaneously bit by bit, while the packet-level FQ system approximates it by transmitting whole packets in order of their fluid-system finish times]
Fair Queuing (FQ)
• Implementation of round-robin generalized to the
case where not all packets are equal sized
• Weighted fair queuing (WFQ): assign different flows
different shares
• Today, some form of WFQ implemented in almost
all routers
• Not the case in the 1980-90s, when CC was being
developed
• Mostly used to isolate traffic at larger granularities (e.g.,
per-prefix)
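One way to see how FQ generalizes round-robin to unequal packet sizes: serve packets in order of their virtual finish times from the fluid-flow system. A minimal sketch for flows that are all backlogged from time 0 (flow names and packet sizes below are made up):

```python
def fq_order(flows):
    """flows: dict flow_id -> list of packet sizes, all backlogged.
    For backlogged flows, a packet's virtual finish time is just the
    cumulative bytes its flow has sent; transmit in finish-time order."""
    sched = []
    for fid, pkts in flows.items():
        finish = 0
        for seq, size in enumerate(pkts):
            finish += size  # virtual finish time of this packet
            sched.append((finish, fid, seq, size))
    sched.sort()
    return [(fid, size) for _, fid, _, size in sched]

# Flow A sends large packets, flow B small ones; FQ interleaves them so B
# isn't stuck behind A's big packets the way it would be under FIFO.
order = fq_order({"A": [1000, 1000], "B": [500, 500, 500]})
print(order)
```

A full implementation also tracks the system-wide virtual time so flows that arrive later start from the current round rather than zero, but the serve-by-finish-time idea is the same.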
FQ vs. FIFO
• FQ advantages:
• Isolation: cheating flows don’t benefit
• Bandwidth share does not depend on RTT
• Flows can pick any rate adjustment scheme they want
• Disadvantages:
• More complex than FIFO: per flow queue/state,
additional per-packet book-keeping
FQ in the big picture
• FQ does not eliminate congestion → it just manages the congestion
[Figure: Blue and Green flows share a 1 Gbps link, so FQ gives each 0.5 Gbps and any excess is dropped; the Green flow then crosses a 100 Mbps link. If the green flow doesn’t drop its sending rate to 100 Mbps, we’re wasting 400 Mbps that could be usefully given to the blue flow]
FQ in the big picture
• FQ does not eliminate congestion → it just manages the congestion
• Robust to cheating, variations in RTT, details of delay,
reordering, retransmission, etc.
• But congestion (and packet drops) still occurs
• We still want end-hosts to discover/adapt to their
fair share!
• What would the end-to-end argument say w.r.t.
congestion control?
Fairness is a controversial goal
• What if you have 8 flows, and I have 4?
• Why should you get twice the bandwidth?
• What if your flow goes over 4 congested hops, and
mine only goes over 1?
• Why shouldn’t you be penalized for using more scarce
bandwidth?
• What is a flow anyway?
• TCP connection?
• Source-Destination pair?
• Source?
Router-Assisted Congestion
Control
• CC has three different tasks:
• Isolation/fairness
• Rate adjustment
• Detecting congestion
Why not let routers tell end hosts
what rate to use?
• Packets carry “rate field”
• Routers insert “fair share” f in packet header
• End-hosts set sending rate (or window size) to f
• Hopefully (still need some policing of end hosts!)
• This is the basic idea behind the “Rate Control
Protocol” (RCP) from Dukkipati et al. ’07
• Flows react faster