
ECTE482/882/982 Network Engineering

IP Traffic Management
& QoS Concepts
Dr. Le Chung Tran
Email: LCTRAN@UOW.EDU.AU
Room: 35.G32. Ext: 3846
Consultation: Mon 09.30am – 11.30am (via email)
Tue 09.30am – 11.30am
(Students are advised to email for an appointment)

1
This lecture
• Traffic parameters
• Policing mechanisms
- Leaky bucket
- Token bucket
• Scheduling algorithms
- FIFO
- Priority queuing
- Round robin, and Weighted round robin
- Fair queuing, and Weighted fair queuing
• Two main architectures to provide QoS guarantees:
IntServ and DiffServ

2
QoS and IP networks
• QoS:
– network ability to provide better service to selected traffic over various underlying
technologies, incl., ATM, Ethernet, WiFi, IP-routed networks
– a collection of techniques that allows applications to request and receive predictable service
levels (e.g., bandwidth, delay, jitter, loss)
• IP was originally designed to provide a ‘best effort’ service (default QoS)
– IP routers do not distinguish between different packets and treat them all the same
– Other people’s traffic can affect the QoS perceived by a particular user
– No encouragement for any user to limit traffic for the good of all
• Solution: multiple classes of service (‘best effort’ is not enough)
– partition traffic into multiple classes
– network treats different traffic classes differently, based on
three main principles of treatment (see next slides)
• Implementation: IETF defines 2 architectures for providing QoS
– Integrated Service (IntServ) and
– Differentiated Service (DiffServ)

3
Scenario 1: mixed FTP + audio
• An IP phone flow & an FTP flow share a 2 Mbps link
– FTP bursts can congest routers, causing audio loss
– How to give priority to audio?

Principle 1
[Figure: phone and FTP flows sharing a link through routers R1 and R2]
- packet marking is required for routers to distinguish between different traffic classes; and
- new router policy is needed to treat packets accordingly

• Q: How to mark pkts?
• A: By setting an appropriate value in
− the ToS field (8 bits) in the IPv4 header
− the Traffic Class field (8 bits) in the IPv6 header

4
Scenario 2: Traffic Isolation
• What if applications misbehave (e.g., audio sends at a higher rate than
declared, starving the FTP user)?
– Policing is needed to force each source to obey its bandwidth allocation
• marking and policing are done at the network edge

[Figure: phone flow policed to 1 Mbps; packet marking and policing at R1; R1–R2 is a 2 Mbps link]

Principle 2
Policing is needed to provide protection (isolation) for one flow from
others so that one flow is not adversely affected by another
misbehaving flow
5
Principles for QoS Guarantees (more)
• Allocating fixed (non-sharable) bandwidth to flows: inefficient use
of bandwidth if a flow doesn’t use its allocation

[Figure: the 2 Mbps link between R1 and R2 is split into a 1 Mbps logical link for the phone flow and a 0.5 Mbps logical link]

Principle 3
while providing isolation, it is desirable to use resources as efficiently
as possible (i.e., a common resource pool can be shared between
multiple flows – via scheduling)

6
Basic concepts – traffic flow
• Flow: a chain of pkts from a sending application to a
receiving application (one direction) traversing network
elements, all covered by the same request of QoS*.
– a video stream is a traffic flow

– all the packets belonging to the same http request are in a traffic flow

• Packets with the same 5-tuple
<Source IP, Dest IP, Source Port, Dest Port, Protocol>
belong to the same flow.
• A flow can also represent aggregated traffic from multiple
sources
* Jha, S., Hassan, M. "Engineering Internet QoS", Artech House, 2002.
7
Traffic Parameters
Three main parameters characterise a variable rate traffic flow:
• Average Rate: how many pkts can be sent per unit time (in the long run)
– crucial question: over what interval is the average taken? 10 pkts/sec and
600 pkts/min give the same average, but allow very different burstiness!
• Peak Rate: max rate transmitted in a time unit, e.g., 600 pkts/min avg.;
1000 pkts/min peak rate
• (Max.) Burst Size: max. number of pkts sent consecutively (with no
intervening idle)
[Figure: transmission speed vs time, showing the Peak Rate, the Average Rate, and the Max Burst Size (pkts)]

8
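The three parameters can be estimated from a packet-timestamp trace. A minimal sketch (the helper name and the window/idle-gap thresholds are illustrative choices, not from the lecture):

```python
# Sketch: estimating average rate, peak rate and max burst size
# from a list of packet arrival timestamps (seconds).

def traffic_params(timestamps, window=1.0, idle_gap=0.1):
    """Average rate (pkts/s) over the whole trace, peak rate over any
    `window`-second interval, and max burst = longest run of packets
    with every inter-arrival gap below `idle_gap` seconds."""
    duration = timestamps[-1] - timestamps[0]
    avg_rate = len(timestamps) / duration

    # peak rate: most packets seen in any sliding window
    peak, start = 0, 0
    for end in range(len(timestamps)):
        while timestamps[end] - timestamps[start] > window:
            start += 1
        peak = max(peak, end - start + 1)

    # max burst: longest consecutive run with no intervening idle
    burst = longest = 1
    for prev, cur in zip(timestamps, timestamps[1:]):
        burst = burst + 1 if cur - prev < idle_gap else 1
        longest = max(longest, burst)
    return avg_rate, peak / window, longest

# 10 back-to-back pkts, then two isolated pkts
ts = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 5.0, 6.0]
print(traffic_params(ts))
```

Note how the trace has the same average rate however long the idle period is, while the peak rate and burst size expose the burstiness.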
Policing Mechanisms

• Policing (or shaping): mechanisms that limit traffic so it does not
exceed its declared parameters (e.g., rate, burst size)
• Two common shaping/policing algorithms: Leaky bucket
and Token bucket
– a leaky bucket is sufficient to shape/police the peak rate
– a token bucket is used for the average rate and burst size

9
Leaky Bucket

• Pkts leave equally spaced at a constant rate (called the leak rate),
even if the arriving pkts are generated in a bursty manner
• The bucket can hold b pkts – b is called the bucket depth
– b determines how large a burst can be tolerated without loss
• Pkts may be dropped if the bucket is full (thus incurring packet loss)
• A more flexible, no-packet-loss algorithm is required for many
applications
[Figure: bursty arrivals entering a bucket of depth b (bytes); departures at a constant rate r]

10
Leaky Bucket Implementation
[Figure: flowchart – arriving pkts enter a queue of depth b; if the queue is full, the pkt is discarded; otherwise pkts are removed from the queue at a constant rate]

• Shapes bursty traffic into fixed-rate traffic
• Assume: leak rate = 3 Mbps; bucket depth = b (Mb)
[Figure: input is bursty data with 12 Mbps and 2 Mbps bursts over 0–10 s; output data rate never exceeds 3 Mbps]
• Peak rate at the output never exceeds 3 Mbps
• What is bmin in this case?

11
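The behaviour above can be simulated tick by tick. A discrete-time sketch (one tick = one second; the input burst pattern and depth value are illustrative, not the slide's exact figure):

```python
# Discrete-time leaky-bucket sketch: arrivals that find the bucket
# full are dropped; the output drains at a constant leak rate
# regardless of input burstiness.

def leaky_bucket(arrivals, leak_rate, depth):
    backlog, out, dropped = 0, [], 0
    for a in arrivals:
        accepted = min(a, depth - backlog)   # room left in the bucket
        dropped += a - accepted              # overflow is lost
        backlog += accepted
        sent = min(backlog, leak_rate)       # constant-rate departure
        out.append(sent)
        backlog -= sent
    return out, dropped

# a 12-unit burst, then 2 units, then idle; leak rate 3, depth 9
out, dropped = leaky_bucket([12, 2, 0, 0, 0], leak_rate=3, depth=9)
print(out, dropped)  # [3, 3, 3, 2, 0] 3
```

The output never exceeds the leak rate, which is exactly the peak-rate shaping the slide describes; the 3 dropped units show why the bucket depth matters.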
Token Bucket - 1
[Figure: tokens generated at r tokens/sec (long-term rate) into a bucket holding up to b tokens; arriving pkts wait for a token, remove one, and enter the network. Input: variable burst size; output: constant (bounded) burst size]

• Instead of holding arriving packets, the bucket holds up to b tokens
• Tokens are generated & placed into the bucket at a constant (long-term) rate r
(tokens/s) unless the bucket is full
• Maximum burst size (pkts) at the output is equal to the bucket depth b
• Average rate (pkts/s) at the output is equal to the token rate r
12
Token Bucket - 2
• when a pkt arrives and a token is available, the pkt is transmitted
immediately, and the number of tokens is reduced by one
– if there is no token, the pkt waits for a new token
• when the bucket is full, tokens are discarded (no pkts are discarded)
[Figure: cumulative pkts transmitted over time, bounded above by the line r·t + b]
• Number of pkts transmitted over an interval of length t is ≤ (r·t + b)*
• Token bucket allows bursty flows, but the burst sizes are upper bounded

* The upper bound might be achieved if the input and output links of the token bucket mechanism have infinite
capacity
13
Token Bucket Example - 1
Verify the formula r·t + b by yourself
(assumed parameters: r = 1 token/s, bucket depth b = 5, bucket initially full)
• Time 1: a burst of 5 pkts arrives – all 5 pkts depart (5 tokens consumed), 0 pkts left waiting
• Time 2: a second burst of 5 pkts arrives – only 1 new token has been generated, so 1 pkt departs, 4 pkts left
• Time 3: 1 pkt departs, 3 pkts left
• Time 4: 1 pkt departs, 2 pkts left
14
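Under this reading of the example (r = 1 token/s, depth b = 5, bucket initially full), a short simulation reproduces the departure pattern:

```python
# Token-bucket sketch: packets that find no token wait in a queue;
# one token is generated per second up to the bucket depth.

def token_bucket(arrivals, r, b):
    tokens, waiting, departures = b, 0, []   # bucket starts full
    for t, a in enumerate(arrivals):
        if t > 0:
            tokens = min(b, tokens + r)      # refill each second
        waiting += a
        sent = min(waiting, tokens)          # one token per packet
        departures.append(sent)
        tokens -= sent
        waiting -= sent
    return departures

# bursts of 5 pkts at times 1 and 2, nothing afterwards
print(token_bucket([0, 5, 5, 0, 0], r=1, b=5))  # [0, 5, 1, 1, 1]
```

Over the interval from time 1 to time 4 (t = 3), the output is 5 + 1 + 1 + 1 = 8 pkts, which meets the bound r·t + b = 1·3 + 5 = 8 exactly.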
Token Bucket Example - 2
Verify the formula r·t + b by yourself
(assumed parameters as before: r = 1 token/s, bucket depth b = 5, bucket initially full)
• Time 1: a burst of 5 pkts arrives – all 5 depart, emptying the bucket
• Times 2–5: no arrivals, so tokens accumulate again (one per second)
• Time 5: 4 pkts arrive – all 4 depart immediately
15
Scheduling Techniques
• Scheduling: mechanisms to determine the order of packets leaving
the queue
– E.g.: FIFO (First In First Out), Priority queuing, Round robin, Weighted round
robin, Fair queuing, Weighted fair queuing

• FIFO (or FCFS) – the default scheduling mechanism used for best-effort
networks
– All packets are queued in one buffer and the scheduler serves the packets from
the head of the queue
– Discard policies: tail drop, random early detection (RED), etc.

16
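A minimal FIFO queue with tail drop can be sketched as follows (class name and capacity are illustrative):

```python
# FIFO (FCFS) queue with tail drop: when the buffer is full,
# the arriving packet is discarded, never an already-queued one.
from collections import deque

class FifoQueue:
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.buf) >= self.capacity:
            self.dropped += 1          # tail drop
        else:
            self.buf.append(pkt)

    def dequeue(self):
        return self.buf.popleft() if self.buf else None

q = FifoQueue(capacity=3)
for p in ["a", "b", "c", "d"]:         # "d" arrives to a full queue
    q.enqueue(p)
print([q.dequeue() for _ in range(3)], q.dropped)  # ['a', 'b', 'c'] 1
```

RED would instead drop arriving packets probabilistically before the queue fills; only the drop decision changes, not the FIFO service order.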
Priority Queuing
• Multiple queues (classes) with different priorities
– classification based on Type of Service, source/dest IP addresses, source/dest TCP port numbers, etc.
• Transmit highest priority packets in the queue first
• Simple, but may lead to starvation of lower priority classes
– If server is too busy serving the higher priority class

17
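A strict-priority scheduler can be sketched with a heap (lower number = higher priority, FIFO within a class; the class name is illustrative):

```python
# Strict-priority scheduling: always serve the highest-priority
# non-empty queue first. A sequence number keeps FIFO order
# within the same priority class.
import heapq

class PriorityScheduler:
    def __init__(self):
        self.heap = []
        self.seq = 0

    def enqueue(self, priority, pkt):      # lower number = higher priority
        heapq.heappush(self.heap, (priority, self.seq, pkt))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

s = PriorityScheduler()
s.enqueue(2, "ftp1")
s.enqueue(1, "voip1")
s.enqueue(2, "ftp2")
s.enqueue(1, "voip2")
print([s.dequeue() for _ in range(4)])  # ['voip1', 'voip2', 'ftp1', 'ftp2']
```

If voip packets keep arriving, the ftp packets are never served – exactly the starvation problem noted above.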
Round Robin & WRR
• Round Robin (RR): maintains one queue for each class of traffic,
cyclically scans queues, serving one pkt from each class (if available)
• Weighted Round Robin (WRR): generalized RR, each class has a
weight, scheduler may serve a number of packets from each queue
– If, at its turn, a queue has fewer packets than its allocation, all pkts available in
that queue at that time are transmitted before the scheduler moves to the next
queue
– Priority can be given to a certain queue by changing its weight

18
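A WRR sketch in which each queue is served up to its weight per round (function name and flow contents are illustrative):

```python
# Weighted round robin: each turn, serve up to `weight` packets
# from a queue (fewer if the queue runs short), then move on.
from collections import deque

def wrr(queues, weights, rounds):
    qs = [deque(q) for q in queues]
    out = []
    for _ in range(rounds):
        for q, w in zip(qs, weights):
            for _ in range(w):
                if not q:
                    break                  # queue ran short this turn
                out.append(q.popleft())
    return out

order = wrr([["a1", "a2", "a3"], ["b1", "b2"]], weights=[2, 1], rounds=2)
print(order)  # ['a1', 'a2', 'b1', 'a3', 'b2']
```

With weights [1, 1] this degenerates to plain round robin; raising a queue's weight gives it priority without starving the others.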
Packet Size Problem

• Fairness problem: RR & WRR are packetwise schedulers
• Packet size is not considered in either mechanism
• They are unfair if packets have different sizes (biased toward
large pkts)
• Consider a router using simple RR with two flows with the
same rates (packets/s) but different packet sizes:
– Flow 1: 1000 byte packets - occupies 2/3 of link’s bandwidth
– Flow 2: 500 byte packets - occupies 1/3 of link’s bandwidth

• Flow 1 gets preferential treatment – unfair!
• Better to have a bitwise scheduler (a bit-by-bit round robin)
instead of a packetwise scheduler, but it is infeasible to
interleave bits from different packets

19
Fair Queuing
• Overcomes the problem with different pkt sizes
• Approximates the behaviour of a bit-by-bit round robin
• Determines when a pkt would finish if it were transmitted
using a bit-by-bit round robin
– Each pkt is tagged with the time its last bit would be transmitted
if a bit-by-bit round robin scheduler were used
– This time tag is called the virtual finishing time F(t)
[Figure: three flows with finishing-time tags F1(t), F2(t), F3(t) feeding one scheduler]
• The virtual finishing time determines the transmission order of
pkts
‒ The pkt with the lowest virtual finishing time is transmitted first
20
Why Virtual Time?

• Modeling the actual finish time is computationally intensive
(this clock advances very quickly – see the example later)
• Virtual time V(t) is used to reduce the computational effort
• Virtual time is incremented by one each time n bits (or more
generally, data units) are transmitted for n active flows
– e.g., if there are
3 active flows, we increment V(t) by 1 when 3 bits are transmitted
4 active flows, we increment V(t) by 1 when 4 bits are transmitted
– i.e., with n active flows, the virtual clock advances at speed 1/n
• So this clock advances more slowly when there are more
active flows

21
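The 1/n advance rate can be sketched directly (a toy helper, assuming the per-second active-flow counts are already known):

```python
# Virtual-clock sketch: V(t) advances at rate 1/n while n flows
# are active. Given the active-flow count in each one-second
# interval, return V(t) at the interval boundaries.

def virtual_times(active_counts):
    v = [0.0]
    for n in active_counts:
        v.append(v[-1] + (1.0 / n if n > 0 else 0.0))
    return v

# 1 active flow in [0,1), 2 in [1,2), 3 in [2,3)
print([round(x, 2) for x in virtual_times([1, 2, 3])])  # [0.0, 1.0, 1.5, 1.83]
```

These first values match the start of the V(t) table on the next slide: the clock slows down each time another flow becomes active.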
Real Time vs Virtual Time
[Figure: packet arrivals per flow on a real-time axis]
Flow 1: pkts of length 3, 3, 1 arriving at t = 0, 4, 8 (sec)
Flow 2: pkts of length 2, 5 arriving at t = 1, 5 (sec)
Flow 3: pkt of length 5 arriving at t = 2 (sec)

Real time t:       0  1  2    3     4     5     6     7  8  9    10
Virtual time V(t): 0  1  1.5  1.83  2.83  3.33  3.66  4  5  5.5  6.5
22
Fair Queuing
Definitions
F_i^k – virtual finish time of packet k of flow i
L_i^k – length of packet k of flow i
V(t) – virtual time when packet k of flow i arrives at the router

The virtual finishing time of packet k of flow i is

F_i^k = max(F_i^{k-1}, V(t)) + L_i^k

29
Fair Queuing - Example

F_i^k = max(F_i^{k-1}, V(t)) + L_i^k

Flow 1: pkts of length 3, 3, 1 at t = 0, 4, 8 → tags F = 3, 6, 7
Flow 2: pkts of length 2, 5 at t = 1, 5 → tags F = 3, 8.33
Flow 3: pkt of length 5 at t = 2 → tag F = 6.5

V(t): 0 1 1.5 1.83 2.83 3.33 3.66 4 5 5.5 6.5 (at real times t = 0 … 10)

F_1^1 = max(F_1^0, V(0)) + L_1^1 = max(0, 0) + 3 = 3
F_1^2 = max(F_1^1, V(4)) + L_1^2 = max(3, 2.83) + 3 = 6
F_1^3 = max(F_1^2, V(8)) + L_1^3 = max(6, 5) + 1 = 7
F_2^1 = max(F_2^0, V(1)) + L_2^1 = max(0, 1) + 2 = 3
F_2^2 = max(F_2^1, V(5)) + L_2^2 = max(3, 3.33) + 5 = 8.33
F_3^1 = max(F_3^0, V(2)) + L_3^1 = max(0, 1.5) + 5 = 6.5

Note: V(4), for example, denotes the virtual time corresponding to the
real time 4 (when pkt 2 of Flow 1 arrives). Thus V(4) = 2.83.

Since F_1^1 = F_2^1 = 3 (a tie), the two possible delivery orders are
F_1^1, F_2^1, F_1^2, F_3^1, F_1^3, F_2^2 and
F_2^1, F_1^1, F_1^2, F_3^1, F_1^3, F_2^2
30
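The worked example can be reproduced in a few lines (a sketch; the V(t) values are taken from the virtual-time table):

```python
# Fair-queuing sketch: tag each packet with
# F_i^k = max(F_i^{k-1}, V(t)) + L_i^k
# and transmit in order of increasing tag.

# V(t) at integer real times 0..10, from the virtual-clock slide
V = [0, 1, 1.5, 1.83, 2.83, 3.33, 3.66, 4, 5, 5.5, 6.5]

# (flow, arrival time, length) for every packet in the example
packets = [(1, 0, 3), (2, 1, 2), (3, 2, 5),
           (1, 4, 3), (2, 5, 5), (1, 8, 1)]

last_finish = {}                 # F_i^{k-1} per flow (0 if none yet)
tags = []
for flow, t, length in packets:
    f = max(last_finish.get(flow, 0), V[t]) + length
    last_finish[flow] = f
    tags.append((flow, t, f))

for flow, t, f in sorted(tags, key=lambda x: x[2]):
    print(f"flow {flow}, arrived t={t}: F = {f}")
```

Sorting by tag reproduces one of the two valid delivery orders (the tie at F = 3 between the first packets of Flows 1 and 2 can break either way).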


Weighted Fair Queuing (WFQ)
• Give more weight to a given flow, i.e., more service time
• Virtual finish time of packet k of flow i is

F_i^k = max(F_i^{k-1}, V(t)) + L_i^k / w_i

• WFQ reduces to FQ in the case w_i = 1
• Example: give flow F2 a weight of 2 in the following example

Finish time of each packet before weighting:
F1: A:50, B:100, C:150
F2: D:90, E:180
F3: F:120

Finish time of each packet after weighting (w2 = 2):
F1: A:50, B:100, C:150
F2: D:45, E:90
F3: F:120

• Output of FQ: A, D, B, F, C, E
• Output of WFQ: D, A, E, B, F, C (flow F2 tends to be served first)
31
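A sketch of the weighting effect (assuming, as in the slide's example, that every packet is backlogged from the start, so each tag is simply the previous tag plus L/w):

```python
# WFQ sketch: dividing a flow's packet lengths by its weight
# shrinks its finish tags, so the flow is served earlier.

def wfq_order(packets, weights):
    """packets: (flow, name, length) in arrival order, all assumed
    backlogged from t=0, so F_i^k = F_i^{k-1} + L/w. Returns the
    packet names in order of increasing finish tag."""
    last, tagged = {}, []
    for flow, name, length in packets:
        f = last.get(flow, 0) + length / weights.get(flow, 1)
        last[flow] = f
        tagged.append((f, name))
    return [name for f, name in sorted(tagged)]

pkts = [("F1", "A", 50), ("F1", "B", 50), ("F1", "C", 50),
        ("F2", "D", 90), ("F2", "E", 90),
        ("F3", "F", 120)]

print(wfq_order(pkts, {}))         # FQ  (all weights 1): A, D, B, F, C, E
print(wfq_order(pkts, {"F2": 2}))  # WFQ (w2 = 2):        D, A, E, B, F, C
```

With w2 = 2, flow F2's tags halve (D: 90 → 45, E: 180 → 90), reproducing the reordering shown on the slide.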
Integrated Services Model - IntServ

• an early architecture for providing QoS guarantees in IP networks
(e.g., allowing video and audio to reach the receiver without interruption)
• developed in 1994 (RFC 1633, July 1994)
• a flow-based mechanism: provides individual QoS guarantees to
individual application flows
• requires
– every router in the network to implement the IntServ model, and
– every application (that requires some kind of guarantee) to use the Resource
Reservation Protocol (RSVP) to request & reserve resources along the path
from the source to the destination
– if all nodes on this path can satisfy the required QoS, the transmission is started

32
Integrated Services
An application can request one of three levels of guarantee in IntServ
• Best effort service: normal internet service, no guarantee
• Controlled-load: flow receives QoS with
– delay and rate ≈ the desired values,
– but no firm bounds
• Guaranteed services: provides firm bounds on data throughput & delays
Important components in the IntServ architecture:
• Classifier
• Packet scheduler
• Admission control (or Call admission)
• Reservation protocol (RSVP)

33
IntServ Architecture
• Classifier
– maps each incoming pkt into some class
– pkts of the same class receive the same treatment from the scheduler
• Scheduler
– forwards different pkt streams in different manners using a set of queues
(QoS queuing and best-effort queuing)
– part of its function is policing
– might be FIFO, Priority Queuing, Round Robin, Weighted Round Robin,
Fair Queuing, or Weighted Fair Queuing
[Figure: IntServ architecture implemented at routers – background functions (routing protocols, reservation protocol, admission control, management agent, routing database, traffic control database) and forwarding functions (classifier, packet scheduler). Figure 19.10 in W. Stallings, Data and Computer Communications, 8th Ed., Pearson Education, Inc., USA, 2008.]

34
IntServ Architecture
• RSVP
– signaling protocol used by each host and router along the path
– to reserve resources for a new flow at a given level of QoS
• Admission control
– RSVP passes the required QoS (called the flowspec) to admission control
– admission control is used by each end-host and router along the path
to determine if resources are available for the requested QoS
[Figure: IntServ architecture implemented at routers. Figure 19.10 in W. Stallings, Data and Computer Communications, 8th Ed., Pearson Education, Inc., USA, 2008.]

35
Integrated Services - Signaling
• routers must maintain state info, i.e., records of allocated resources,
and must refresh this state info regularly
• based on state info & the request, routers respond to new call setup
requests
• all routers on the path must participate, to ensure QoS to be met
• what must client application send?

36
Flowspec = Tspec + Rspec
• The following must be sent to all routers on the path to reserve
resources:
– the required QoS service class (best effort, controlled-load or guaranteed services),
– the required guarantees (called Rspec – Reservation Specification), and
– the required traffic specification (called Tspec – Traffic Specification)

• Rspec: service requested from network – what guarantee does the flow
need?
– Bandwidth required
– Tolerable delay

• Tspec: flow’s traffic characteristics – what does the traffic look like?
– Sending rate + burst size: these are typically the token bucket parameters:
• Token rate r (determines the average rate)
• Bucket depth b (determines the burst size)
• Peak rate

37
Call Admission
Each router:
• Receives Flowspec from RSVP
• ‘Compares’ Flowspec and the current
state info (resource allocated to other
calls) and decides to admit or reject
calls
• Refreshes regularly the state info to
cope with dynamic behaviour of the
Internet (change of routes etc.)
– Typically every 30 seconds

38
Differentiated Services – DiffServ

• IntServ problem: routers become complicated
– they process lots of flows,
– a reservation must be made for each flow,
– reservation state is stored at each router and must be refreshed periodically,
– routers also need to classify, police and schedule these flows
• IntServ is typically applied in intranets only
• DiffServ proposed to support scalability for QoS in the Internet (RFC
2474 - Dec 1998)
• Diffserv works at class level (an aggregate of many flows treated in the
same way)
• Routers only need to distinguish a relatively smaller number of traffic
classes

39
Diffserv Architecture
[Figure: token-bucket marking/policing (rate r, depth b) at the edge router; scheduling at the core routers]
Edge router:
• per-flow traffic management
• marking packets as well as
policing flows

Core router:
• per-class traffic management
• scheduling based on marking at
edge routers

Simple in the network core,
relatively complex at edge
routers (or hosts)
40
Edge Routers
• Marking pkts based on rules (set by an admin, or
by a determined protocol)
– i.e., setting the 6-bit Differentiated Service Code
Point (DSCP) in the ToS field (IPv4) or Traffic Class
field (IPv6)
• Policing the flow, typically using the token
bucket mechanism with pre-negotiated rate
r and bucket size b
• Deciding either to delay and then forward, or to
discard pkts, if the pkts do not conform to the
declared parameters
[Figure: user pkts policed by a token bucket with rate r and depth b]

41
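Edge marking can be illustrated with the standard IP_TOS socket option (a hedged sketch: the DSCP occupies the upper 6 bits of the old ToS byte; the helper name is an illustration, and platform support for setting IP_TOS varies):

```python
# Sketch: computing the ToS byte for a DSCP value and applying
# it to a UDP socket via the standard IP_TOS option.
import socket

def dscp_to_tos(dscp):
    # DSCP occupies the upper 6 bits of the old ToS byte
    return dscp << 2

EF = 46                      # Expedited Forwarding code point
tos = dscp_to_tos(EF)        # 0xB8

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    s.close()
except (OSError, AttributeError):
    pass                     # marking may be unsupported on this platform

print(hex(tos))  # 0xb8
```

Every packet sent on such a socket then carries the EF mark that core routers use for per-class scheduling.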
Core Routers
• forward pkts based on their marks
• same marks – same forwarding manner,
though the pkts might belong to different flows
– e.g., if flows H2→H4 & H1→H3 have the same marks,
R3 treats these pkts as one aggregate flow (one
class)
• The forwarding manner is referred to as Per-
Hop Behaviour (PHB). A PHB is specified for
each particular pkt class
– the PHB influences how a router shares buffer space
and link bandwidth among the competing classes of
traffic
• Advantage over IntServ: QoS is provided
without having to store state information
for an individual flow/application at routers!

42
Example of Edge Routers

Source: http://www.cs.stir.ac.uk/~kjt/servprov/sample.html

43
Example of Core Routers

EF: Expedited Forwarding – high-priority pkt handling,
processed as fast as the arrival rate
AF: Assured Forwarding – similar to the Controlled-Load service in IntServ.
Source: http://www.cs.stir.ac.uk/~kjt/servprov/sample.html

44
Summary
This lecture Next lecture
• Traffic parameters
• Policing mechanisms • Application Layer
- Leaky bucket – Web
– HTTP
- Token bucket
• In-class quiz
• Scheduling algorithms
- FIFO
- Priority queuing
- Round robin, and
Weighted round robin
- Fair queuing, and
Weighted fair queuing
• IntServ and DiffServ
45
