
QoS Configuration
Migrations From
Classic IOS to IOS XE
David Roten, Technical Marketing Engineer

BRKARC-2031

Cisco Spark

Questions?
Use Cisco Spark to chat with the speaker after the session

How
1 Find this session in the Cisco Events App
2 Click “Join the Discussion”
3 Install Spark or go directly to the space
4 Enter messages/questions in the space

Cisco Spark spaces will be available until June 28, 2018.
cs.co/ciscolivebot#BRKACC-2031

Agenda
• Introduction
• What is new with QoS in IOS XE
• IOS XE vs IOS Classic behavior differences
• Platform specific caveats for IOS XE
• Specific use case migrations from Classic IOS to IOS XE
• Conclusion

Coverage and prerequisites
• We will be covering QoS on the routing platforms. We will not be covering switching platforms.
• Please be familiar with the concepts of shaping, queuing, policing, and MQC style configuration.

Introduction
• Multicore forwarding
• More forwarding horsepower means problems are no longer masked by linear processing
• More scheduling parameters for detailed control
• Identical configurations give different behavior on different platforms
• More rules for constructing hierarchies
• Added flexibility for non-traditional hierarchies in QoS
• Etherchannel QoS combination
• No longer are you restricted to VLAN balancing or member link QoS
• Multiple platforms behave the same way now
• Branch and headend routers now run the same code with the same behavior

QoS data path forwarding
The ASR1000 QFP hardware engine handles packet forwarding. In ISR4000 routers this is done in software using the same microcode.
A. An available core is allocated for packet processing (ingress QoS, police, WRED drops, mark, classify).
B. Based on default and user configurations, packets are scheduled for transmission based on the egress physical interface. All scheduling and queuing happens in this second phase.
[Diagram: QFP forwarding complex — dispatcher/buffers and packet processing cores (A) feed the BQS scheduling complex (B), backed by packet buffer memory, TCAM, and the policer assist resource, connected to the egress interfaces via the interconnect.]
What’s new with QoS in IOS XE

What’s new with QoS in IOS XE
• 2 versus 3 parameter scheduler
• Queue-limit options
• 2 levels of priority
• Aggregate Etherchannel QoS
• Tunnel QoS configurations
• Performance licenses
• Service-fragments
• Service groups
• Shaping parameters ignored
• Priority versus PAK_PRI
• Fair-queue scaling
2 versus 3 parameter scheduling
IOS XE QoS – 3 parameter scheduler
• IOS XE provides an advanced 3 parameter scheduler
• Minimum - bandwidth
• Excess - bandwidth remaining
• Maximum - shape

• 3 parameter schedulers share excess bandwidth equally in the default configuration
• bandwidth and bandwidth remaining may not be configured in the same policy-map (a sketch of this restriction follows)
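A minimal sketch of the restriction (hypothetical class names); mixing the two commands in one policy-map is not accepted:

policy-map invalid-mix
 class A
  bandwidth 5000
 class B
  bandwidth remaining ratio 2
! not accepted: bandwidth and bandwidth remaining cannot be mixed in the same policy-map
! pick one scheme for all queuing classes in the policy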

What are the three parameters?

policy-map child
 class voice
  priority level 1
  police cir 2000000 (bit/sec)
 class critical_services
  bandwidth 5000
 class internal_services
  shape average percent 100
 class class-default
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

Minimum is defined by the bandwidth command or priority classes with policers. Classes with these directives are guaranteed to receive at least, and maybe more, bandwidth.

Excess is defined by the bandwidth remaining command. Excess is the amount of bandwidth available once all the minimum guarantees are satisfied. By default, classes have a remaining ratio of 1 even if bandwidth remaining is not configured.

Maximum is implemented by shapers. Traffic rates beyond the shaper rates will be held in queues.

[Diagram: of the 25 Mb/s parent maximum, minimums take 2 Mb/sec (voice) and 5 Mb/sec (critical_services); the remaining 18 Mb/sec of excess is shared 6 Mb/sec each across the three non-priority classes.]
3 versus 2 parameter scheduler behavior

policy-map test
 class D1
  bandwidth percent 12
 class D2
  bandwidth percent 6
 class class-default
  bandwidth percent 2

• 2 parameter schedulers share excess bandwidth proportionally. If the above policy-map is overdriven, the 80% of remaining bandwidth is proportionally split amongst all three queues:
  12% + (12 / (12+6+2) × 80%) = 60%
• 3 parameter schedulers share excess bandwidth equally. If the above policy-map is overdriven, the 80% of remaining bandwidth is equally split amongst all three queues:
  12% + (80% / 3) = 12% + 26.67% = 38.67%

[Chart: minimum guarantee plus excess share for D1, D2, and class-default under the 3 parameter vs 2 parameter schedulers.]
2 ways to manage bandwidth remaining (excess)
• The bandwidth remaining (BR) commands adjust sharing of excess bandwidth
• Two options are available:
• bandwidth remaining ratio X, where X ranges from 1 to 1000, with variable
base
• bandwidth remaining percent Y, where Y ranges from 1 to 100, with fixed
base of 100
• bandwidth remaining percent (BR%) based allocations remain the same as classes are
added to a configuration
• bandwidth remaining ratio (BRR) based allocations adjust as more queuing classes are
added to a configuration with or without BRR configured
• base changes as new classes are added with their own ratios defined or with a
default of 1
• By default, all classes have a bandwidth remaining ratio or percent value of 1
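A quick worked contrast of the two options (hypothetical values, consistent with the BR% vs BRR slide later in this section):
• bandwidth remaining percent 12 always grants 12/100 of the excess, no matter how many classes are later added.
• bandwidth remaining ratio 12 grants 12/21 ≈ 57% of the excess when the other classes contribute ratios 6, 2, and 1; add one more class with the default ratio of 1 and the base grows to 22, shrinking the share to 12/22 ≈ 54.5%.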

3 versus 2 parameter scheduler behavior
• 3 parameter schedulers share excess bandwidth equally after satisfying minimum requirements
• 2 parameter schedulers share excess bandwidth proportionally based on the minimum rate configured
  • This gives even more of an advantage to traffic that was given a heavy weight in the first place
• To get 2 parameter behavior from the IOS XE scheduler, use a BRR configuration (sketched below). This grants no minimums, but the excess sharing is proportional from the satisfied minimum to the max rate, which is essentially what the 2 parameter scheduler provides with minimum + proportional excess.
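A minimal migration sketch (hypothetical class names), similar to the bandwidth configuration migration example at the end of this session:

Classic IOS (2 parameter proportional sharing):
policy-map child
 class D1
  bandwidth percent 12
 class D2
  bandwidth percent 6

IOS XE with equivalent sharing via excess configuration:
policy-map child
 class D1
  bandwidth remaining ratio 12
 class D2
  bandwidth remaining ratio 6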

Required minimums
• 1st tier guarantees
• priority classes (with policers that are a hard limit on bandwidth)
• bandwidth classes

• 2nd tier guarantees (aka excess sharing)
• Only applies after minimum requirements from 1st tier have been satisfied
• By default equal sharing
• Sharing can be modified with bandwidth remaining command
• In isolation acts like Classic IOS

QoS 3 parameter scheduler – minimum

policy-map child
 class voice
  priority level 1
  police cir 1000000 (bit/sec)
 class critical_services
  bandwidth 10000 (kbit/sec)
 class internal_services
  bandwidth 10000 (kbit/sec)
 class class-default
  bandwidth 1000 (kbit/sec)
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

With 10 Mb/sec injected into each of the four classes against the 25 Mb/s parent shaper: voice is policed to 1 Mb/sec (9 Mb/sec dropped); critical_services and internal_services each receive their 10 Mb/sec minimum; class-default receives its 1 Mb/sec minimum plus the 3 Mb/sec of excess calculated after the required minimums are satisfied, so it forwards 4 Mb/sec and drops 6 Mb/s.
QoS 3 parameter scheduler – excess ratio

policy-map child
 class voice
  priority level 1
  police cir 1000000 (bit/sec)
 class critical_services
  bandwidth remaining ratio 4
 class internal_services
  bandwidth remaining ratio 1
 class class-default
  bandwidth remaining ratio 1
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

Injected traffic: 10 Mb/sec to voice, 5 Mb/sec to critical_services, 15 Mb/sec to internal_services, and 10 Mb/sec to class-default. Voice is policed to 1 Mb/sec (9 Mb/sec dropped). The remaining 24 Mb/sec of excess is shared at a 4:1:1 ratio, nominally 16:4:4 Mb/sec; since critical_services only offers 5 Mb/sec, its unused share is redistributed, and internal_services and class-default each receive 9.5 Mb/sec (dropping 5.5 and 0.5 Mb/sec respectively).
QoS 3 parameter scheduler – excess %

policy-map child
 class voice
  priority level 1
  police cir 1000000 (bit/sec)
 class critical_services
  bandwidth remaining percent 40
 class internal_services
  bandwidth remaining percent 10
 class class-default
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

Injected traffic: 10 Mb/sec to voice, 5 Mb/sec to critical_services, 15 Mb/sec to internal_services, and 10 Mb/sec to class-default. Voice is policed to 1 Mb/sec (9 Mb/sec dropped). The remaining 24 Mb/sec of excess is shared at a 40:10:50 ratio, nominally 9.6:2.4:12 Mb/sec; critical_services only offers 5 Mb/sec and class-default only offers 10 Mb/sec, so the unused excess flows to internal_services, which receives 9 Mb/sec and drops 6 Mb/sec.
BR% vs BRR behavior

policy-map test
 class D1
  bandwidth remaining percent 12
 class D2
  bandwidth remaining percent 6
 class D3
  bandwidth remaining percent 2

policy-map test
 class D1
  bandwidth remaining ratio 12
 class D2
  bandwidth remaining ratio 6
 class D3
  bandwidth remaining ratio 2

            D1            D2           D3            c-d
BR percent  12%           6%           2%            80%
BR ratio    57% (12/21)   29% (6/21)   9.5% (2/21)   4.5% (1/21)

• Each class is driven with a packet stream which exceeds the parent shaper rate
• BR% offers class-default 80% since it is based on a total pool of 100
• BRR offers class-default only 4.5% since it is based on a pool of 21 (12+6+2+1)
Queue limit options
Queue-limit options
• Classic IOS offered only units of packets for defining queue limits
• IOS XE on ASR1000 offers packets and also offers time and byte based units
• ISR4000 and CSR1000V allow packet based configuration only
• Time and byte based are essentially the same thing since time is simply changed into bytes based on the speed of the interface (or parent shaper)
• 150 ms of latency on a GigabitEthernet interface and on OC3 POS:

0.150 sec × (1E9 bits/sec) × (1 byte / 8 bits) = 18,750,000 bytes

0.150 sec × (155E6 bits/sec) × (1 byte / 8 bits) = 2,906,250 bytes
Queue-limit options
• Time units allow the flexibility to have a single policy-map work for multiple interfaces at different speeds
• Using time / byte based units gives a consistent latency profile
• The latency associated with 4000 packets is much different if those packets are 64 or 1500 bytes
• With a GigE interface the difference is 2 msec vs. 48 msec (64 vs. 1500 byte packets):

4000 packets × 64 bytes/packet × 8 bits/byte ÷ 1E9 bits/second = 2 msec

4000 packets × 1500 bytes/packet × 8 bits/byte ÷ 1E9 bits/second = 48 msec
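A hedged configuration sketch tying this back to the byte math above (assuming an ASR1000, where byte units are supported):

policy-map byte-based
 class class-default
  queue-limit 18750000 bytes
! roughly 150 ms of buffering at 1 Gb/sec, per the GigabitEthernet calculation above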
Queue-limit options

policy-map queue-limit
 class class-default
  queue-limit 150 ms

Rtr#show policy-map int GigabitEthernet0/3/0

  GigabitEthernet0/3/0

  Service-policy output: queue-limit

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any

      queue limit 150 ms/ 18750000 bytes
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0

Rtr#show policy-map int Serial1/1/0

  Serial1/1/0

  Service-policy output: queue-limit

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any

      queue limit 150 ms/ 828937 bytes
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0

• All queue-limit units in a policy and its related hierarchy must be the same units.
• This precludes modification on the fly; you must remove, modify, and reapply.
IOS XE – queuing memory management
• Packet Buffer DRAM treated as one large buffer pool
• QFP has specific pool of memory for packet buffering, ISR4400 platforms use
shared FP memory, ISR4300 platforms use common memory for control and data
planes
• Two main building blocks for packet buffer DRAM on ASR1000
Block: each queue gets 1 Kbyte blocks of memory
Particle: packets are divided into 16 byte particles and linked together
• Advantages to such an implementation:
Less complex than buffer carving schemes
Fragmentation is minimal & predictable due to small sized blocks & particles
• Thresholds exist to protect internal control traffic and priority traffic
Default queue-limit
• Classic IOS used 64 packets as the default queue-limit for all queues
• IOS XE uses 512 packets for priority queues and 50ms (with one exception
of 25ms on ESP-40) of MTU sized packets for all other queues, with a
strict minimum of 64 packets
• Don’t adjust priority queue limits unless you have a really good reason
• IOS XE platforms ignore the following buffering parameters for interface
QoS
• interface hold-queue
• global IOS buffers configuration
• IOS displays the effects of these commands, but they have no effect on traffic forwarding or QoS behavior (see the sketch below)
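A small sketch of the practical consequence (hypothetical numbers): interface-level buffer tuning carried over from a Classic IOS config does nothing here, so deeper buffering belongs in the policy-map instead.

interface GigabitEthernet0/0/0
 hold-queue 4096 out
! accepted by the parser, but ignored by IOS XE QoS
!
policy-map deeper-buffers
 class class-default
  queue-limit 4096
! queue-limit in the policy-map is the supported way to grow a queue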

IOS XE – queue limit management
• Without an MQC QoS configuration, interface default queues have 50 ms of buffering in a packet based configuration (except on ASR1000 with ESP-40, which uses 25 ms):

queue_limit_packets = (interface_speed bits/sec × 0.050 sec) / (interface_mtu bytes/packet × 8 bits/byte)

• MQC created queues have 50 ms of buffering by default in a packet based configuration (on all IOS XE platforms). The speed used, however, depends on the configuration:

queue_limit_packets = (speed bits/sec × 0.050 sec) / (interface_mtu bytes/packet × 8 bits/byte)
IOS XE – queue limit management
• Default queue limits are based on 50 ms (except for ASR1000 ESP-40 at 25 ms)
• If the bandwidth parameter is configured, then that rate is used as the speed value
• If the bandwidth percent parameter is configured, then the rate based on that percentage of the parent is used as the speed value
• If a shape parameter is configured, then that rate is used as the speed value
• If only bandwidth remaining is configured, then the parent’s speed value is used

queue_limit_packets = (speed bits/sec × 0.050 sec) / (interface_mtu bytes/packet × 8 bits/byte)
IOS XE – queue limit management

policy-map child
 class p7
  priority
  police cir percent 2
 class p6
  bandwidth 10000
  shape average 200000000
 class p5
  shape average 200000000
 class class-default
!
policy-map parent
 class class-default
  shape average 800000000
  service-policy child

• priority classes always have a default queue depth of 512 packets
• bandwidth classes use the configured bandwidth value: 10E6 × 0.050 / (1500 × 8) = 41, raised to the 64 packet minimum
• shape only classes use the configured shape value: 200E6 × 0.050 / (1500 × 8) = 832 packets
IOS XE – queue limit management

policy-map child
 class p7
  priority
  police cir percent 2
 class p6
  bandwidth remaining ratio 20
  shape average 30000000
 class p5
  bandwidth remaining ratio 10
 class class-default
!
policy-map parent
 class class-default
  shape average 800000000
  service-policy child

• priority classes always have a default queue depth of 512 packets
• classes with shape use the configured shape value: 30E6 × 0.050 / (1500 × 8) = 125 packets
• bandwidth remaining only classes use the parent shape value (800 Mbit/sec): 800E6 × 0.050 / (1500 × 8) = 3,333 packets
2 levels of priority
2 levels of priority
• IOS Classic offers a single level of priority
• IOS XE offers 2 levels of low latency queuing
• IOS Classic priority configuration migrates forward without
modification
• It is recommended to utilize priority level 1 for voice and level 2 for
video
• Priority level 1 queues are always drained before priority level 2
queues.
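A minimal sketch of that recommendation (hypothetical class names and policer rates):

policy-map two-level-llq
 class voice
  priority level 1
  police cir 10000000
 class video
  priority level 2
  police cir 50000000
 class class-default
! voice (level 1) is always drained ahead of video (level 2)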

Priority differences
• IOS XE: Naked / strict priority consumes ALL available bandwidth from the
parent
• No other classes in that policy can use bandwidth command to reserve a
minimum
• IOS Classic: Naked / strict priority consumes 99% of all available bandwidth
from the parent, 1% reserved for all other classes in aggregate
• IOS Classic priority commands with policer percent, use the parent’s
minimum rate for calculation. (IOS XE QoS uses the parent’s maximum
rate.)
• IOS Classic limits priority throughput to the minimum guarantee of the
parent during physical interface congestion.
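A hedged sketch contrasting the two priority styles on IOS XE (hypothetical class name; rates are illustrative):

policy-map hard-priority
 class voice
  priority
  police cir percent 10
! hard policer; on IOS XE the percent is computed from the parent's maximum rate

policy-map conditional-priority
 class voice
  priority percent 10
! conditional policer; only activated when the parent node or physical interface is congested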

Priority behavior differences

Strict Priority
• Classic IOS: Maximum throughput is the minimum guaranteed for the particular child. By default, the minimum guarantee is physical bandwidth / number of classes; a minimum can be configured in the parent. If the parent is oversubscribed, each child will get up to its minimum of priority traffic.
• IOS XE: Maximum throughput is up to the parent maximum. A minimum cannot be configured in the parent; control with bandwidth remaining ratio instead. If the parent is oversubscribed, each child will get proportional bandwidth according to the parent excess configuration.

Priority percent
• Classic IOS: Maximum throughput during congestion is calculated based on the parent minimum or maximum value, whichever is less.
• IOS XE: Maximum congestion throughput is always calculated on the parent maximum. If the physical interface is oversubscribed, each child will get proportional bandwidth according to the parent excess configuration.

Police percent
• Classic IOS: Determined by maximum rate. When used in conjunction with priority, the percentage of the minimum guarantee is deducted from the bandwidth available to other classes.
• IOS XE: Always determined by maximum rate. The minimum guaranteed for priority propagation uses the rate determined from the maximum.
Aggregate Etherchannel QoS
IOS XE Etherchannel QoS support
• With VLAN based load balancing
1. Egress MQC queuing configuration on Port-channel sub-interfaces
2. Egress MQC queuing configuration on Port-channel member link
3. Policy Aggregation – Egress MQC queuing on sub-interface
4. Ingress policing and marking on Port-channel sub-interface
5. Egress policing and marking on Port-channel member link
6. Policy Aggregation – Egress MQC queuing on main-interface (XE2.6 and higher)
• Active/standby with LACP (1+1)
7. Egress MQC queuing configuration on Port-channel member link (XE2.4 and higher)
9. Egress MQC queuing configuration on PPPoE sessions, model D.2 (XE3.7 and higher)
10. Egress MQC queuing configuration on PPPoE sessions, model F (XE3.8 and higher)
• Etherchannel with LACP and load balancing (active/active)
8. Egress MQC queuing configuration supported on Port-channel member link (XE2.5 and higher)
11. Aggregate Etherchannel QoS support on Port-channel main-interface (XE3.12 and higher)

IOS XE Aggregate GEC QoS
• Without μcode gymnastics, all queuing QoS hierarchies must be rooted to a physical interface
  • This runs in direct opposition to the concept of a logical Etherchannel interface, since multiple interfaces need QoS in aggregate
• Aggregate GEC QoS, enabled with platform qos port-channel-aggregate #, creates a new hierarchy that is rooted on a logical recycle interface
• After packets run through the aggregate schedule, they are recycled for an abbreviated run through the PPEs for Etherchannel processing, then queued again through internally defined member-link hierarchies (a configuration sketch follows)
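A minimal enablement sketch (hypothetical interface numbering; the platform command generally needs to be in place before the Port-channel interface is created):

platform qos port-channel-aggregate 1
!
interface Port-channel1
 service-policy output parent
!
interface GigabitEthernet0/0/0
 channel-group 1 mode active
!
interface GigabitEthernet0/0/1
 channel-group 1 mode active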

IOS XE Aggregate GEC QoS
• Policy-maps can be attached to Port-channel main-interface,
sub-interfaces, and service-groups on the Port-channel for input and
output directions
• No restrictions for contents of the policy-map other than what exist already
for GigabitEthernet interfaces
• For example, all of the following are supported:
• 3 levels of hierarchy (queuing + nonqueuing)
• 2 levels of priority
• WRED
• fair-queue, plus other features

IOS XE Aggregate GEC QoS
• Aggregate GEC member-links are configured internally with high and low
priority queues.
• Traffic classified as high priority in the aggregate GEC policy is enqueued through the high priority queues on the member-link hierarchies.
• Priority levels 1 and 2 in the user-defined policy-map on the Port-channel
main-interface will use the same priority queue on the physical interface

IOS XE Aggregate GEC QoS
• If Port-channel does not have enough bandwidth to service all
configured minimums, policy-map will go into suspended mode
• For example, there are more than 1 Gb/sec of minimums and a 2 member Port-channel interface has one of its member links go down
• Once enough member links are available to provide the guaranteed
minimums, service-policy will leave suspended mode and go into effect.

• Support for TenGigabitEthernet interfaces as of XE 3.16.3 and XE 16.3
• Support for ISR4000 series as of IOS XE 16.6.1

IOS XE Aggregate GEC QoS
• If one of the member links is overdriven, it will exert back pressure
on the aggregate hierarchy
• Flow based load balancing could be lopsided and one of the member
links is utilized at 100% and its egress queues fill up
• When this happens, backpressure is exerted on the aggregate hierarchy
and all traffic will be queued even if it is destined for an underutilized
member link
• This happens because the flow based load balancing happens after the queuing policy-map has been applied on the aggregate hierarchy

Aggregate GEC QoS backpressure
Feedback from the Ethernet interface to the grandparent happens fast enough that the grandparent can slow down before the Ethernet interface starts tail dropping.

[Diagram: child and parent schedules feed a grandparent node on the recycle interface; after QFP PPE feature processing, packets recycle through Etherchannel flow load balancing to the GigE member links on the SIP. Bandwidth may be available on a member link yet unusable because the recycle queue is under backpressure.]
Tunnel QoS
Basic IOS XE tunnel QoS rules
• IOS XE
• Tunnel hierarchies have to be rooted to a physical interface in the hardware
• This is the source of most of the restrictions with tunnel QoS
• In IOS XE, traffic can only be queued once. So if there isn’t magic glue to link a queuing policy-map on the tunnel to the egress physical interface, then the configuration is not supported.
• The only supported case is a physical interface with a class-default only shaper (sketched below)
• Classic IOS
• Supports a full queuing policy-map on the tunnel, and then another full queuing
policy-map with user based queues on the physical interface.
• Queues the packet twice
• Classifies the packet twice
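A hedged sketch of the one supported IOS XE combination (hypothetical names and rates): a full queuing policy on the tunnel, and only a class-default shaper on the physical interface.

policy-map tunnel-child
 class voice
  priority
  police cir 2000000
 class class-default
!
policy-map tunnel-parent
 class class-default
  shape average 50000000
  service-policy tunnel-child
!
policy-map physical-shaper
 class class-default
  shape average 100000000
!
interface Tunnel0
 service-policy output tunnel-parent
!
interface GigabitEthernet0/0/0
 service-policy output physical-shaper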

IOS XE Egress GRE tunnel QoS details
Interface and tunnel policy combination use cases:

Tunnel policy-map with queuing actions
• Physical interface QoS policy with queuing actions: Only a class-default only policy-map on the physical interface is supported. Packets will be managed by the tunnel policy, then rate limited by the interface policy (class-default only).
• Physical interface QoS policy without queuing actions: Tunnel packets bypass the interface policy-map.

Tunnel policy-map without queuing actions
• Physical interface QoS policy with queuing actions: Tunnel packets go through the tunnel policy-map fully and then through the interface policy-map (class-default only).
• Physical interface QoS policy without queuing actions: Tunnel packets go through the tunnel policy-map fully and then through the interface policy-map (interface default queue). The interface policy-map has no effect; traffic is only classified once in IOS XE.

• A maximum of two level policy-map hierarchies is allowed on GRE tunnels.
• Encryption is executed prior to egress queuing.
• In XE2.5 software and later, traffic is classified and prioritized by the tunnel QoS policy-map before being enqueued for cryptography. Packets marked as priority levels 1 and 2 will move through a high priority queue for the crypto hardware.
Per SA QoS for DMVPN
• Single tunnel interface configuration with multiple QoS policies available
• Hub QoS hierarchies have these restrictions:
  • 2 level queuing hierarchy
  • Parent can have only class-default with a shaper / bandwidth remaining
  • Additional support for a physical interface class-default only policy-map
• If the egress interface changes, QoS on the DMVPN tunnel stays intact after switching physical egress interfaces
• Supports up to 4000 DMVPN tunnels with QoS
• RP2 / RP3 / 1001-X / 1002-X / 1002-HX for this scale, other hardware varies

Per SA QoS for DMVPN config

policy-map child
 class voice
  police cir 100000
  priority
 class critical_data
  bandwidth remaining ratio 10
 class class-default
  bandwidth remaining ratio 1
!
policy-map parent-15meg
 class class-default
  shape average 15000000
  bandwidth remaining ratio 15
  service-policy child
!
policy-map parent-10meg
 class class-default
  shape average 10000000
  bandwidth remaining ratio 10
  service-policy child
!
policy-map gr-parent
 class class-default
  shape average 10000000
!
interface Tunnel 1
 ip nhrp map group 10meg service-policy output parent-10meg
 ip nhrp map group 15meg service-policy output parent-15meg
 ip nhrp map group 20meg service-policy output parent-20meg
!
interface GigabitEthernet0/0/0
 service-policy output gr-parent

• The client identifies a specific NHRP group and the matching QoS policy is applied. The client is manually configured with the matching NHRP group.
• A grandparent shaper is allowed on the egress interface for DMVPN tunnels.
Adaptive QoS for DMVPN
• Gives you the ability to have tunnels that dynamically adjust their shaping rate. Ideal for scenarios where the transmit rate is limited by conditions at the receiving end of the tunnel (DSL conditions, saturated DOCSIS networks)
• Supported only on DMVPN tunnels
• Limited to configuration of class-default only parent of egress
policy-map
• Must use NHRP group to apply policy-map as NHRP is used to
communicate information about available bandwidth

Adaptive QoS
policy-map example
class class-default
shape adaptive upper-bound 50M lower-bound 30M
!
interface tunnel 0
ip address 192.168.1.1 255.255.255.0
ip nhrp authentication cisco123
ip nhrp map multicast dynamic
ip nhrp network-id 1
tunnel source GigabitEthernet0/0/0
tunnel mode gre multipoint
tunnel protection ipsec profile cisco
nhrp map group 1 service-policy output example

DSCP Tunnel Marking example
policy-map change_marking
class class-default
set precedence tunnel 6
!
interface Tunnel 1
ip address 1.0.0.1 255.255.255.252
tunnel source 20.0.0.1
tunnel destination 30.0.0.2
service-policy output change_marking
!
interface GigabitEthernet0/0/0
ip address 30.0.0.1 255.255.255.0

Packet at ingress:  [L2 header][IP header, prec=3][payload]
Packet at egress:   [L2 header][GRE IP header, prec=6][IP header, prec=3][payload]
Performance licenses and their impact on QoS
Performance license
• Performance licenses on the ASR1000 and ISR4000 routers are implemented by additional high level QoS nodes inserted into the queueing hierarchies
• Performance licenses affect all external physical interfaces
  • Includes switch modules and UCS-E modules in ISR4000s
  • Does NOT include any traffic punted to the control plane or service containers
  • Does NOT include traffic that stays within a switch module or UCS-E module

[Diagram: root schedule with a performance license shaper node parenting the external physical interfaces; punt to control plane, service container, UCS-E module, and front panel GigE paths, with internal “physical interfaces” and their queues below.]
Verify license rate in the hardware
R#show plat hard qfp active infra bqs int GigabitEthernet0/0/0 hier detail | beg License

Index 1 (SID:0x0, Name: License Node)


Software Control Info:
sid: 0x3f011, parent_sid: 0x3fc03, obj_id: 0x6, parent_obj_id: 0x5
evfc_fc_id: 0xffff, fc_sid: 0x3f011, num_entries (act): 8, service_frag: False
num_children: 9, total_children (act/inact): 8, presize_hint: 0
debug_name: License Node
sw_flags: 0x080203ca, sw_state: 0x00000801, port_uidb: 0
orig_min : 0 , min: 5000000000
min_qos : 0 , min_dflt: 5000000000
orig_max : 0 , max: 5000000000
max_qos : 0 , max_dflt: 5000000000
share : 500

Performance licensing gotcha workarounds
• With SourceFire on a UCS-E module, traffic is routed to UCS-E
module and then back out physical interface. This double counts
against the license.
• Use policer based licensing instead. Policer can distinguish between the
UCS-E generically and UCS-E configured as SourceFire.
platform hardware throughput exceed-action drop
• With Smart Licensing on the CSR1000V, the router will configure
queue-limits based on default values. After the SL server upgrades
the throughput, default queue depths are not adjusted.
• Configure “bandwidth qos-reference X” on interfaces for the
anticipated final allowed platform bandwidth
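A hedged sketch of that workaround (assuming a final licensed throughput of 1 Gb/sec; the value is in kilobits per second, like the bandwidth command):

interface GigabitEthernet1
 bandwidth qos-reference 1000000
! default queue-limits are then derived from the anticipated 1 Gb/sec rather than the pre-upgrade license rate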

Service fragments
ASR 1000 QoS service-fragments
• Typically, hierarchies are very rigid with strict parent – child
relationships
• Service-fragments allow QoS hierarchies in which queues can be parented by an entity outside of the strict hierarchy
• Model 4
Allows queue conservation in scale configurations

• Model 3
Allows aggregation shaping of some traffic while having per session / sub-
interface limits on other classes of traffic

ASR 1000 QoS service-fragments model 4
Useful for queue conservation in scaled session / service deployments: multiple instances of the sub-interface fragment BE are parented by a single service-fragment at the interface level.

policy-map sub-int-mod4
 class EF
  police 1000000
 class AF4
  police 2000000
 class class-default fragment BE
  shape average 75000000
!
policy-map int-mod4
 class data service-fragment BE
  shape average 128000000
 class EF
  priority level 1
 class AF4
  priority level 2
 class AF1
  shape average 50000000
  random-detect
!
interface GigabitEthernet0/0/0
 service-policy output int-mod4
!
interface GigabitEthernet0/0/0.2
 service-policy output sub-int-mod4

[Diagram: interface priority queues and the interface default queue hang off the interface schedule under the SIP root; the per-sub-interface fragment BE queues are parented by the interface-level service-fragment.]
ASR 1000 QoS service-fragments model 3
Allows aggregate shaping of some traffic across sub-interfaces while keeping per sub-interface priority queues and limits on other classes.

policy-map int-mod3
 class data service-fragment BE
  shape average 128000000
 class class-default
  shape average 100000000
!
policy-map sub-int-mod3
 class EF
  police 1000000
  priority level 1
 class AF4
  police 2000000
  priority level 2
 class class-default fragment BE
  shape average 115000000
  service-policy sub-int-child-mod3
!
policy-map sub-int-child-mod3
 class AF1
  shape average 100000000
 class class-default
  shape average 25000000

[Diagram: multiple sub-interface instances of fragment BE, each with its own priority queues, are parented by the interface-level service-fragment BE alongside the interface class-default queue, under the interface schedule and SIP root.]
Service groups
What are service-groups?
• Service-groups allow linking multiple L3 sub-interfaces and L2
service instances together for the purpose of aggregated QoS
• Before service-groups
• QoS policies could be applied to individual L3 sub-interfaces, individual
L2 service instances, or to ethernet main interfaces
• In order to group multiple L3 or L2 entities together for QoS, a “mega-
policy” on the main interface which classified multiple vlans in the
topmost layer was required.
• If various groups of vlans on the same physical interface required QoS,
the configuration quickly became unmanageable.

Support for service-groups
• Added in XE3.15, released March 2015
• Supported on ASR1000, CSR1000V, and ISR4000 series platforms
• Same functionality across all above platforms
• Supported on physical interfaces and Aggregate Port-channels
• Same scalability for all platforms

New configuration commands for service-groups

policy-map alpha
 class class-default
  shape average 10000000
!
interface GigabitEthernet0/0/0
 service instance 11 ethernet
  encapsulation dot1q 11
  group 10
 service instance 12 ethernet
  encapsulation dot1q 12
  group 10
!
interface GigabitEthernet0/0/0.13
 encapsulation dot1q 13
 group 10
!
interface GigabitEthernet0/0/0.14
 encapsulation dot1q 14
 group 10
!
service-group 10
 service-policy alpha

• Use the group keyword to put service instances and sub-interfaces into a service-group.
• Use the service-group command as the application point for QoS policies.
Service-group configuration
• Ingress and egress policy-maps are supported on service-groups
• Up to three levels in policy-maps (ingress and egress)
• Same hierarchy restrictions as policy-maps applied to main-interfaces
• Support for all Ethernet interfaces and Aggregate Port-channels
• No support for non-ethernet interface types
• Statistics are collected on a per service-group level and will not be
available per service-group member unless explicitly classified as
part of the service policy

Restrictions
• All members of a given service-group must be on the same
physical interface
• A sub-interface or service instance can belong to only one service-group at a time
• Sub-interfaces and service instances in a service-group can not
have a policy-map applied other than on the service-group
• Tunnels with QoS egressing through service-group not supported

Disallowed configuration examples

Example 1 – A policy-map is not allowed on a sub-interface or service instance when it is included in a service-group:

policy-map alpha
 class class-default
  shape average 10000000
!
policy-map beta
 class class-default
  shape average 10000000
!
interface GigabitEthernet0/0/0
 service instance 10 ethernet
  encapsulation dot1q 10
  service-policy alpha
  group 10
!
service-group 10
 service-policy beta

Example 2 – Service-group members can not span across multiple physical interfaces:

policy-map beta
 class class-default
  shape average 10000000
!
interface GigabitEthernet0/0/0.10
 encapsulation dot1q 10
 group 10
!
interface GigabitEthernet0/0/1.20
 encapsulation dot1q 20
 group 10
!
service-group 10
 service-policy beta
Service groups use case
• Left side branch
  • Traditional deployment with a single VLAN servicing the entire branch
• Right side branch
  • Serviced by a single downlink across the WAN but uses VLANs (blue and yellow) to differentiate business traffic and customer BYOD traffic
  • From the headend ASR1000, it is necessary to rate limit traffic to the CPE as a whole but also make guarantees to the business class traffic over the BYOD traffic via VLAN classification
Service groups use case

policy-map service-group-1000
 class class-default
  shape average 10000000
  service-policy right-branch
!
policy-map right-branch
 class vlan106
  bandwidth remaining ratio 20
  service-policy workers
 class vlan107
  bandwidth remaining ratio 1
  service-policy guests
!
policy-map workers
 class voice
  priority
  police cir 1000000
 class business
  bandwidth 8000
 class class-default
!
policy-map guests
 class marketing
  bandwidth remaining ratio 10
 class class-default
  bandwidth remaining ratio 1

The vlanX classes are basic match vlan X class-maps.

[Diagram: one GigE headend link; vlan101 carries the left branch (voice / business / best effort), while vlan106 (workers) and vlan107 (guests: marketing / best effort) share service-group 1000.]
Service groups use case
The vlanX classes used in the following slides are basic match vlan X class-maps:

class-map match-any vlan101
 match vlan 101
!
class-map match-any vlan106
 match vlan 106
!
class-map match-any vlan107
 match vlan 107
!
class-map match-any vlans-106-107
 match vlan 106
 match vlan 107
Service groups use case
The full hierarchy: service-group-1000 is level 1, right-branch is level 2, and workers / guests are level 3 (these policy-maps are as defined on the previous slides). The left branch gets its own 2 level policy on its sub-interface.

policy-map left-branch
 class class-default
  shape average 5000000
  service-policy workers
!
interface Gig0/0/0.101
 encapsulation dot1q 101
 service-policy left-branch
!
interface Gig0/0/0.106
 encapsulation dot1q 106
 group 1000
!
interface Gig0/0/0.107
 encapsulation dot1q 107
 group 1000
!
service-group 1000
 service-policy service-group-1000
Service groups use case
• vlan 101 hierarchy: 2 levels of queuing (QQ)
• vlans 106 and 107 hierarchy (service-group 1000): 3 levels of queuing (QQQ)
Same setup without service groups
• An identical configuration without service-groups is not possible
• In order to build a QoS mega-policy on the main interface, it would
require:
• 3 levels of queuing
• grandparent level that classifies based on vlan
• the previous two items are mutually exclusive

• The following slide presents that invalid configuration.


• The permitted configuration would only allow 2 levels of queuing
and then a third level that could mark or police traffic only

Same setup without service groups

policy-map GigE
 class vlans-106-107
  shape average 10000000
  service-policy right-branch
 class vlan101
  shape average 5000000
  service-policy workers
!
interface Gig0/0/0.101
 encapsulation dot1q 101
!
interface Gig0/0/0.106
 encapsulation dot1q 106
!
interface Gig0/0/0.107
 encapsulation dot1q 107
!
interface Gig0/0/0
 service-policy GigE

(The right-branch, workers, and guests policy-maps are as defined earlier.)

Note that this is an invalid configuration because the grandparent level is not class-default only.
Multiple service groups
• Multiple service groups per physical interface are allowed
• A given service group is unique per chassis and can not be reused across multiple physical interfaces
• Service group numbers are only locally significant
Scalability
• Up to 1000 service-groups per system
• Up to 1000 members of a single service group
• Up to 8000 sub-interfaces and service instances can be members
of a service group per system

New commands available
• show running-config service-group
• show service-group {service-group-identifier | all}
• show service-group interface type number
• show service-group stats
• show service-group state
• show service-group traffic-stats
• show policy-map target service-group {service-group-identifier}
• show ethernet service instance detail
• clear service-group traffic-stats
New internal commands available
• debug service-group {all | error | feature | group | interface | ipc | member | qos | stats}
• show platform software interface {rp | fp} active [brief]
• show platform hardware qfp active interface platform {service-group if name}
• show platform hardware qfp active feature qos interface-string {service-group if name}

Acquire the service-group if name with the show platform software interface command.
Shaper parameters ignored
Shaper parameters ignored
IOSXE(config-pmap-c)#shape average ?
<8000-10000000000> Target Bit Rate (bits/sec).
percent % of interface bandwidth for Committed information rate

IOSXE(config-pmap-c)#shape average 200000000 ?


<32-800000000> bits per interval, sustained. Recommend not to configure, algo finds the best v
account Overhead Accounting
<cr>

IOSXE(config-pmap-c)#shape average 200000000 100000 ?


<0-154400000> bits per interval, excess.
account Overhead Accounting
<cr>

• IOS XE schedulers ignore the bc and be parameters given with the shape command. Any special tuning done for Classic IOS shapers will be lost when brought to IOS XE (see the sketch below).
• Policer bc and be parameters are used, and the behavior is the same as Classic IOS.
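A small migration sketch (hypothetical rates; the bc and be values are illustrative):

Classic IOS, with tuned bc/be:
 shape average 200000000 100000 50000

IOS XE, where any bc/be given are accepted on the CLI but ignored:
 shape average 200000000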
Priority versus PAK_PRI
What is PAK_PRI
• Internally generated packets considered so important that they are
always “no drop”
• Associated with protocols where reliable delivery is considered
highly desirable
• Not all packets for a given protocol are considered PAK_PRI

How is PAK_PRI handled
• PAK_PRI packets will show up in QoS class classification stats but
will be queued in the interface default queues (non-ATM)
• Classification and queuing stats won’t match
• ATM interfaces have a default queue on a per VC basis

• PAK_PRI packets are never subject to dropping
  • Only if physical packet memory is exhausted will PAK_PRI be dropped
  • 5% of packet memory is reserved for PAK_PRI packets only

• PAK_PRI packets are not treated with LLQ unless classified into a priority class, in which case they will actually move through a low latency queue (one approach is sketched below)
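One possible approach, sketched with a hypothetical class-map (this assumes the control traffic of interest is CS6 marked, which is common for routing protocol hellos but not guaranteed for every PAK_PRI protocol):

class-map match-any NETWORK-CONTROL
 match dscp cs6
!
policy-map child
 class NETWORK-CONTROL
  priority level 2
  police cir percent 5
! PAK_PRI packets matching this class now move through a low latency queue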

Interface default queue for PAK_PRI
• PAK_PRI packets may classify into user classes and be counted there, but they are always queued in the interface class-default queue.

policy-map child
 class AF4
  bandwidth 20000
 class class-default
!
policy-map parent
 class class-default
  shape average 30000000
  service-policy child
!
interface GigabitEthernet0/0/0
!
interface GigabitEthernet0/0/0.2
 service-policy output parent

[Diagram: the sub-interface parent and child queues hang off the interface schedule under the SIP root; PAK_PRI traffic bypasses them and lands in the interface class-default queue.]
PAK_PRI protocols

Layers 1 and 2
• ATM Address Resolution Protocol Negative Acknowledgement (ARP NAK)
• ATM ARP requests
• ATM host ping operations, administration and management cell (OA&M)
• ATM Interim Local Management Interface (ILMI)
• ATM OA&M
• ATM ARP reply
• Cisco Discovery Protocol
• Dynamic Trunking Protocol (DTP)
• Ethernet loopback packet
• Frame Relay End2End Keepalive
• Frame Relay inverse ARP
• Frame Relay Link Access Procedure (LAPF)
• Frame Relay Local Management Interface (LMI)
• Hot standby Connection-to-Connection Control packets (HCCP)
• High-Level Data Link Control (HDLC) keepalives
• Link Aggregation Control Protocol (LACP) (802.3ad)
• Port Aggregation Protocol (PAgP)
• PPP keepalives
• Link Control Protocol (LCP) Messages
• PPP LZS-DCP
• Serial Line Address Resolution Protocol (SLARP)
• Some Multilink Point-to-Point Protocol (MLPP) control packets (LCP)

IPv4 Layer 3
• Protocol Independent Multicast (PIM) hellos
• Interior Gateway Routing Protocol (IGRP) hellos
• OSPF hellos
• EIGRP hellos
• Intermediate System-to-Intermediate System (IS-IS) hellos, complete sequence number PDU (CSNP), PSNP, and label switched paths (LSPs)
• ESIS hellos
• Triggered Routing Information Protocol (RIP) Ack
• TDP and LDP hellos
• Resource Reservation Protocol (RSVP)
• Some L2TP control packets
• Some L2F control packets
• GRE IP Keepalive
• IGRP CLNS
• Bidirectional Forwarding Protocol (BFD)

IPv6 Layer 3
• Miscellaneous protocols

This list is not considered to be complete nor exhaustive and is subject to change in various versions of software.
Fair queue scaling
Scaled fair-queue configurations tax resources
• On ASR1000 platforms using the 2nd generation QFP ASIC, scaled fair-queue configurations can tax a resource called LEAF:STEM connections.
  • ESP100, ESP200, ASR1001-X, ASR1001-HX, ASR1002-X, ASR1002-HX
• The more classes you have with fair-queue in a given policy-map, the more of the resource will be consumed.

Total classes   Classes with fair-queue   Number of times applied   Percent of LEAF:STEM free
(ASR1001-X)
4               1                         750                       12%
4               2                         400                       24%
4               3                         300                       22%
8               3                         200                       30%
8               4                         200                       24%
Scaled fair-queue configurations tax resources
• Target is to keep “remaining” resources above 20% in order to allow
headroom during configuration changes.
• Usage can be observed with this command:
ASR1001-X# show plat hard qfp active infrastructure bqs sorter memory available
CPP 0; Subdev 0
Level:Class Total Available Remaining Utilization
============= ====== ========= ========= ===========
ROOT:ROOT 16 16 100% 100%
ROOT:BRANCH 256 256 100% 100%
ROOT:STEM 1024 1006 98% 99%
HIER:ROOT 1024 1016 99% 100%
HIER:BRANCH 16384 16247 99% 100%
HIER:STEM:INT 27632 25616 92% 100%
HIER:STEM:INTL2 1024 1024 100% 100%
HIER:STEM:EXT 36848 28468 77% 99%
LEAF:ROOT 2304 2304 100% 100%
LEAF:BRANCH 36864 36864 100% 100%
LEAF:STEM 147440 18676 12% 99%

Classic to IOS XE QoS behavior differences
Changes in behavior – Classic to IOS XE

Priority (naked)
• Classic IOS: Can consume up to 99% of parent bandwidth, up to the min guarantee of the parent.
• IOS XE: Can consume up to 100% of parent bandwidth; class-default and user classes can be completely starved. No parent min guarantee limit.
• Both: Cannot be used in the same policy-map with bandwidth.

priority oversubscription
• Both: Allowed via tunnel / sub-interface config, not in a given policy-map.

Police (with or without priority), police cir percent X
• Both: Percent rate is based on the parent’s maximum rate.

Conditional priority, priority percent X
• Classic IOS: Rate is based on the lesser of max or min for the parent. Policer is activated if the parent node or physical interface is congested.
• IOS XE: Rate is based on the max rate of the parent. Policer is activated if the parent node or physical interface is congested.
Changes in behavior – Classic to IOS XE

Unallocated bandwidth (min configuration)
• Classic IOS: All granted to class-default, or spread proportionally if class-default has a minimum configured.
• IOS XE: Equally spread amongst all non-priority classes.

bandwidth
• Classic IOS: Can be used at any level.
• IOS XE: Can only be used at the leaf level of a hierarchy.

bandwidth remaining percent X (excess configuration)
• Classic IOS: Rate is based on the lesser of the parent’s maximum or minimum rate.
• IOS XE: Rate is based on the parent’s maximum rate minus any priority class minimums satisfied.

Unallocated bandwidth remaining percent X (excess configuration)
• Both: Allocated equally amongst classes with no excess configuration. If all classes have excess config, granted proportionally amongst all excess configured classes.
Changes in behavior – Classic to IOS XE

Multiple policy-maps in egress path (MPOL)
• Classic IOS: Supported. Queuing policy-map with classification on the tunnel and then again on the egress physical interface.
• IOS XE: Generally no. Certain restricted configurations are supported with a hierarchical policy-map on sub-interfaces, tunnels, etc., with a class-default only shaper on the main-interface.

shape burst parameters
• Classic IOS: Defaults to 1 second of bursting. bc and be parameters are adjustable.
• IOS XE: Defaults to 1 ms of burst time. bc and be parameters are accepted on the CLI but have no effect on operation.

fair-queue scalability
• Classic IOS: Up to 4096 fair queues per class, configurable.
• IOS XE: 16 fair queues per class. Scalability not configurable.
Platform specific caveats
QFP queue distribution for ESP100 and ESP200
[Diagram: queue distribution across QFP 0–3 for an ASR1006 or ASR1009-X with ESP100 & SIP40 / ELC, an ASR1013 with ESP100 & SIP40 / ELC, and an ASR1013 with ESP200 & SIP40 / ELC.]
QFP queue distribution for ESP100 and ESP200
[Diagram: queue distribution across QFP 0–1 and QFP 2–3 for the ASR1002-HX, the ASR1006-X with ESP100 and SIP40, and the ASR1009-X with ESP100 and SIP40.]
ASR1000 ESP specific QoS specifications
Card / Chassis Packet memory Max Queues TCAM
ASR 1001-X 512 MB 16,000 10 Mb
ASR 1002-X 512 MB 116,000 40 Mb
ASR1001-HX 512 MB 116,000 40 Mb
ASR 1002-HX 1G (512 MB x 2) 232,000 * 80 Mb (40 Mb x 2)
ESP-20 256 MB 128,000 40 Mb
ESP-40 256 MB 128,000 40 Mb
ESP-100 1G (512 MB x 2) 232,000 * 80 Mb (40 Mb x 2)
ESP-200 2G (512 MB x 4) 464,000 * 80 Mb (40 Mb x 2)

* Queues must be distributed amongst physical locations in the chassis.

ISR4000 QoS specifications
Card / Chassis Packet memory Max Queues TCAM
ISR4321 Variable 8,000 Software lookup
ISR4331 Variable 16,000 * Software lookup
ISR4351 Variable 16,000 * Software lookup
ISR4431 Variable 16,000 * Software lookup
ISR4451 Variable 16,000 * Software lookup

* Current tested maximum; scale could reach 32,000 with additional validation.

4000 node aggregation
• On platforms using the 2nd generation and newer ASICs (1001-X, 1002-X, 1002-HX,
ESP100, ESP200), there is a limit of 4000 children per QoS node
• This difference does not impact 99% of enterprise configurations

• Broadband configurations most likely affected.

• Any location which has more than 4000 children will be split into two nodes that equally divide the children in a random fashion.
• No effect on rate limiting, but there is an impact on bandwidth sharing during congestion

4000 node aggregation
• If the bottommost node in each hierarchy is congested, sharing (excess) must be used to allocate bandwidth.
• A scenario where all sessions sit under one node gives equal sharing amongst all 4000 child nodes.
• Where an intermediate node has been inserted by the split, one or a few very fast sessions under one node can get 1:1 sharing with thousands of slower sessions under the other, because the equal sharing happens at the inserted intermediate node; two fast sessions can take up to 50% of the bandwidth, causing the many sessions under the other node to suffer.
• Workaround: distribute sessions across multiple ethernet sub-interfaces with a bandwidth remaining ratio configuration (sketched below)
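A hedged sketch of that workaround (hypothetical VLANs and ratios; the idea is to weight each sub-interface roughly by its session count so the split nodes share proportionally):

policy-map many-sessions
 class class-default
  bandwidth remaining ratio 40
!
policy-map few-sessions
 class class-default
  bandwidth remaining ratio 10
!
interface GigabitEthernet0/0/0.100
 encapsulation dot1q 100
 service-policy output many-sessions
!
interface GigabitEthernet0/0/0.200
 encapsulation dot1q 200
 service-policy output few-sessions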

Use case migrations
Microbursts – queue limit tuning
New unexplained drops
What’s with all these drops on a better / faster box?
• Previously installed 7200s with QoS worked fine. When we replaced
the boxes with ASR1000s we started to see tons of drops on the
egress WAN interfaces. Average utilization via CLI and SNMP never
shows interface utilization high enough to cause the parent shaper to
drop traffic.
• With all this fancy QoS and packet buffer memory why do we start to
have these problems now?

New unexplained drops
What’s with all these drops on a better / faster box?

policy-map EGRESS
 class VoIP
  priority percent 15
 class PRIORITY_DATA
  bandwidth remaining percent 30
 class MISSION_CRITICAL_DATA
  bandwidth remaining percent 40
 class SCAVENGER_DATA
  bandwidth remaining percent 1
  random-detect
 class class-default
  bandwidth remaining percent 3
!
policy-map SHAPE_730Mbps
 class class-default
  shape average 730000000
  service-policy EGRESS

Class-map: MISSION_CRITICAL_DATA (match-any)
  24718716075 packets, 21181619094739 bytes
  30 sec offered rate 237687000 bps, drop rate 262000 bps
  Match: ip dscp cs3
  Queueing
  queue limit 3041 packets
  (queue depth/total drops/no-buffer drops) 0/324178/0
  (pkts output/bytes output) 24718391897/21181154463832
  bandwidth remaining 12%

MISSION_CRITICAL_DATA should always have access to at least 730E6 × 85% × 40% = 248 Mbit/sec. This class is running close to that on a 30 second average, which almost promises that there are going to be microbursts higher than that. Any 10GE interfaces that feed into this 730 Mb/sec uplink would amplify that effect compared to GigE interfaces.
Microbursts
What’s with all these drops on a better / faster box?
• Newer platforms can forward traffic truly at line rate
  • No delays from ingress buffering
  • No delays in packet processing, with multiple cores servicing traffic
• Older platforms were single threaded and processed packets more or less
serially compared to in parallel on the newer boxes
• When bursty traffic arrived on older platforms, those bursts were smoothed
in the forwarding phase by the ingress buffering and serial processing
before hitting egress QoS queuing code.
• Newer platforms do not have those “smoothers” and need more egress
buffering to compensate. Sometimes the default buffers of 50ms just are
not enough.

Microbursts
What’s with all these drops on a better / faster box?
Fix is 3 fold
1. Realize that the function of QoS is not to preserve all packets.
It is to prioritize traffic, rate limit, and provide minimum
guarantees under congestion
2. Increase queue limits to deal with bursty conditions
Consider the use of time or byte based queue-limits to have true max
latency protection and maximum capacity to absorb bursts
3. Use policer markdown with WRED to tamp down TCP bursting if
applicable
still drops traffic but uses better management

Microbursts – increase queue limits
What’s with all these drops on a better / faster box?

Option 1 – deeper queues (time or packet units):

policy-map EGRESS
 class VoIP
  priority percent 15
 class PRIORITY_DATA
  bandwidth remaining percent 30
  queue-limit 100 ms    (or: queue-limit 6083)
 class MISSION_CRITICAL_DATA
  bandwidth remaining percent 40
  queue-limit 150 ms    (or: queue-limit 9125)
 class SCAVENGER_DATA
  bandwidth remaining percent 1
  random-detect
  queue-limit 200 ms    (or: queue-limit 12166)
 class class-default
  bandwidth remaining percent 3
  queue-limit 200 ms    (or: queue-limit 12166)
!
policy-map SHAPE_730Mbps
 class class-default
  shape average 730000000
  service-policy EGRESS

Option 2 – deeper queues plus policer markdown with WRED:

policy-map EGRESS
 class VoIP
  priority percent 15
 class PRIORITY_DATA
  bandwidth remaining percent 30
  queue-limit 6083
 class MISSION_CRITICAL_DATA
  bandwidth remaining percent 40
  queue-limit 9125
 class SCAVENGER_DATA
  bandwidth remaining percent 1
  queue-limit 12166
  random-detect
  police cir 7300000 pir 14600000
   conform-action transmit
   exceed-action set-dscp-transmit af12
   violate-action set-dscp-transmit af13
 class class-default
  bandwidth remaining percent 3
  queue-limit 12166

If these deep queues are constantly full, you have a bandwidth problem and not a bursting problem.
Bandwidth configuration behavior
Bandwidth configurations
• Due to the 2 vs 3 parameter scheduler changes, minor tweaks are necessary to keep the same behavior with excess sharing

Classic IOS:

policy-map child
 class voice
  priority percent 15
 class BUSINESS_APP_1
  bandwidth percent 30
 class BUSINESS_APP_2
  bandwidth percent 40
 class SCAVENGER
  bandwidth percent 1
  random-detect
 class class-default
  bandwidth percent 3
!
policy-map PARENT
 class class-default
  shape average 50000000
  service-policy child

IOS XE equivalent:

policy-map child
 class voice
  priority percent 15
 class BUSINESS_APP_1
  bandwidth remaining percent 30
 class BUSINESS_APP_2
  bandwidth remaining percent 40
 class SCAVENGER
  bandwidth remaining percent 1
  random-detect
 class class-default
  bandwidth remaining percent 3
!
policy-map PARENT
 class class-default
  shape average 50000000
  service-policy child
Conclusion
• Multicore forwarding is great, but…
  • It exposes your network to bursty traffic. Be armed with the knowledge to explain it and the tools to handle it.
• More is better, with parameters that is…
  • Just be sure to understand how excess bandwidth is shared, evenly vs. proportionally, between configurations
• All these rules…
  • With hardware forwarding comes a bit more rigidity. Know which use cases are covered, where the gaps are, and who to yell at to cover use cases that are missing
• Have QoS and Etherchannel make nice
  • It is finally possible to do aggregate queueing across flow based load balanced interfaces.
Cisco Webex Teams
Questions? Use Cisco Webex Teams (formerly Cisco Spark) to chat with the speaker after the session

How
1 Find this session in the Cisco Events App
2 Click “Join the Discussion”
3 Install Webex Teams or go directly to the team space
4 Enter messages/questions in the team space

Webex Teams will be moderated by the speaker until June 18, 2018.
cs.co/ciscolivebot#BRKACC-2031
Complete your online session evaluation
• Give us your feedback to be entered into a Daily Survey Drawing. Complete your session surveys through the Cisco Live mobile app or on www.CiscoLive.com/us.
• Don’t forget: Cisco Live sessions will be available for viewing on demand after the event at www.CiscoLive.com/Online.
Continue your education
• Demos in the Cisco campus
• Walk-in self-paced labs
• Meet the engineer 1:1 meetings
• Related sessions
Thank you
