Quality of Service
IP packets have a field called the Type of Service field (also known as the TOS byte). The original idea behind the TOS byte was that we could specify a priority and request a route with high throughput, low delay and high reliability.
The TOS byte was defined back in 1981 in RFC 791, but the way we use it has changed throughout the years. This makes it confusing to understand since there is a lot of terminology and some of it is no longer used nowadays. In this tutorial I’ll explain everything there is to know about the TOS byte, IP precedence and DSCP values.
Let’s take a look at the TOS byte:
Above you see the IP header with all its fields, including the TOS byte.
Don’t mix up TOS (Type of Service) and CoS (Class of Service). The first one is found in the header of an IP packet (layer 3) and the second one is found in the 802.1Q header (layer 2), where it’s used for Quality of Service on trunk links…
So what does this byte look like? We’ll have to take some history lessons here…
IP Precedence
In the beginning the 8 bits of the TOS byte were defined like this:
The first 3 bits are used to define a precedence. The higher the value, the more important the IP packet is; in case of congestion, the router will drop the low priority packets first. The type of service bits are used to specify what kind of delay, throughput and reliability we want.
It’s somewhat confusing that we have a type of service “byte” and that bits 3-7 are called the type of service “bits”. Don’t mix them up, these are two different things.
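As a quick sketch (illustrative Python, not from the RFC itself), splitting the byte into these two parts looks like this:

```python
# Split an RFC 791 TOS byte: the first 3 bits are the precedence,
# the remaining 5 bits are the type of service bits.
def split_tos_byte(tos):
    precedence = tos >> 5        # top three bits
    tos_bits = tos & 0b11111     # bottom five bits
    return precedence, tos_bits

# Precedence 5 (critical) with all type of service bits cleared:
print(split_tos_byte(0b10100000))  # (5, 0)
```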
Precedence:
000 Routine
001 Priority
010 Immediate
011 Flash
100 Flash Override
101 Critic/Critical
110 Internetwork Control
111 Network Control
Type of Service:
This is what they came up with in 1981 but the funny thing is that the “type of
service” bits that specify delay, throughput and reliability have never really been
used. Only the precedence bits are used to assign a priority to the IP packets.
About 10 years later, in 1992, RFC 1349 was published, which changed the definition of the TOS byte to look like this:
The first 3 precedence bits remain unchanged but the type of service bits have
changed. Instead of 5 bits, we now only use 4 bits to assign the type of service and the final bit is called MBZ (Must Be Zero). This bit isn’t used; the RFC says it is only meant for experiments and routers will ignore it. The type of service bits now look like this:
Differentiated Services
The year is 1998 and 6 years have passed since the last changes to the TOS byte. RFC 2474 is created, which describes a different TOS byte. The TOS byte gets a new name and is now called the DS field (Differentiated Services), and the 8 bits have changed as well. Here’s what it looks like now:
The first 6 bits of the DS field are used to set a codepoint that will affect the PHB (Per Hop Behavior) at each node. The codepoint is also what we call the DSCP value. Let me rephrase this in plain English…
The codepoint is similar to the precedence that we used in the TOS byte; it’s used to set a certain priority.
PHB is another fancy term that we haven’t seen before, it requires some more
explanation. Imagine we have a network with 3 routers in a row, something like
this:
Above we have two phones and 3 routers. When we configure QoS to prioritize the VoIP packets, we have to do it on all devices. If R1 and R3 are configured to prioritize VoIP packets while R2 treats them as any other IP packets, we can still experience issues with the quality of our phone call when there is congestion on R2.
To make QoS work, it has to be configured end-to-end. All devices in the path
should prioritize the VoIP packets to make it work. There are two methods to do
this:
Use reservations, each device in the network will “reserve” bandwidth for the phone call that
we are about to make.
Configure each device separately to prioritize the VoIP packets.
Making a reservation sounds like a good idea since you can guarantee that we can make the phone call. However, it’s not a very scalable solution since you have to make a reservation for each phone call that you want to make. And what if one of the routers loses its reservation information? The idea of using reservations to enforce end-to-end QoS is called IntServ (Integrated Services).
The opposite of IntServ is DiffServ (Differentiated Services) where we configure
each device separately to prioritize certain traffic. This is a scalable solution since
the network devices don’t have to exchange and remember any reservation information. Just make sure that you configure each device correctly and that’s it…
With 6 bits for codepoints we can create a lot of different priorities…in theory, there
are 64 possible values that we can choose from.
The idea behind PHB (Per Hop Behavior) is that packets that are marked with a
certain codepoint will receive a certain QoS treatment (for example
queuing, policing or shaping). Throughout the years, there have been some changes
to the PHBs and how we use the codepoints. Let’s walk through all of them…
Default PHB
The default PHB means that we have a packet that is marked with a DSCP value of
000000. This packet should be treated as “best effort”.
Class-Selector PHB
There was a time when some older network devices would only support IP
precedence and newer network devices would use differentiated services. To make
sure the two are compatible, we have the class-selector codepoints. Here’s what it
looks like:
We only use the first three bits, just like we did with IP precedence. Here is a list of
the possible class-selector codepoints that we can use:
As you can see, CS1 is the same as "priority" and CS4 is the same as "flash
override". We can use this for compatibility between the "old" TOS byte and the
"new" DS field.
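To see this compatibility for yourself, here’s a small illustrative Python sketch: a class selector codepoint is simply the precedence value followed by three zero bits.

```python
# A class selector codepoint is the IP precedence value shifted into
# the top three bits of the 6-bit DSCP field (the other bits are zero).
def class_selector(precedence):
    return precedence << 3

for p in range(8):
    print(f"CS{p} = {class_selector(p):06b} (decimal {class_selector(p)})")
# CS1 = 001000 (decimal 8) ... CS5 = 101000 (decimal 40), and so on.
```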
The default PHB and these class-selector PHBs are both described in RFC 2474 from
1998.
Assured Forwarding PHB
About a year later, RFC 2597 arrives that describes assured forwarding. The AF
(Assured Forwarding) PHB has two functions:
1. Queueing
2. Congestion Avoidance
There are 4 different classes and each class will be placed in a different queue. Within each class there is also a drop probability. When the queue is full, packets with a "high drop" probability will be dropped from the queue before the other packets. In total there are 3 levels of drop precedence. Here's what the DS field looks like:
The first 3 bits are used to define the class and the next 3 bits are used to define
the drop probability. Here are all the possible values that we can use:
Class 4 has the highest priority. For example, any packet from class 4 will always get
better treatment than a packet from class 3.
Some vendors prefer to use decimal values instead of AF11, AF32, etc. A quick way to convert the AF value to a decimal value is by using the 8x + 2y formula, where x = class and y = drop probability. For example, AF31 in decimal is 8 x 3 + 2 x 1 = 26.
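The formula from the note above is easy to verify with a few lines of illustrative Python:

```python
# Convert an AF class/drop-probability pair to its decimal DSCP value.
# The DSCP bits are: class (3 bits), drop probability (2 bits), zero.
def af_to_decimal(af_class, drop):
    return 8 * af_class + 2 * drop  # same as (af_class << 3) | (drop << 1)

print(af_to_decimal(3, 1))  # AF31 -> 26
print(af_to_decimal(1, 1))  # AF11 -> 10
print(af_to_decimal(4, 3))  # AF43 -> 38
```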
Expedited Forwarding
The EF (Expedited Forwarding) PHB also has two functions:
1. Queueing
2. Policing
The goal of expedited forwarding is to put packets in a queue where they experience minimal delay and packet loss. This is where you want the packets of your real-time applications (like VoIP) to be. To enforce this we use something called a priority queue. Whenever there are packets in the priority queue, they will be sent before all other queues. This is also a risk: there's a chance that the other queues won't get a chance to send their packets, so we need to set a "rate limit" for this queue. This is done with policing.
The DSCP value is normally called "EF" and in binary it is 101110, the decimal value
is 46.
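A quick illustrative check of that binary value:

```python
ef = 0b101110   # the EF codepoint
print(ef)       # 46
print(ef >> 3)  # 5 -> the first three bits equal IP precedence 5 (critical)
```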
The real world
You should now have a good understanding of the difference between IP precedence and DSCP values. It's quite a long story, right?
There's one thing that I should mention. We talked a lot about PHB (Per Hop Behavior) and the word
"behavior" makes it sound like when you use a certain DSCP value, the router will automatically
queue, police or drop the packets. The funny thing is that your router won't do anything! We have to
configure the "actions" that the router will perform ourselves...
We have a lot of different values that we can use for the TOS byte: IP precedence, CS, AF and EF. So what do we really use on our networks?
The short answer is that it really depends on the networking vendor. IP Precedence
value 5 or DSCP EF is normally used for voice traffic while IP precedence value 3 or
DSCP CS3 or AF31 is used for call signaling.
See if your networking vendor has a Quality of Service design guide; they usually do, and these give you some examples of what values you should use.
I hope this tutorial has been helpful to understand the TOS byte, IP Precedence and
DSCP. If you have any questions feel free to leave a comment.
QoS Classification on Cisco IOS Router
On most networks you will see a wide range of applications, each application is
unique and has its own requirements when it comes to bandwidth, delay, jitter, etc.
For example, an FTP application used for backups of large files might require a lot
of bandwidth but delay and jitter won’t matter since it’s not an interactive
application.
Voice over IP on the other hand doesn’t require much bandwidth but delay and
jitter are very important. When your delay is too high your calls will become walkie-
talkie conversations and jitter screws up the sound quality.
To make sure each application gets the treatment that it requires we have to
implement QoS (Quality of Service).
The first step when implementing QoS is classification, that’s what this tutorial is all
about.
By default your router doesn’t care what kind of IP packets it is forwarding…the only
important thing is looking at the destination IP address, doing a routing table
lookup and whoosh…the IP packet has been forwarded.
Before we can configure any QoS methods like queuing, policing or shaping we
have to look at the traffic that is running through our router and identify (classify)
it so we know to which application it belongs. That’s what classification is about.
Once the traffic has been classified, we will mark it and apply a QoS policy to it.
Marking and configuring QoS policies are a whole different story so in this tutorial
we’ll just stick to classification.
On IOS routers there are a couple of methods we can use for classification:
Header inspection
Payload inspection
There are quite a few fields in our headers that we can use to classify applications.
For example, telnet uses TCP port 23 and HTTP uses TCP port 80. Using header
inspection you can look for:
Layer 2: MAC addresses
Layer 3: source and destination IP addresses
Layer 4: source and destination port numbers and protocol
This is a really simple method of classification that works well but has some downsides. For example, you can configure your router so that everything that uses TCP and destination port number 80 is “HTTP”, but it’s possible that some other applications (instant messaging for example) are also using TCP port 80. Your router will then perform the same action for IM and HTTP traffic.
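As a purely illustrative sketch (the rule set and class names here are made up, not Cisco's), header-based classification boils down to a lookup on header fields:

```python
# Toy header-based classifier: match on transport protocol and
# destination port, the way an access-list based class-map would.
RULES = {
    ("tcp", 23): "TELNET",
    ("tcp", 80): "HTTP",
}

def classify(protocol, dst_port):
    # Anything that doesn't match a rule falls into the default class.
    return RULES.get((protocol, dst_port), "class-default")

print(classify("tcp", 23))  # TELNET
print(classify("tcp", 80))  # HTTP (but IM over port 80 matches too!)
print(classify("udp", 53))  # class-default
```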
Configuration
We’ll start with a simple example where I use an access-list to classify some telnet
traffic. Here’s the topology that I will use:
R1 will be our telnet client and R2 the telnet server. We will classify the packets when they arrive at R2. Let’s look at the configuration!
R2(config)#ip access-list extended TELNET
R2(config-ext-nacl)#permit tcp any any eq telnet
This will match on all IP packets that use TCP as the transport protocol and destination port 23. Normally when you configure an access-list for filtering, we apply it to the interface. When configuring QoS we have to use the MQC (Modular Quality of Service Command-Line Interface). The name is pretty spectacular but it’s a really simple method to configure QoS.
We use something called a policy-map where we configure the QoS actions we
want to perform…marking, queueing, policing, shaping, etc. These actions are
performed on a class-map, and that’s where we specify the traffic. Let me show you
how this is done:
R2(config)#class-map TELNET
R2(config-cmap)#match ?
access-group Access group
any Any packets
class-map Class map
cos IEEE 802.1Q/ISL class of service/user
priority values
destination-address Destination address
discard-class Discard behavior identifier
dscp Match DSCP in IP(v4) and IPv6 packets
flow Flow based QoS parameters
fr-de Match on Frame-relay DE bit
fr-dlci Match on fr-dlci
input-interface Select an input interface to match
ip IP specific values
mpls Multi Protocol Label Switching specific
values
not Negate this match result
packet Layer 3 Packet length
precedence Match Precedence in IP(v4) and IPv6 packets
protocol Protocol
qos-group Qos-group
source-address Source address
vlan VLANs to match
I created a class-map called “TELNET” and when you create a class-map you have a lot of options. On top you see access-group, which uses an access-list to classify the traffic; that’s what I will use. Some other nice methods are the input interface, frame-relay DLCI values, packet length, etc. The simplest option is probably the access-list:
R2(config-cmap)#match access-group name TELNET
My class-map called “TELNET” now matches traffic that is specified in the access-list called “TELNET”.
Now we can create a policy-map and refer to our class-map:
R2(config)#policy-map CLASSIFY
R2(config-pmap)#class TELNET
The policy-map is called “CLASSIFY” and the class-map called “TELNET” belongs to it.
Normally this is where I would also specify the QoS action like marking, queueing, etc. I’m not configuring any action right now since this tutorial is only about classification.
That’s it, our router can now classify telnet traffic. Let’s try it by telnetting from R1 to
R2:
R1#telnet 192.168.12.2
Trying 192.168.12.2 ... Open
Great! Our router sees the telnet traffic that arrives on the FastEthernet 0/0
interface. You can see the name of the policy-map, the class-map and the access-list
that we used. Something that you should remember is that all traffic that is not
specified in a class-map will hit the class-default class-map. Not too bad right? Let’s
see if we can also make this work with NBAR…
Classification with NBAR
The configuration of NBAR is quite easy. First let me show you a simple example of NBAR where it shows us all the traffic that is flowing through an interface:
R2(config)#interface FastEthernet 0/0
R2(config-if)#ip nbar protocol-discovery
Now you can view all the traffic that is flowing through the interface with the show ip nbar protocol-discovery command:
FastEthernet0/0
Input Output
----- ------
Protocol Packet Count Packet Count
Byte Count Byte Count
5min Bit Rate (bps) 5min Bit Rate
(bps)
5min Max Bit Rate (bps) 5min Max Bit
Rate (bps)
------------------------ ------------------------
------------------------
telnet 8 7
489 457
0 0
0 0
unknown 3 2
180 120
0 0
0 0
Total 11 9
669 577
0 0
0 0
I don't have a lot going on on this router but telnet is there. This is a nice way to see
the different traffic types on your interface but if we want to use this information
for QoS we have to put NBAR in a class-map. Here's how:
R2(config)#class-map NBAR-TELNET
R2(config-cmap)#match protocol ?
3com-amp3 3Com AMP3
3com-tsmux 3Com TSMUX
3pc Third Party Connect Protocol
914c/g Texas Instruments 914 Terminal
9pfs Plan 9 file service
CAIlic Computer Associates Intl License Server
Konspire2b konspire2b p2p network
acap ACAP
acas ACA Services
accessbuilder Access Builder
accessnetwork Access Network
acp Aeolon Core Protocol
acr-nema ACR-NEMA Digital Img
aed-512 AED 512 Emulation service
agentx AgentX
alpes Alpes
aminet AMInet
an Active Networks
anet ATEXSSTR
ansanotify ANSA REX Notify
ansatrader ansatrader
aodv AODV
[output omitted]
I created a class-map called "NBAR-TELNET" and when I use match protocol you
can see there's a long list of supported applications. I'm not going to show all of it
but telnet is in there somewhere:
R2(config-cmap)#match protocol telnet
That's how we use NBAR in a class-map. Now we need to add this class-map to the
policy-map:
R2(config)#policy-map CLASSIFY
R2(config-pmap)#no class TELNET
R2(config-pmap)#class NBAR-TELNET
I'll remove the old class-map with the access-list and add the new class-map to our
policy-map.
I showed you how you can use the ip nbar protocol-discovery command, it's a great way to see the
traffic on the interface but it's not a requirement for NBAR to work in a class-map. Using "match
protocol" in the class-map is enough for NBAR to work.
The output is pretty much the same as when I used the access-list, but the "match protocol telnet" line reveals that we are using NBAR for classification this time.
That's all I have for now! I hope this tutorial helps you to understand classification,
in other tutorials I will show you how to let your policy-map do something...things
like queueing, marking, shaping or policing. If you have any questions feel free to
leave a comment.
In this tutorial we’ll take a look at marking packets. Marking means that we set the
TOS (Type of Service) byte with an IP Precedence value or DSCP value. If you have
no idea what precedence or DSCP is about then you should read my IP Precedence
and DSCP value tutorial first. I’m also going to assume that you understand
what classification is, if you don’t…read my classification tutorial first.
Marking on a Cisco Catalyst switch is a bit different than on a router; if you want to know how to configure marking on your Cisco switch then look at this tutorial.
Having said that, let’s take a look at the configuration!
Configuration
I will use three routers to demonstrate marking, connected like this:
I will send some traffic from R1 to R3 and we will use R2 to mark our traffic. We’ll
keep it simple and start by marking telnet traffic.
R2(config)#class-map TELNET-TRAFFIC
R2(config-cmap)#match access-group name TELNET-TRAFFIC
R2(config)#policy-map MARKING
R2(config-pmap)#class TELNET-TRAFFIC
R2(config-pmap-c)#set ?
atm-clp Set ATM CLP bit to 1
cos Set IEEE 802.1Q/ISL class of service/user priority
cos-inner Set Inner CoS
discard-class Discard behavior identifier
dscp Set DSCP in IP(v4) and IPv6 packets
fr-de Set FR DE bit to 1
ip Set IP specific values
mpls Set MPLS specific values
precedence Set precedence in IP(v4) and IPv6 packets
qos-group Set QoS Group
vlan-inner Set Inner Vlan
There are quite a few options for the set command. When it comes to IP packets we’ll use the precedence or DSCP values. Let’s start with precedence:
R2(config-pmap-c)#set precedence ?
<0-7> Precedence value
cos Set packet precedence from L2 COS
critical Set packets with critical precedence (5)
flash Set packets with flash precedence (3)
flash-override Set packets with flash override precedence (4)
immediate Set packets with immediate precedence (2)
internet Set packets with internetwork control precedence
(6)
network Set packets with network control precedence (7)
priority Set packets with priority precedence (1)
qos-group Set packet precedence from QoS Group.
routine Set packets with routine precedence (0)
For this example it doesn’t matter much what we pick. Let’s go for IP precedence 7 (network):
R2(config-pmap-c)#set precedence network
That’s all there is to it. Let’s see if it works….I’ll telnet from R1 to R3:
R1#telnet 192.168.23.3
Trying 192.168.23.3 ... Open
That’s looking good! 10 packets have been marked with precedence 7. That’s not
too bad right?
Let’s see if we can also mark some packets with a DSCP value, let’s mark some HTTP
traffic:
Create a class-map:
R2(config)#class-map HTTP-TRAFFIC
R2(config-cmap)#match access-group name HTTP-TRAFFIC
R2(config)#policy-map MARKING
R2(config-pmap)#class HTTP-TRAFFIC
R2(config-pmap-c)#set dscp ?
<0-63> Differentiated services codepoint value
af11 Match packets with AF11 dscp (001010)
af12 Match packets with AF12 dscp (001100)
af13 Match packets with AF13 dscp (001110)
af21 Match packets with AF21 dscp (010010)
af22 Match packets with AF22 dscp (010100)
af23 Match packets with AF23 dscp (010110)
af31 Match packets with AF31 dscp (011010)
af32 Match packets with AF32 dscp (011100)
af33 Match packets with AF33 dscp (011110)
af41 Match packets with AF41 dscp (100010)
af42 Match packets with AF42 dscp (100100)
af43 Match packets with AF43 dscp (100110)
cos Set packet DSCP from L2 COS
cs1 Match packets with CS1(precedence 1) dscp (001000)
cs2 Match packets with CS2(precedence 2) dscp (010000)
cs3 Match packets with CS3(precedence 3) dscp (011000)
cs4 Match packets with CS4(precedence 4) dscp (100000)
cs5 Match packets with CS5(precedence 5) dscp (101000)
cs6 Match packets with CS6(precedence 6) dscp (110000)
cs7 Match packets with CS7(precedence 7) dscp (111000)
default Match packets with default dscp (000000)
ef Match packets with EF dscp (101110)
qos-group Set packet dscp from QoS Group.
There is one thing left I'd like to share with you. Some network devices like switches
or wireless controllers sometimes re-mark traffic, this can be a pain and it's
something you might want to check. On a Cisco IOS router it's simple to do
this...just create a policy-map and some class-maps that match on your precedence
or DSCP values. This allows you to quickly check if you are receiving (correctly)
marked packets or not. Here's what I usually do:
R3(config)#class-map AF12
R3(config-cmap)#match dscp af12
R3(config)#class-map PREC7
R3(config-cmap)#match precedence 7
R3(config)#policy-map COUNTER
R3(config-pmap)#class AF12
R3(config-pmap-c)#exit
R3(config-pmap)#class PREC7
R3(config-pmap-c)#exit
This proves that R3 is receiving our marked packets. In this scenario it's not a
surprise but when you do have network devices that mess with your markings, this
can be a relief to see.
Hopefully you enjoyed this tutorial. If you have any questions, feel free to leave a comment.
In this lesson you will learn about the QoS Pre-classify feature. When you use tunnelling, your Cisco IOS router will do classification based on the outer (post) header, not the inner (pre) header. This can cause issues with QoS policies that are applied to the physical interfaces. I will explain the issue and we will take a look at how we can fix it. Here’s the topology that we will use:
Below is the tunnel configuration, I’m using a static route so that R1 and R3 can
reach each other’s loopback interfaces through the tunnel:
R1(config)#interface Tunnel 0
R1(config-if)#tunnel source 192.168.12.1
R1(config-if)#tunnel destination 192.168.23.3
R1(config-if)#ip address 172.16.13.1 255.255.255.0
R1(config)#ip route 3.3.3.3 255.255.255.255 172.16.13.3
R3(config)#interface Tunnel 0
R3(config-if)#tunnel source 192.168.23.3
R3(config-if)#tunnel destination 192.168.12.1
R3(config-if)#ip address 172.16.13.3 255.255.255.0
R3(config)#ip route 1.1.1.1 255.255.255.255 172.16.13.1
The tunnel is up and running, before we play with classification and service policies,
let’s take a look at the default classification behaviour of Cisco IOS when it comes to
tunnelling…
R1#ping
Protocol [ip]:
Target IP address: 3.3.3.3
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 1.1.1.1
Type of service [0]: 160
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4
ms
This ping between 1.1.1.1 and 3.3.3.3 will go through the tunnel and I marked the TOS byte of this IP packet with 160 (decimal). 160 in binary is 10100000; remove the last two bits and you have our 6 DSCP bits. 101000 in binary is 40 in decimal, which is the same as CS5.
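The arithmetic above in a few lines of illustrative Python:

```python
tos = 160        # the TOS byte we set in the extended ping
dscp = tos >> 2  # drop the last two bits to get the 6 DSCP bits
print(bin(tos))  # 0b10100000
print(dscp)      # 40 -> CS5 (the class selector for precedence 5)
print(dscp >> 3) # 5  -> the precedence part of the codepoint
```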
R1(config)#ip access-list extended TELNET
R1(config-ext-nacl)#permit tcp any any eq telnet
R1(config)#class-map TELNET
R1(config-cmap)#match access-group name TELNET
R1(config)#ip access-list extended GRE
R1(config-ext-nacl)#permit gre any any
R1(config)#class-map GRE
R1(config-cmap)#match access-group name GRE
R1(config)#policy-map POLICE
R1(config-pmap)#class TELNET
R1(config-pmap-c)#police 128000
R1(config-pmap-c-police)#exit
R1(config-pmap-c)#exit
R1(config-pmap)#class GRE
R1(config-pmap-c)#exit
R1(config-pmap)#exit
I’ve added policing for telnet traffic and nothing for GRE. It doesn’t matter what
“actions” we configure here, even without an action the traffic will still be classified
and it will show up in the policy-map. Let’s activate it on the physical interface:
See how it only matches the GRE traffic? We don’t have any matches for the telnet traffic. If this were a real network, it would mean that the telnet traffic never gets policed (or whatever other action you configured). The reason that we don’t see any matches is that Cisco IOS first encapsulates the IP packet and then applies the policy to the GRE traffic. Let me illustrate this:
The blue IP header on top is our original IP packet with telnet traffic, this is
encapsulated and the router adds a GRE header and a new IP header (the red one).
The policy-map is then applied to this outer IP header.
How do we fix this? There are a couple of options…let’s look at the first one!
You can use the qos pre-classify command on the tunnel interface to do this:
R1(config)#interface Tunnel 0
R1(config-if)#qos pre-classify
Let's do another test and we'll see the difference:
R1#clear counters
Clear "show interface" counters on all interfaces [confirm]
R1#telnet 3.3.3.3 /source-interface loopback 0
Trying 3.3.3.3 ... Open
Great! Now we see matches on our telnet traffic so it can be policed if needed. We
don't see any matches on our GRE traffic anymore. Let me visualize what just
happened for you:
When the router encapsulates a packet, it will make a temporary copy of the
header. This temporary copy is then used for the policy instead of the outer header.
When this is done, the temporary copy is destroyed.
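Here's a small, purely illustrative Python sketch of the difference (the packet and class names are made up): by default the classifier only sees the outer header, while qos pre-classify lets it work on a copy of the inner header.

```python
# A GRE-encapsulated telnet packet: the outer header is what the
# physical interface sees, the inner header is the original packet.
packet = {
    "outer": {"protocol": "gre"},
    "inner": {"protocol": "tcp", "dst_port": 23},
}

def classify(header):
    if header.get("protocol") == "tcp" and header.get("dst_port") == 23:
        return "TELNET"
    if header.get("protocol") == "gre":
        return "GRE"
    return "class-default"

print(classify(packet["outer"]))  # GRE    <- default: policy sees outer header
print(classify(packet["inner"]))  # TELNET <- with qos pre-classify
```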
We accomplished this with the qos pre-classify command but there is another
method to get the same result, here's how...
R1(config)#interface Tunnel 0
R1(config-if)#no qos pre-classify
R1(config-if)#service-policy output POLICE
Note that I also removed the qos pre-classify command on the tunnel interface.
Let's give it another try:
R1#clear counters
Clear "show interface" counters on all interfaces [confirm]
R1#telnet 3.3.3.3 /source-interface loopback 0
Trying 3.3.3.3 ... Open
If you enable the policy on the tunnel interface then the router will use the inner
header for classification, just like we saw when we used the qos pre-classify
command on the tunnel interface.
That's all there is to explain. I hope this lesson has been useful to understand the
difference between "outer" and "inner" header classification and how to deal with
this issue.
Why do we need QoS on LAN Switches
Quality of Service (QoS) on our LAN switches is often misunderstood. Every now and then people ask me why we need it since we have more than enough bandwidth, and if we don’t have enough, adding bandwidth on the LAN is easier than on our WAN links. If you use any real-time applications like Voice over IP on your network then you should think about implementing QoS on your switches. Let me show you what could go wrong with our switches. Here’s an example:
Above you see a computer connected to Switch A with a Gigabit interface. Between Switch A and Switch B there’s also a Gigabit interface. Between Switch B and the server there’s only a FastEthernet link. In the picture above the computer is sending 400 Mbps of traffic towards the server. Of course the FastEthernet link only has a bandwidth of 100 Mbps, so traffic will be dropped. Another example of traffic drops on our switches is something that might occur on Monday morning when all your users are logging in at the same time. Let me show you a picture:
It's impossible to fix these problems just by adding more bandwidth. By adding more bandwidth you can reduce how often congestion happens, but you can't prevent it completely. A lot of data applications will try to consume as much bandwidth as possible, so if the aggregated traffic rate exceeds the capacity of one of your uplink ports you will see congestion.
By configuring QoS we can tell our switches what traffic to prioritize in case of
congestion. When congestion occurs the switch will keep forwarding voice over IP
traffic (up to a certain level that we configure) while our data traffic will be dropped.
In short, bandwidth is not a replacement for QoS. Using QoS we can ensure that real-time applications keep working despite (temporary) congestion.
When we configure QoS on our Cisco switches we need to think about our trust boundary. Simply put, this means deciding at which device we are going to trust the marking of the packets and Ethernet frames entering our network. If you are using IP phones you can use those for marking and configure the switch to trust the traffic from the IP phone. If you don’t have any IP phones, or you don’t trust them, we can configure the switch to do the marking as well. In this article I’ll show you how to do both! First let me show you the different QoS trust boundaries:
In the picture above the trust boundary is at the Cisco IP phone, this means that we
won’t remark any packets or Ethernet frames anymore at the access layer switch.
The IP phone will mark all traffic. Note that the computer is outside of the QoS trust
boundary. This means that we don’t trust the marking of the computer. We can
remark all its traffic on the IP phone if we want. Let’s take a look at another picture:
In the picture above we don’t trust whatever marking the IP phone sends to the
access layer switch. This means we’ll do classification and marking on the access
layer switches. I have one more example for you…
Above you can see that we don’t trust anything before the distribution layer
switches. This is something you won’t see very often but it’s possible if you don’t
trust your access layer switches. Maybe someone else manages the access layer switches and you want to prevent them from sending packets or Ethernet frames that are marked towards your distribution layer switches.
Let’s take a look at a switch to see how we can configure this trust boundary. I have
a Cisco Catalyst 3560 that I will use for these examples. Before you do anything with
QoS, don’t forget to enable it globally on your switch first:
3560Switch(config)#mls qos
Something you need to be aware of is that as soon as you enable QoS on your
switch it will erase the marking of all packets that are received! If you don’t want
this to happen you can use the following command:
3560Switch(config)#no mls qos rewrite ip dscp
Let’s continue by looking at the first command. We can take a look at the QoS settings for the interface with the show mls qos interface command. This will show you whether you trust the marking of your packets or frames:
3560Switch#show mls qos interface fastEthernet 0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
Above you can see that we don’t trust anything at the moment. This is the default on Cisco switches. We can trust packets based on the DSCP value, frames based on the CoS value, or we can trust the IP phone. Here are some examples:
3560Switch(config-if)#mls qos trust cos
Just type mls qos trust cos to ensure the interface trusts the CoS value of all frames entering this interface. Let’s verify our configuration:
By default your switch will overwrite the DSCP value of the packet inside your frame according to the cos-to-dscp map. If you don’t want this, you can trust the DSCP value of the packets arriving at the interface instead of the CoS value:
3560Switch(config-if)#mls qos trust dscp
Here’s what it will look like:
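As far as I know, the default cos-to-dscp map on these switches maps each CoS value to eight times that value (the matching class selector codepoint); a quick illustrative sketch:

```python
# Default cos-to-dscp map: CoS 0-7 maps to DSCP = CoS * 8,
# e.g. CoS 5 (voice) becomes DSCP 40 (CS5).
cos_to_dscp = {cos: cos * 8 for cos in range(8)}
print(cos_to_dscp)
# {0: 0, 1: 8, 2: 16, 3: 24, 4: 32, 5: 40, 6: 48, 7: 56}
```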
Trusting the Cos or DSCP value on the interface will set your trust boundary at the
switch level. What if we want to set our trust boundary at the Cisco IP phone? We
need another command for that!
Use the mls qos trust device cisco-phone command to tell your switch to trust all CoS values that it receives from the Cisco IP phone:
3560Switch(config-if)#mls qos trust device cisco-phone
3560Switch#show mls qos interface FastEthernet0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: cisco-phone
Maybe you are wondering how the switch knows the difference between a Cisco IP phone and another vendor’s phone? CDP (Cisco Discovery Protocol) is used for this. Now we trust the CoS value of the Cisco IP phone, but what about the computer behind it? We have to do something about it…here’s one way to deal with it:
3560Switch(config-if)#switchport priority extend cos
The command above will overwrite the CoS value of all Ethernet frames received
from the computer that is behind the IP phone. You’ll have to set a CoS value
yourself. Of course we can also trust the computer; there's another command for that:
3560Switch(config-if)#switchport priority extend trust
This will trust the CoS values of the Ethernet frames that we receive from the computer.
The commands above let you trust traffic, but if we don't trust anything we can also decide to mark or remark packets and Ethernet frames on the switch. This is quite easy to do with the following command:
3560Switch(config-if)#mls qos cos 4
Just type mls qos cos to set a CoS value yourself. In the example above I set a CoS value of 4 for all untagged frames. Any frame that is already tagged will not be remarked with this command.
3560Switch#show mls qos interface FastEthernet0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: dis
default COS: 4
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
Above you can see that the default CoS will be 4 but override (remarking)
is disabled. Marking Ethernet frames with this command is useful when you have a
computer or server that is unable to mark its own traffic. In case the Ethernet frame
already has a CoS value but we want to remark it, we’ll have to do this:
3560Switch(config-if)#mls qos cos override
Use the keyword override to tell the switch to remark all traffic. If you receive
Ethernet frames that already have a CoS value then they will be remarked with
whatever CoS value you configured. Let’s verify it:
3560Switch#show mls qos interface FastEthernet 0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: ena
default COS: 4
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
When you are configuring QoS on your Cisco switches you are probably familiar
with the concept of “trust boundaries”. If not, take a look at this article that I wrote
earlier that explains the concept and teaches you how to trust markings or (re)mark
packets or Ethernet frames.
Using the mls qos trust command we can trust the CoS or DSCP value or an IP phone. With the mls qos cos command we can set a new CoS value if we like. The downside of these two commands is that they apply to all packets or Ethernet frames that arrive on the FastEthernet 0/1 interface. What if we want to be a bit more specific? Let me show you an example:
Above you see a small network with a server, switch and a router connected to a
WAN. Let’s imagine the server is running a couple of applications:
1. SSH server.
2. Mail server.
3. MySQL server.
What if the server is unable to mark its own IP packets with a DSCP value but we want to prioritize SSH traffic on the router when it leaves the serial 0/0 interface? In that case we'll have to do classification and marking ourselves. I will show you how to do this on a Cisco Catalyst switch. You can use a standard, extended or MAC access-list in combination with MQC (Modular QoS CLI) to get the job done.
Let’s start with the standard access-list to classify traffic from the server. Since a
standard access-list can only match on source IP addresses I will be unable to
differentiate between different applications…
We'll use a class-map to select our traffic. I will refer to access-list 1 with the match command:
Switch(config)#class-map SERVER
Switch(config-cmap)#match access-group 1
Switch(config)#access-list 1 permit 192.168.1.1
Access-list 1 will match IP address 192.168.1.1. This is the classification part but we still have to mark our traffic. This is done with a policy-map:
Switch(config)#policy-map SET-DSCP-SERVER
Switch(config-pmap)#class SERVER
Switch(config-pmap-c)#set ip dscp 40
Above I created a policy-map called "SET-DSCP-SERVER" and I'm referring to the class-map "SERVER" that I created before. Using the set command I set the DSCP value to 40. I'm almost done; I still need to activate this policy-map on the interface:
Switch(config)#interface FastEthernet 0/1
Switch(config-if)#service-policy input SET-DSCP-SERVER
This is how you activate it on the interface. Use the service-policy command with the input or output keyword to apply it to inbound or outbound traffic.
If you want to verify your configuration and see if traffic is being marked you can
use the following command:
Switch#show policy-map interface FastEthernet 0/1
FastEthernet0/1
Above you can see that the policy-map has been applied to the FastEthernet0/1
interface and even better, you can see the number of packets that have matched
this policy-map and class-map. At the moment there are 0 packets (nothing is
connected to my switch at the moment). You can also see the class-default class.
All traffic that doesn’t belong to a class-map will belong to the class-default class.
The example above is nice to demonstrate the class-map and policy-map but I was
only able to match on the source IP address because of the standard access-list. Let
me show you another example that will only match on SSH traffic using an
extended access-list:
Switch(config)#class-map SSH
Switch(config-cmap)#match access-group 100
First I'll create a class-map called SSH that matches access-list 100. Don't forget to create the access-list:
Switch(config)#policy-map SET-DSCP-SSH
Switch(config-pmap)#class SSH
Switch(config-pmap-c)#set ip dscp cs6
Whenever it matches class-map SSH we will set the DSCP value to CS6. Don't forget
to activate it:
Switch(config)#interface FastEthernet 0/1
Switch(config-if)#no service-policy input SET-DSCP-SERVER
Switch(config-if)#service-policy input SET-DSCP-SSH
You can only have one active policy-map per direction on an interface so first we'll
remove the old one. Let's take a look if it is active:
Switch#show policy-map interface fastEthernet 0/1
FastEthernet0/1
You can see that it's active. I still don't have any traffic so we are stuck at 0 packets.
Switch(config)#class-map SERVER-MAC
Switch(config-cmap)#match access-group name MAC
We'll create a class-map called SERVER-MAC and refer to an access-list called MAC.
Let's create that MAC access-list:
Switch(config)#policy-map SET-DSCP-FOR-MAC
Switch(config-pmap)#class SERVER-MAC
Switch(config-pmap-c)#set ip dscp cs1
Switch(config)#interface FastEthernet 0/1
Switch(config-if)#no service-policy input SET-DSCP-SSH
Switch(config-if)#service-policy input SET-DSCP-FOR-MAC
That's all there is to it. You have now learned how to configure classification and marking using MQC on Cisco Catalyst switches. Before I forget, MQC is similar on routers so you can configure the same thing on your router. If you enjoyed this article please leave a comment!
The first 54 minutes are about classification, marking and policing so if you only
care about congestion management and queuing you can skip the first part. Having
said that let’s walk through the different commands.
Priority Queue
If your switch supports ingress queuing then on most switches (Cisco Catalyst 3560
and 3750) queue 2 will be the priority queue by default. Keep in mind that there are
only 2 ingress queues. If we want we can make queue 1 the priority queue and we
can also change the bandwidth. Here’s how to do it:
Switch(config)#mls qos srr-queue input priority-queue 1 bandwidth 20
The command makes queue 1 the priority queue and limits it to 20% of the total internal ring bandwidth.
For our egress queuing we have to enable the priority queue ourselves! It’s not
enabled by default. Here’s how you can do it:
Switch(config)#interface fa0/1
Switch(config-if)#priority-queue out
The command above will enable the outbound priority queue for interface fa0/1. By
default queue 1 is the priority queue!
Queue-set
The queue-set is like a template for QoS configurations on our switches. There are two queue-sets that we can use and by default all interfaces are assigned to queue-set 1. If you plan to make changes to buffers etc. it's better to use queue-set 2 for this. If you change queue-set 1 you will apply your new changes to all interfaces.
Switch(config)#interface fa0/2
Switch(config-if)#queue-set 2
Above we put interface fa0/2 in queue-set 2. Keep in mind that we only have queue-
sets for egress queuing, not for ingress.
Buffer Allocation
For each queue we need to configure the assigned buffers. The buffer is like the
‘storage’ space for the interface and we have to divide it among the different
queues. This is how to do it:
Above you see the mls qos command. First we select the queue-set and then we can divide the buffers between queues 1, 2, 3 and 4. For queues 1, 3 and 4 you can select a value between 0 and 99; if you type 0 you disable the queue. You can't do this for queue 2 because it is used for the CPU buffer. Let's take a look at an actual example:
Switch(config)#mls qos queue-set output 2 buffers 33 17 25 25
For each queue we can also configure four values:
Threshold 1 value
Threshold 2 value
Reserved value
Maximum value
The command to configure these values looks like this:
Switch(config)#mls qos queue-set output <queue-set> threshold <queue> <threshold1> <threshold2> <reserved> <maximum>
Here's an example:
Switch(config)#mls qos queue-set output 2 threshold 3 33 66 100 300
In the example above we configure queue-set 2. We select queue 3 and set the
following values:
Threshold 1 = 33%
Threshold 2 = 66%
Reserved = 100%
Maximum = 300%
This means that threshold 1 can go up to 33% of the queue. Threshold 2 can go up
to 66% of the queue. We reserve 100% buffer space for this queue and in case the
queue is full we can borrow more buffer space from the common pool. 300%
means we can get twice our queue size from the common pool.
Bandwidth Allocation
The buffers determine how large the queues are. In other words how ‘big is our
storage’. The bandwidth is basically how often we visit our queues. We can change
the bandwidth allocation for each interface. Here's what it looks like for our ingress queuing:
Ingress queuing only has two queues. We can divide a weight between the two queues. Here's an example:
Switch(config)#mls qos srr-queue input bandwidth 30 70
With the command above queue 1 will receive 30% of the bandwidth and queue 2 will receive 70%. These two values are "weighted" and don't have to add up to 100. If I had typed something like "70 60" then queue 1 would receive 70/130 = about 54% of the bandwidth and queue 2 would receive 60/130 = about 46%. Of course it's easier to calculate if you make these values add up to 100.
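To make the weighted math easy to check, here's a small Python sketch (the function name and Python itself are mine, just an illustration; nothing like this runs on the switch):

```python
# Sketch: how SRR "share" weights translate into bandwidth fractions.
# The weights don't have to add up to 100; each queue gets weight/sum(weights).

def srr_share(weights, link_mbit=100):
    """Return the bandwidth (in Mbit) each queue is guaranteed under sharing."""
    total = sum(weights)
    return [link_mbit * w / total for w in weights]

print(srr_share([30, 70]))   # ingress example: queue 1 gets 30 Mbit, queue 2 gets 70 Mbit
print(srr_share([70, 60]))   # weights don't sum to 100: queues get 70/130 and 60/130 of the link
```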
For our egress queues we have to do the same thing but it will be on interface level.
We can also choose between shaping or sharing. Sharing means the queues will
divide the available bandwidth between each other. Shaping means you set a fixed
limit, it’s like policing. Here’s an example:
Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth share 30 20 25 25
Queue 1: 30%
Queue 2: 20%
Queue 3: 25%
Queue 4: 25%
In this case we have a 100Mbit interface which means queue 1 will receive 30Mbit, queue 2 20Mbit, queue 3 25Mbit and queue 4 25Mbit. If there is no congestion then our queues can go above their bandwidth limit. This is why it's called "sharing".
If I want I can enable shaping for 1 or more queues. This is how you do it:
Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth shape 20 0 0 0
This value is a weighted value. The other queues are not shaped because there’s a
0. When you configure shaping for a queue it will be removed from the sharing
mechanism. So how much bandwidth does queue 1 really get? We can calculate it
like this:
1/20 = 0.05 x 100Mbit = 5Mbit.
So traffic in queue 1 will be shaped to 5Mbit. Since queue 1 is now removed from
the sharing mechanism…how much bandwidth will queue 2,3 and 4 get?
Let’s take a look again at the sharing configuration that I just showed you:
Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth share 30 20 25 25
I just explained that queue 1 would receive 30Mbit, queue 2 20Mbit, queue 3 25Mbit and queue 4 also 25Mbit. Since I enabled shaping for queue 1 it no longer joins the sharing mechanism. This means there is more bandwidth for queues 2, 3 and 4. Here's what the calculation looks like now:
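The numbers can be sketched in Python (an illustration under my assumption that the shaped queue's 5 Mbit comes off the link and queues 2-4 split the remainder by their share weights):

```python
# Sketch: queue 1 is shaped to 1/20 of the link; queues 2-4 share the rest
# by their "share" weights (queue 1's share weight is ignored once shaped).
link = 100                        # Mbit
shape_weight = 20                 # srr-queue bandwidth shape 20 0 0 0
share_weights = [30, 20, 25, 25]  # srr-queue bandwidth share 30 20 25 25

q1 = link / shape_weight          # shaped: 1/20 * 100 = 5 Mbit
remaining = link - q1             # assumption: 95 Mbit left for the sharing queues
total = sum(share_weights[1:])    # queue 1 no longer participates: 20 + 25 + 25 = 70
shares = [remaining * w / total for w in share_weights[1:]]
print(q1, shares)  # 5 Mbit shaped; queues 2-4 split 95 Mbit in a 20:25:25 ratio
```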
It’s also possible to rate-limit the entire interface for egress traffic if you want to
save the hassle of configuring shaping. This is how you do it:
Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth limit 85
This will limit our 100 Mbit interface to 85% so you’ll end up with 85 Mbit.
Above you can see that this Cisco Catalyst 3560 switch has 4 queues with 3
threshold levels.
If you are configuring QoS you need to make sure you enabled it globally first with
the “mls qos” command. You can verify if QoS is active or not with the following
command:
It tells us that QoS is enabled globally. We can also check the QoS parameters for
each interface as following:
Above you can see the trust state for this interface. We can also verify the queue-
sets for this switch. If you didn’t configure them you will find some default values:
Above you will find queue-set 1 and 2. You can see how the buffers are divided per
queue and the values for our thresholds, reserved and maximum values.
We can check how queuing is configured per interface. This is how you do it:
If you are troubleshooting you should check if you see any drops within the queues.
You can do it like this:
Here you can see the drops for each queue. We can also verify if we are receiving
traffic that is marked:
dscp: incoming
-------------------------------
  0 -  4 :    0    0    0    0    0
  5 -  9 :    0    0    0    0    0
 10 - 14 :    0    0    0    0    0
 15 - 19 :    0    0    0    0    0
 20 - 24 :    0    0    0    0    0
 25 - 29 :    0    0    0    0    0
 30 - 34 :    0    0    0    0    0
 35 - 39 :    0    0    0    0    0
 40 - 44 :    0    0    0    0    0
 45 - 49 :    0    0    0    0    0
 50 - 54 :    0    0    0    0    0
 55 - 59 :    0    0    0    0    0
 60 - 64 :    0    0    0    0
dscp: outgoing
-------------------------------
  0 -  4 :    0    0    0    0    0
  5 -  9 :    0    0    0    0    0
 10 - 14 :    0    0    0    0    0
 15 - 19 :    0    0    0    0    0
 20 - 24 :    0    0    0    0    0
 25 - 29 :    0    0    0    0    0
 30 - 34 :    0    0    0    0    0
 35 - 39 :    0    0    0    0    0
 40 - 44 :    0    0    0    0    0
 45 - 49 :    0    0    0    0    0
 50 - 54 :    0    0    0    0    0
 55 - 59 :    0    0    0    0    0
 60 - 64 :    0    0    0    0
cos: incoming
-------------------------------
  0 -  4 :    2    0    0    0    0
  5 -  7 :    0    0    0
cos: outgoing
-------------------------------
  0 -  4 :    0    0    0    0    0
  5 -  7 :    0    0    0
Policer: Inprofile: 0 OutofProfile: 0
That’s all I have for you for now! I suggest you to check out these commands on
your own switches to become familiar with them. If you enjoyed this article please
leave a comment!
If you are playing around with CBWFQ you might have discovered that it's impossible to attach a policy-map to a sub-interface directly. There is a good reason for this and I'd like to show you why this happens and how to fix it. This is the topology I will use to demonstrate it:
Just two routers connected to each other using frame-relay. We will try to configure CBWFQ on the Serial 0/0.1 sub-interface of R1.
Configuration
First I'll create a simple CBWFQ configuration:
R1(config)#class-map TELNET
R1(config-cmap)#match protocol telnet
R1(config)#class-map HTTP
R1(config-cmap)#match protocol http
R1(config)#policy-map CBWFQ
R1(config-pmap)#class TELNET
R1(config-pmap-c)#bandwidth percent 10
R1(config-pmap-c)#exit
R1(config-pmap)#class HTTP
R1(config-pmap-c)#bandwidth percent 20
R1(config-pmap-c)#exit
Nothing special here…just a simple CBWFQ configuration that gives 10% of the
bandwidth to telnet and 20% to HTTP traffic. Let’s try to apply it to the sub-
interface:
Too bad, it’s not gonna happen…IOS has a day off. There is a workaround
however…we can’t apply it directly, but if we use a hierarchical policy-map it will
work. Let me show you what I mean:
R1(config)#policy-map PARENT
R1(config-pmap)#class class-default
R1(config-pmap-c)#service-policy CBWFQ
I'll create a policy-map called PARENT that has our service-policy attached to the class-default class. Attaching this to the sub-interface still fails, however: the parent class first has to shape the traffic. Let's add a shaper:
R1(config)#policy-map PARENT
R1(config-pmap)#class class-default
R1(config-pmap-c)#shape average percent 100
I don't want to shape, but if I have to configure something we'll just set the shaper
to 100% of the interface bandwidth so that it doesn't limit our traffic. Let's attach it
to the sub-interface:
Verification
We'll try to telnet from R1 to R2 to see if it matches the policy-map:
R1#telnet 192.168.12.2
Trying 192.168.12.2 ... Open
Serial0/0.1
Service-policy : CBWFQ
Above you can see that my telnet traffic matches the policy-map. The shaper is
configured but since it's configured to shape to the entire interface bandwidth it
won't bother us.
So why do we have to use a shaper? Logical interfaces like sub-interfaces can't experience congestion the way a physical interface can, so IOS doesn't support policy-maps that implement queuing on them. By using a shaper, we enforce a "hard limit" for the sub-interface, which makes queuing possible.
I hope this has been helpful to you! If you have any questions feel free to ask.
Introduction to Policing
When you get a subscription from an ISP (for example a fibre connection) you will
pay for the bitrate that you desire, for example 5, 10 or 20 Mbit. The fibre
connection however is capable of sending traffic at a much higher bitrate (for
example 100 Mbit). In this case the ISP will “limit” your traffic to whatever you are
paying for. The contract that you have with the ISP is often called the traffic
contract. The bitrate that you pay for at the ISP is often called the CIR (Committed
Information Rate). Limiting the bitrate of a connection is done with policing or
shaping. The difference between the two is that policing will drop the exceeding
traffic and shaping will buffer it.
If you are interested to see how shaping works you should read my "traffic shaping explained" article. The logic behind policing is completely different from shaping. To check whether traffic matches the traffic contract, the policer measures the cumulative byte-rate of arriving packets and can take one of the following actions:
Allow the packet to pass.
Drop the packet.
Remark the packet with a different DSCP or IP precedence value.
When working with policing there are three categories that we can use to see if a packet conforms to the traffic contract or not:
Conforming
Exceeding
Violating
Conforming means that the packet falls within the traffic contract, exceeding means
that the packet is using up the excess burst capability and violating means that it’s
totally out of the traffic contract rate. Don’t worry if you don’t know what “excess
burst” is, we’ll talk about it in a bit. We don’t have to work with all 3 categories…we
can also use just 2 of them (conforming and exceeding for example). It’s up to us to
configure what will happen when a packet conforms, exceeds or violates.
When we use 2 categories (conforming and exceeding) we’ll probably want the
packet to be forwarded when it’s conforming and dropped when it’s exceeding.
When we use 3 categories we can forward the packet when it's conforming, re-mark it when exceeding and drop it when violating. There are 3 different policing "techniques":
Each time a packet is policed, the policer will put some tokens into the token bucket. The number of tokens that it replenishes can be calculated with the following formula:
(Packet arrival time - Previous packet arrival time) * Police Rate / 8
So the number of tokens that we put in the bucket depends on the time between
two arriving packets. This time is in seconds. We will multiply the time with the
police rate and divide it by 8. Dividing it by 8 is done so that we have a number in
bytes instead of bits.
Let’s look at an example so that this makes more sense:
Imagine we have a policer that is configured for 128.000 bps (bits per second). A packet has been policed and it takes exactly 1 second until the next packet arrives. The policer will now calculate how many tokens it should put in the bucket:
1 second * 128.000bps / 8 = 16.000 bytes
So it will put 16.000 tokens into the token bucket. Now imagine a third packet arrives, half a second later than the second packet…this is how we calculate it:
0.5 seconds * 128.000 bps / 8 = 8.000 bytes
That means we'll put 8.000 tokens into the bucket. Basically, the more often the token bucket is replenished, the fewer tokens you get each time. When the bucket is full, excess tokens spill over and are discarded.
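The replenishment formula is easy to sketch in Python (my own helper function, matching the two examples above):

```python
# Sketch of the replenishment formula:
# tokens = (arrival_time - previous_arrival_time) * police_rate / 8
# Dividing by 8 converts bits to bytes, since tokens represent bytes.

def replenish(seconds_since_last_packet, police_rate_bps):
    """Tokens (bytes) added to the bucket when a packet arrives."""
    return seconds_since_last_packet * police_rate_bps / 8

print(replenish(1.0, 128000))  # 16000.0 bytes, matching the first example
print(replenish(0.5, 128000))  # 8000.0 bytes, matching the second example
```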
Now when a packet arrives at the policer this is what will happen:
If the number of bytes in the packet is less than or equal to the number of tokens in the bucket, the packet is conforming. The policer takes the tokens out of the bucket and performs the action that we configured for conforming.
If the number of bytes in the packet is larger than the number of tokens in the bucket, the packet is exceeding. The policer leaves the tokens in the bucket and performs the action for exceeding packets.
With this single rate two-color policer conforming probably means to forward the
packet, and exceeding means to drop it. You can also choose to remark exceeding
packets.
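The decision logic above can be sketched in a few lines of Python (a simplification I wrote; a real policer also replenishes tokens on arrival, as described earlier):

```python
# Sketch: single-rate two-color policer decision for one arriving packet.
# Conforming packets consume tokens; exceeding packets leave the bucket untouched.

def police_two_color(packet_bytes, bucket_tokens):
    """Return (action, remaining_tokens) for a single token bucket."""
    if packet_bytes <= bucket_tokens:
        return "conform", bucket_tokens - packet_bytes
    return "exceed", bucket_tokens

print(police_two_color(1500, 16000))  # ('conform', 14500): tokens are removed
print(police_two_color(1500, 1000))   # ('exceed', 1000): tokens stay in the bucket
```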
As silly as it might sound, it’s possible to ‘drop’ packets that are conforming or to forward packets
that are exceeding…it’s up to us to configure an action. That kinda sounds like giving a speeding
ticket to people that are not driving fast enough and rewarding the speed devils…
When we use a single bucket, and the bucket is full we will discard the tokens. With
two buckets it works differently. Above you can see that once the Bc bucket is full
the ‘spillage’ will end up in the Be bucket. If the Be bucket is full then the tokens will
go where no token has gone before…they are gone forever! Armed with the two
buckets the policer will work as following when a packet arrives:
When the number of bytes in the packet is less than or equal to the number of tokens in the Bc bucket the packet is conforming. The policer takes the required tokens from the Bc bucket and performs the configured action for conforming.
If the packet is not conforming and the number of bytes in the packet is less than or equal to
the number of tokens in the Be bucket, the packet is exceeding. The policer will remove the
required tokens from the Be bucket and performs the corresponding action for exceeding
packets.
If the packet is not conforming or exceeding it is violating. The policer doesn’t take any
tokens from the Bc or Be bucket and will perform the action that was configured for violating
packets.
Simply put: if we can use the Bc bucket our packets are conforming, when we use the Be bucket we are exceeding, and when we can't use any bucket the packet is violating.
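The three rules for the two buckets can be sketched like this (again a simplified Python illustration of mine, with replenishment left out):

```python
# Sketch: single-rate three-color decision using the Bc and Be buckets.
# Conform debits Bc, exceed debits Be, violate debits nothing.

def police_three_color(packet_bytes, bc_tokens, be_tokens):
    """Return (action, bc_tokens, be_tokens) after one packet."""
    if packet_bytes <= bc_tokens:
        return "conform", bc_tokens - packet_bytes, be_tokens
    if packet_bytes <= be_tokens:
        return "exceed", bc_tokens, be_tokens - packet_bytes
    return "violate", bc_tokens, be_tokens

print(police_three_color(1500, 4000, 4000))  # conform: tokens taken from Bc
print(police_three_color(1500, 1000, 4000))  # exceed: tokens taken from Be
print(police_three_color(1500, 1000, 1000))  # violate: no tokens taken
```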
How are you doing so far? We have one more policer type to cover!
Let's say that 0.5 seconds pass between the arrival of the first and the second packet. This is how the CIR bucket will be filled:
0.5 * 128.000 / 8 = 8.000 tokens.
As you can see the PIR bucket will have more tokens than the Bc bucket. The big
secret is how the policer uses the different tokens from the buckets, this is how it
works:
When the number of bytes in the packet is less than or equal to the number of tokens in the Bc bucket the packet is conforming. The policer takes the required tokens from the Bc bucket and performs the action. The policer also takes the same number of tokens from the PIR bucket!
If the packet does not conform and the number of bytes of the packet is less than or equal to the number of tokens in the PIR bucket, the packet is exceeding. The policer will remove the required tokens from the PIR bucket and takes the configured action for exceeding packets.
When the packet is not conforming or exceeding, it is violating. The policer doesn't take
any tokens and performs the action for violating packets.
So in short: if there are tokens in the Bc bucket we are conforming; if not, but there are enough tokens in the PIR bucket, we are exceeding; otherwise we are violating. One of the key differences is that for conforming traffic the policer takes tokens from both buckets!
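The dual-rate rules can be sketched the same way (a simplified illustration of mine, without replenishment). Note how a conforming packet debits both buckets:

```python
# Sketch: dual-rate three-color decision. A conforming packet takes tokens
# from BOTH the Bc (CIR) bucket and the PIR bucket.

def police_dual_rate(packet_bytes, cir_tokens, pir_tokens):
    """Return (action, cir_tokens, pir_tokens) after one packet."""
    if packet_bytes <= cir_tokens:
        return "conform", cir_tokens - packet_bytes, pir_tokens - packet_bytes
    if packet_bytes <= pir_tokens:
        return "exceed", cir_tokens, pir_tokens - packet_bytes
    return "violate", cir_tokens, pir_tokens

print(police_dual_rate(1500, 8000, 16000))  # conform: both buckets debited
print(police_dual_rate(1500, 1000, 16000))  # exceed: only the PIR bucket debited
print(police_dual_rate(1500, 1000, 1000))   # violate: nothing debited
```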
You have now seen the 3 policer techniques. Let me give you an overview of them
and their differences:
                  single-rate           single-rate               dual-rate
                  two-color             three-color               three-color
2nd bucket        no 2nd bucket         filled by spilled         same as the 1st bucket,
refill            available             tokens from 1st bucket    but based on the PIR rate
Conforming        take tokens from      take tokens from          take tokens from
                  1st bucket            1st bucket                both buckets
Violating         not available         all packets that are      all packets that are
                                        not conforming or         not conforming or
                                        exceeding                 exceeding
That's the end of this policer story. I hope this article is useful to you; policing can be quite a mind-boggling topic to understand! Next you will see how to configure policing on a Cisco IOS router. If you have any questions, just leave a comment.
In this lesson you will learn how to configure the different types of policing on Cisco IOS routers:
We don’t need anything fancy to demonstrate policing. I will use two routers for
this, R1 will generate some ICMP traffic and R2 will do the policing.
To keep it simple, I will use NBAR to match on ICMP traffic. Now we can create a
policy-map:
R2(config)#policy-map SINGLE-RATE-TWO-COLOR
R2(config-pmap)#class ICMP
R2(config-pmap-c)#police 128000
R2(config-pmap-c-police)#conform-action transmit
R2(config-pmap-c-police)#exceed-action drop
Both options achieve the same so it doesn’t matter which one you use. For
readability reasons I selected the first option.
Let’s activate the policer on the interface and we’ll see if it works:
You need to use the service-policy command to activate the policer on the
interface.
Time to generate some traffic on R1:
Above you can see that the policer is doing its job. The configured CIR is 128000 bps (128 kbps) and the Bc is set to 4000 bytes. If you don't configure the Bc yourself then Cisco IOS will automatically select a value based on the CIR. You can see that most of the packets were transmitted (conformed) while some of them got dropped (exceeded).
If you understand the theory about policing then the configuration and verification
isn’t too bad right? Let’s move on to the next policer…
Our CIR is still 128000 bps and the conform-action is still transmit. The difference is the exceed-action, which I've set to set-dscp-transmit. When the traffic is exceeding, the policer will reset the DSCP value to 0 but still transmits the packet. In our example the ICMP traffic wasn't marked at all, but imagine that some marked traffic hits this policer...if it is "conforming" it will be transmitted and keeps its DSCP value; if it is exceeding it will also be transmitted, but as a "penalty" the DSCP value is stripped. The last command is also new: when the traffic is violating we use violate-action to drop it.
Let's activate this policer:
I'll remove the old policer and enable the new one. Let's generate some traffic on
R1 again:
Some packets are being dropped, let's see what R2 thinks about it:
Above you can see the conformed, exceeded and violated packets with the
transmit, set-dscp-transmit and drop actions. Also, if you take a close look you can
see the be (4000 bytes) next to the CIR rate. Just like the bc, if you don't configure it
yourself then Cisco IOS will select a be automatically.
R2(config)#policy-map DUAL-RATE-THREE-COLOR
R2(config-pmap)#class ICMP
R2(config-pmap-c)#police cir 128000 pir 256000
R2(config-pmap-c-police)#conform-action transmit
R2(config-pmap-c-police)#exceed-action set-dscp-transmit 0
R2(config-pmap-c-police)#violate-action drop
Next to the CIR (128 Kbps) I also configured the PIR (256 Kbps). I've kept the actions
the same as the previous policer. Let's enable it:
The output above is similar but now you see the CIR and PIR. Some of our packets
are conforming, others are exceeding and violating.
You have now seen how to configure the single-rate two-color / three-color and the
dual-rate three color policers. I hope these configuration examples have been
useful to you. If you have any questions, feel free to leave a comment!
Shaping is a QoS (Quality of Service) technique that we can use to enforce lower
bitrates than what the physical interface is capable of. Most ISPs will use shaping or
policing to enforce "traffic contracts" with their customers. When we use shaping we buffer traffic down to a certain bitrate; policing drops the traffic that exceeds a certain bitrate. Let's discuss an example of why you would want to use shaping:
Your ISP sold you a fibre connection with a traffic contract and a guaranteed
bandwidth of 10 Mbit, the fibre interface however is capable of sending 100 Mbit
per second. Most ISPs will configure policing to drop all traffic above 10 Mbit so that
you can’t get more bandwidth than what you are paying for. It’s also possible that
they shape it down to 10 Mbit but shaping means they have to buffer data while
policing means they can just throw it away. The 10 Mbit that we pay for is called the CIR (Committed Information Rate).
There are two reasons why you might want to configure shaping:
Instead of waiting for the policer of the ISP to drop your traffic, you might want to shape
your outgoing traffic towards the ISP so that they don’t drop it.
To prevent egress blocking. When you go from a high speed interface to a low speed
interface you might get packet loss (tail drop) in your outgoing queue. We can use shaping to
make sure everything will be sent (until its buffer is full).
In short, we configure shaping when we want to use a “lower bitrate” than what the
physical interface is capable of.
Routers are only able to send bits at the physical clock rate. As network engineers
we think we can do pretty much anything but it’s impossible to make an electrical
or optical signal crawl slower through the cable just because we want to. If we want
to get a lower bitrate we will have to send some packets, pause for a moment, send
some packets, pause for a moment…and so on.
For example let’s say we have a serial link with a bandwidth of 128 kbps. Imagine
we want to shape it to 64 kbps. If we want to achieve this we need to make sure
that 50% of the time we are sending packets and 50% of the time we are pausing.
50% of 128 kbps = an effective CIR of 64 kbps.
Another example, let’s say we have the same 128 kbps link but the CIR rate is 96
kbps. This means we will send 75% of the time and pause 25% of the time (96 / 128
= 0.75).
Now you have a basic idea of what shaping is, let’s take a look at a shaping example
so I can explain some terminology:
Above we see an interface with a physical bitrate of 128 kbps that has been
configured to shape to 64 kbps. On the vertical line you can see the physical bitrate
of 128 kbps. Horizontally you can see the time from 0 to 1000 milliseconds. The
green line indicates when we send traffic and when we are pausing. The first 62.5 ms we are sending traffic at 128 kbps and the second 62.5 ms we are pausing. This first interval takes 125 ms (62.5 + 62.5 = 125 ms) and we call this interval the Tc (Time Interval).
In total there are 8 time intervals of 125 ms each: 8 x 125 ms = 1000 ms. Most Cisco routers have a default Tc of 125 ms. In the example above we are sending traffic 50% of the time and pausing 50% of the time. 50% of 128 kbps = a shaping rate of 64 kbps.
Our Cisco router will calculate how many bits it can send each Tc so that it reaches the targeted shaping rate. This value is called the Bc (committed burst).
In the example above the Bc is 8.000 bits. Each Tc (125 ms) the router sends 8.000 bits and then waits until the Tc expires. In total we have 1.000 ms of time. When we divide 1.000 ms by 125 ms we have 8 Tcs. 8.000 bits x 8 Tcs = a shaping rate of 64 kbps.
Tc (time interval) is the time in milliseconds over which we can send the Bc (committed
burst).
Bc (committed burst) is the amount of traffic that we can send during the Tc (time interval)
and is measured in bits.
CIR (committed information rate) is the bitrate that is defined in the “traffic contract” that
we received from the ISP.
There are a number of formulas that we can use to calculate the values above:
Bc value:
Bc = Tc * CIR
In the example above we have a Tc of 125 ms and we are shaping to 64 kbps (that's the CIR), so the formula will be:
0.125 seconds * 64.000 bps = 8.000 bits
Tc value:
Tc = Bc / CIR
We just calculated the Bc (8.000 bits) and the CIR rate is 64 kbps, the formula will
be:
8.000 bits / 64.000 = 0.125. So that’s 125 ms.
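These formulas are easy to sanity-check in Python (a small sketch, unrelated to the router configuration):

```python
def bc(tc_ms: float, cir_bps: int) -> float:
    """Bc = Tc * CIR: the committed burst in bits for one interval."""
    return tc_ms / 1000 * cir_bps

def tc(bc_bits: float, cir_bps: int) -> float:
    """Tc = Bc / CIR: the interval length in milliseconds."""
    return bc_bits / cir_bps * 1000

print(bc(125, 64_000))   # 8000.0 bits per Tc
print(tc(8000, 64_000))  # 125.0 ms
```

For the next example in the text, `bc(125, 128_000)` returns 16000.0 bits per Tc.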
Let’s look at another example. Imagine we have an interface with a physical bitrate
of 256 kbps and we are shaping to 128 kbps. How many bits will we send each Tc?
The shaper will grab 16.000 bits each Tc and send them. Once they are sent it will
wait until the Tc has expired and a new Tc will start.
The cool thing about shaping is that all traffic will be sent since we are buffering it.
The downside of buffering traffic is that it introduces delay and jitter. Let me show
you an example:
Above we have the same interface with a physical bitrate of 128 kbps and the Tc is
125 ms. Shaping has been configured for 64 kbps. You can see that each Tc it takes
62.5 ms to send the Bc. How did I come up with this number? Let me walk you
through it:
Now we know the Bc we can calculate how long it takes for a 128 kbps interface to
send these 8000 bits. This is how you do it:
Delay value:
Delay = Bc / physical bitrate
Let's try this formula to find out how long it takes for our 128 kbps interface to send
8.000 bits:
So it takes 62.5 ms to send 8000 bits through a 128 kbps interface. If we have a fast
interface the delay will of course be a lot lower, let’s say we have a T1 interface
(1.544 Mbit):
The default Tc of 125 ms is maybe not a very good idea when you are working with
Voice over IP. Imagine that we are sending a data packet that is exactly 8.000 bits
over this T1 link. It will only take 5 ms but that means that we are waiting 120 ms
(125 ms – 5 ms) before the Tc expires and we can send the next 8.000 bits. If this
next packet is a VoIP packet then it will at least be delayed by 120 ms.
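The serialization delays above can be verified with a small Python helper (again just to check the numbers; 1.544 Mbit is the standard T1 line rate):

```python
def serialization_delay_ms(bits: int, rate_bps: int) -> float:
    """Delay = number of bits / physical bitrate, in milliseconds."""
    return bits / rate_bps * 1000

print(serialization_delay_ms(8000, 128_000))    # 62.5 ms on a 128 kbps link
print(serialization_delay_ms(8000, 1_544_000))  # roughly 5.2 ms on a T1
```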
Cisco recommends a maximum one-way delay of 150 to 200 ms for realtime traffic
like VoIP, so wasting 120 ms just waiting isn't a very good idea. When you have
realtime traffic like voice, Cisco recommends setting your Tc to 10 ms to keep the
delay to a minimum.
So if we set our Tc to 10 ms instead of the default 125 ms…what will our Bc be? In
other words how many bits can we send during the Tc of 10 ms?
Let’s get back to our 128 kbps interface that is configured to shape to 64 kbps to
calculate this:
640 bits is only 80 bytes...not a lot right? Many IP packets are larger than 80 bytes so if you
configure the Tc at 10 ms you will probably also have to use fragmentation. In this example, IP
packets should be fragmented to 80 bytes each.
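The 640-bit figure follows from the same Bc formula (a quick Python check):

```python
cir_bps = 64_000
tc_s = 0.010                  # Tc of 10 ms
bc_bits = cir_bps * tc_s      # Bc = Tc * CIR

print(bc_bits)      # 640.0 bits per interval
print(bc_bits / 8)  # 80.0 bytes: larger IP packets would need fragmentation
```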
How are you doing so far? I can imagine all the terminology and formulas make
your head spin. We are almost at the end; we only have to talk about the excess
burst.
When we configure traffic shaping we have the option to send more than the Bc in
some Tcs. There is a very good reason to do this. Data traffic is not smooth but very
bursty...sometimes we don't send anything, then a few packets and suddenly
there's an avalanche of traffic. It would be nice if you can send a little bit more
traffic than the normal 'Bc' after a quiet period. To illustrate this we first need to
talk about the token bucket.
Imagine we have a bucket....this bucket we will fill with tokens and each token
represents 1 bit. When we want to send a packet we will grab the number of tokens
we require to send this packet. If the packet is 120 bits we will grab 120 tokens and
send the packet. The amount of tokens in this bucket is the Bc. Once the bucket is
empty we can't send anything anymore and you'll have to wait for the next Tc. At
the next Tc we will refill our token bucket with the Bc and we can send again.
This means that we can never send more than the Bc... it's impossible to save
tokens so that you can go beyond the Bc. If we don't spend all of our tokens, the
newly added tokens won't fit in the bucket and will be discarded. When it comes to
shaping it's good to be a big spender... use those tokens!
Now let's talk about the excess burst. We still have the same token bucket but now
the bucket is larger and can contain the Bc + Be. At the beginning of the Tc we will
only fill the token bucket with the Bc but because it's larger we can "save" tokens up
to the Be level. The advantage of having a bigger bucket is that we can save tokens
when we have periods of time where we send less bits than the configured
shaping rate. Normally the bucket would spill once the Bc is full but now we can
save up to the Be level.
Let's take a look at an example of a shaper that we configured to use the Bc and the
Be:
Above you see an interface with a physical bitrate of 128 kbps. It has been
configured to shape to 64 kbps with a default Tc of 125 ms. This means the Bc is
8.000 bits and the Be is configured at 8.000 bits. This means we can store up to
16.000 bits. Imagine that the interface didn't send any traffic for quite some time;
this allows the token bucket to fill up to 16.000 bits (8.000 Bc + 8.000 Be). This
means that in the first 125 ms we can send 16.000 bits.
In the second interval the token bucket is refilled up to the Bc level so we can send
another 8.000 bits. There's quite some traffic so each Tc all 8.000 Bc bits are used.
After a while all traffic has been sent, which allows us to save tokens again and fill
the token bucket completely up to the Bc+Be level. The usage of the Be allows us to
effectively "burst" after a period of inactivity.
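The bucket behavior described above can be modeled in a few lines of Python. This is only a toy model of the Bc + Be token bucket, not how Cisco IOS implements it internally:

```python
class TokenBucket:
    """Toy shaper bucket: Bc tokens are added each Tc, capped at Bc + Be."""

    def __init__(self, bc: int, be: int):
        self.bc, self.be = bc, be
        self.tokens = bc + be          # assume a long quiet period: bucket is full

    def new_tc(self) -> None:
        # Only Bc tokens are added per interval; overflow spills at Bc + Be.
        self.tokens = min(self.tokens + self.bc, self.bc + self.be)

    def send(self, bits: int) -> bool:
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False                   # not enough tokens: wait for the next Tc

bucket = TokenBucket(bc=8000, be=8000)
print(bucket.send(16000))  # True: the first interval can burst Bc + Be
bucket.new_tc()
print(bucket.tokens)       # 8000: afterwards only the Bc is renewed each Tc
```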
So there you go, you have now learned how traffic shaping works, what the CIR, Tc,
Bc and Be are and how to calculate them. In another lesson I will cover how
to configure traffic shaping on a Cisco IOS router. If you have any questions feel free
to ask!
In a previous lesson I explained how we can use shaping to enforce lower bitrates.
In this lesson, I will explain how to configure shaping. This is the topology we will
use:
Above we have two routers connected to each other with a serial and FastEthernet
link. We’ll use both interfaces to play with shaping. The computers are used for
iPerf which is a great application to test the maximum achievable bandwidth. The
computer on the left side is our client, on the right side we have the server. Right
now we are using the serial interfaces thanks to the following static routes:
R1#
ip route 192.168.2.0 255.255.255.0 192.168.12.2
R2#
ip route 192.168.1.0 255.255.255.0 192.168.12.1
Configuration
We will start with some low bandwidth settings. Let’s set the clock rate of the serial
interface to 128 Kbps:
SERVER# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
That’s all we have to do on the server side, it will listen on the default port with a
window size of 85.3 Kbyte. Here’s what we will do on the client side:
The “-P” parameter tells the client to establish eight connections. I’m using multiple
connections so we get a nice average bandwidth. Here’s what you will see on the
server:
Server#
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-136.2 sec 256 KBytes 15.4 Kbits/sec
[ 10] 0.0-137.0 sec 256 KBytes 15.3 Kbits/sec
[ 11] 0.0-138.0 sec 256 KBytes 15.2 Kbits/sec
[ 9] 0.0-138.4 sec 256 KBytes 15.1 Kbits/sec
[ 5] 0.0-148.0 sec 384 KBytes 21.3 Kbits/sec
[ 6] 0.0-166.7 sec 384 KBytes 18.9 Kbits/sec
[ 8] 0.0-171.4 sec 384 KBytes 18.4 Kbits/sec
[ 7] 0.0-172.9 sec 384 KBytes 18.2 Kbits/sec
[SUM] 0.0-172.9 sec 2.50 MBytes 121 Kbits/sec
Above you see the individual connections and the [SUM] is the combined
throughput of all connections. 121 Kbps comes pretty close to the clock rate of 128
Kbps which we configured.
Let's configure shaping to limit the throughput of iPerf. This is done with the MQC
(Modular QoS CLI) framework which makes the configuration very simple.
First we need to configure an access-list which matches our traffic:
The access-list above will match all traffic from 192.168.1.1 to 192.168.2.2. Now we
need to create a class-map:
R1(config)#class-map IPERF
R1(config-cmap)#match access-group name IPERF_CLIENT_SERVER
The class map is called IPERF and matches our access-list. Now we can configure a
policy-map:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape ?
adaptive        Enable Traffic Shaping adaptation to BECN
average         configure token bucket: CIR (bps) [Bc (bits) [Be (bits)]],
                send out Bc only per interval
fecn-adapt      Enable Traffic Shaping reflection of FECN as BECN
fr-voice-adapt  Enable rate adjustment depending on voice presence
peak            configure token bucket: CIR (bps) [Bc (bits) [Be (bits)]],
                send out Bc+Be per interval
In the policy-map we select the class-map, above you can see the options for
shaping. We’ll start with a simple example:
R1(config-pmap-c)#shape average ?
<8000-154400000>  Target Bit Rate (bits/sec). (postfix k, m, g optional;
                  decimal point allowed)
percent           % of interface bandwidth for Committed information rate
We will go for shape average where we have to specify the target bit rate. Let’s go
for 64 Kbps (64000 bps):
When you configure the target bit rate, there's an option to specify the bits per
interval. Cisco IOS recommends that you do not configure this manually, so for now
we'll stick to configuring the bit rate. This means Cisco IOS will automatically
calculate the Bc and Tc:
That’s all there is to it. Now we can activate our policy-map on the interface:
R1(config)#interface Serial 0/0/0
R1(config-if)#service-policy output SHAPE_AVERAGE
SERVER#
[SUM] 0.0-300.5 sec 2.12 MBytes 59.3 Kbits/sec
Great, that’s close to 64 Kbps. Here’s what it looks like on our router:
Above you can see that we have matched packets on our policy-map. Cisco IOS
decided to use 256 bits for the Bc value.
The example above is of a Cisco 2800 router running IOS 15.1 which only shows you the calculated
Bc value. Older Cisco IOS versions show a lot more detailed information, including the calculated Tc
value.
How did it come up with this value? The Tc can be calculated like this:
Tc = Bc / CIR
256 bits / 64.000 = 0.004. So that's a Tc of 4 ms.
Let’s look at some more examples, I’ll also explain how to change the Be and Tc
values.
Let's set the clock rate to 256 Kbps and shape to 128 Kbps:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 128000
SERVER#
[SUM] 0.0-153.5 sec 2.25 MBytes 123 Kbits/sec
Seems our shaper is working fine, we get close to 128 Kbps. Let's bump up the clock
rate again:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 256000
Once again, Cisco IOS sets the Bc value so we end up with a Tc value of 4 ms. Let's
try iPerf again:
What about faster interfaces? Let's try something with our FastEthernet interfaces
between R1 and R2. Let's change the static route so that R1 and R2 don't use the
serial links anymore:
Let's see what kind of throughput we get without any shaper configured:
The output above is what we would expect from a 100 Mbit link. Let's shape this to
1 Mbit:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 1m
Instead of specifying the shape value in bits, you can also use "k" or "m" to specify
Kbps or Mbps. Let's activate it:
Great, our traffic is now shaped to 955 Kbps which is close enough to 1 Mbps.
So far we used the default Bc and Tc values that the router calculated for us. What
if we have a requirement where we have to configure one of these values
manually?
We can't configure the Tc directly but we can change the Bc. Let's say that we have
a requirement where we have to set the Tc to 10 ms. How do we approach this?
Bc = Tc * CIR
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 1m ?
<32-154400000>  bits per interval, sustained. Recommend not to configure,
                the algorithm will find out the best value
First we set the targeted bit rate and then we set the Bc value:
R1(config-pmap-c)#shape average 1m 10000
That's all there is to it. Let's try one more example, let's say we want a Tc of 125 ms:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 1000000 125000
That's it, you have now seen how to configure shaping and how to influence the Tc
by setting different Bc values.
Conclusion
Thanks to the MQC, configuring shaping on Cisco IOS routers is pretty
straightforward. You have now learned how to configure shaping and also how to
influence the Tc by setting the correct Bc value.
In the next lesson, I will explain "peak" shaping, which works a bit differently
compared to "average" shaping.
I hope you enjoyed this lesson, if you have any questions feel free to leave a
comment below.
In my first lesson I explained the basics of shaping and I demonstrated how to
configure shape average. This time we will take a look at peak shaping which is often
misunderstood and confusing for many networking students.
Shape Average
Here’s a quick recap of how shape average works:
We have a bucket and it can contain Bc and Be tokens. At the beginning of the Tc
we will only fill the token bucket with the Bc but because it’s larger we can “save”
tokens up to the Be level. The advantage of having a bigger bucket is that we can
save tokens when we have periods of time where we send less bits than the
configured shaping rate.
After a period of inactivity, we can send our Bc and Be tokens which allows us to
burst for a short time. When we use a bucket that has Bc and Be, this is what our
traffic pattern will look like:
Above you can see that we start with a period where we are able to spend Bc and
Be tokens, the next interval only the Bc tokens are renewed so we are only able to
spend those. After a while a period of inactivity allows us to fill our bucket again.
Shape Peak
Peak shaping uses the Be in a completely different way. We still have a token
bucket that stores Bc + Be, but we fill it with Bc and Be tokens each Tc and unused
tokens are discarded.
Here’s what our traffic pattern will look like:
Each Tc our Bc and Be tokens are renewed so we are able to spend them. A period
of inactivity doesn't gain us anything, since unused tokens are discarded.
Now you might be wondering why we use this and what the point of it is.
Depending on your traffic contract, an ISP might give you a CIR and PIR (Peak
Information Rate). The CIR is the guaranteed bandwidth that they offer you, the
PIR is the maximum non-guaranteed rate that you could get when there is no
congestion on the network. When there is congestion, this traffic might be dropped.
ISPs typically use policing to enforce these traffic contracts.
The idea behind peak shaping is that we can configure shaping and take the CIR
and PIR of the ISP into account.
When we send a lot of traffic, we will be spending the Bc and Be tokens each Tc and
we are shaping up to the PIR. When there isn’t as much traffic to shape, we only
spend Bc tokens and that’s when we are shaping up to the CIR.
Let's look at a configuration example which will help to clarify things.
Configuration
I will use the following topology to demonstrate peak shaping:
Above we have two computers and two routers. The computers will be used to
generate traffic with iPerf, I’ll configure peak shaping on R1. Let’s do a quick test
with iPerf, time to start the server:
SERVER# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
Ok great, that's close to 100 Mbit. That's what we would expect from a FastEthernet
link. Now let's take a look at the peak shaping configuration:
R1(config)#class-map IPERF
R1(config-cmap)#match access-group name IPERF_TRAFFIC
First we create an access-list that matches our iPerf traffic and we attach it to a
class-map. Now we can configure the policy-map:
R1(config)#policy-map SHAPE_PEAK
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape peak ?
<8000-154400000>  Target Bit Rate (bits/sec). (postfix k, m, g optional;
                  decimal point allowed)
percent           % of interface bandwidth for Committed information rate
Above you see the shape peak command where we configure the target bit rate.
The value that you specify here is the CIR, not the PIR! Let's try a CIR of 128 Kbps:
R1(config-pmap-c)#shape peak 128000
We can see that the shaper works since we only get a transfer rate of 245 Kbps.
Let's take a closer look at the policy-map on R1:
Our CIR is 128000 bits per second (128 Kbps) which is what we configured with the
shape peak command, our Bc and Be are 512 bits. Each Tc our Bc and Be tokens
are renewed; by using both we can shape up to the PIR of 256 Kbps. With the
default settings, our Be and Bc have the same size so the PIR is 2 * CIR.
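So with peak shaping the PIR follows directly from the CIR and the Bc/Be sizes. A quick Python sketch of that relation (just arithmetic, not IOS behavior):

```python
def pir(cir_bps: int, bc_bits: int, be_bits: int) -> float:
    """Peak rate when both Bc and Be tokens are renewed every Tc."""
    return cir_bps * (bc_bits + be_bits) / bc_bits

print(pir(128_000, 512, 512))  # 256000.0: with Be equal to Bc, PIR = 2 * CIR
```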
Conclusion
You have now seen how peak shaping works and how to configure it. Most students
however are still confused after learning about peak shaping so let's get a couple of
things out of the way.
First of all, you need to keep in mind that the talk about CIR and PIR is only
"cosmetic" when it comes to peak shaping. In the output of the router you can see a
CIR and PIR but that's it. It's not like the router will automatically adapt its shaping
rate up to the CIR or PIR or something. When we configure peak shaping, we set a
maximum rate and the router will shape up to that rate...that's it!
When there is a lot of traffic we will be shaping up to that maximum rate so we can
say we are shaping up to the PIR of the ISP. When there isn't as much traffic, maybe
we are only using our Bc tokens so we can say that we are shaping up to the CIR
but that's it, it's just talk.
One question I see all the time is that students ask if the following two commands
will achieve the same thing:
The answer is no, although if you measure it, the difference will be insignificant.
Let me explain:
When we use shape average, each Tc we renew the Bc tokens, which allows us to
shape up to 256 Kbps. However, after a period of inactivity, when our bucket is full
with Bc and Be tokens, we can spend both during a Tc, which means we will shape
up to 512 Kbps for a short time.
With peak shaping, we renew Bc and Be tokens each Tc, unused tokens
are discarded so there is no way to get above 256 Kbps. Shape average would give a
slightly better result but only after a period of inactivity.
The following commands however should give you the exact same result:
Configurations
Want to take a look for yourself? Here you will find the configuration of each device.
I hope this lesson has been useful, if you have any questions feel free to leave a
comment!
One of the QoS topics that CCIE R&S students have to master is shaping and how to
calculate the burst size. In this short article I want to explain how to calculate the
burst size so that you can allow bursting up to the physical interface rate after a
period of inactivity. Let’s take a look at an example:
Above we have a router with two PVCs. The physical AR (Access Rate) of this
interface is 1536 Kbps. The PVC on top has a CIR rate of 512 Kbps and the one at
the bottom has a CIR of 64 Kbps. Let’s say we have the following requirements:
Bc = 25.600 bits, Tc = 50 ms
With a CIR rate of 512 Kbps it means we can send 512.000 bits in 1000 ms. In 50 ms
we will be able to send 25.600 bits. Now we have to calculate the number of Be bits
so that we can burst up to the AR rate. The physical access rate is 1536 Kbps:
1.536.000 x 0.050 = 76.800 bits in 50 ms
The Bc and Be combined should be 76.800 bits to get to the physical access rate:
76.800 bits – 25.600 bits (Bc) = 51.200 bits
Set your Bc to 25.600 bits and your Be to 51.200 bits and you’ll be able to burst up
to the physical access rate.
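The same calculation as a small Python helper (only to verify the arithmetic from this example):

```python
def burst_to_ar(cir_bps: int, ar_bps: int, tc_ms: float):
    """Return the (Bc, Be) pair that allows bursting up to the access rate."""
    bc = cir_bps * tc_ms / 1000        # bits sendable at the CIR in one Tc
    be = ar_bps * tc_ms / 1000 - bc    # extra bits needed to reach the AR
    return bc, be

print(burst_to_ar(512_000, 1_536_000, 50))  # (25600.0, 51200.0)
```

The same helper gives the values for the 64 Kbps PVC: `burst_to_ar(64_000, 1_536_000, 50)` returns (3200.0, 73600.0).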
Now let's calculate this for the 64 Kbps link, first the Bc:
64.000 x 0.050 = 3.200 bits. The Be is then 76.800 – 3.200 = 73.600 bits.
I hope this has been helpful to you, if you have any questions feel free to ask!
PPP Multilink lets us bundle multiple physical interfaces into a single logical
interface. We can use this to load balance on layer 2 instead of layer 3. Take a look
at the following picture so I can give you an example:
Above we have two routers connected to each other with two serial links. If we want
to use load balancing we could do this on layer 3, just configure a subnet on each
serial link and activate both links in a routing protocol like EIGRP or OSPF.
When we use PPP multilink we can bundle the two serial links into one logical layer
3 interface and we’ll do load balancing on layer 2. PPP multilink will break the
outgoing packets into smaller pieces, puts a sequence number on them and sends
them out the serial interfaces. Another feature of PPP multilink is fragmentation.
This could be useful when you are sending VoIP between the two routers.
Most voice codecs require a maximum delay of 10 ms between the different VoIP
packets. Let’s say the serial link offers 128 Kbit of bandwidth…how long would it
take to send a voice packet that is about 60 bytes?
So it takes roughly 3.7 ms to send the voice packet which is far below the required
10 ms. We can run into issues however when we also send data packets over this
link. Let's say we have a 1500 byte data packet that we want to send over this
link: 12.000 bits / 128.000 = 0.09375, so it takes 93.75 ms, far above the 10 ms
requirement. Fragmentation breaks the data packet into smaller pieces so voice
packets can be interleaved between the fragments.
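A quick Python check of the serialization delays, plus the fragment size that keeps every fragment within a 10 ms delay budget (the 128 Kbps rate and 10 ms budget are the values from this example):

```python
def delay_ms(size_bytes: int, rate_bps: int) -> float:
    """Serialization delay of a single packet in milliseconds."""
    return size_bytes * 8 / rate_bps * 1000

print(delay_ms(60, 128_000))    # 3.75 ms: the voice packet fits the budget
print(delay_ms(1500, 128_000))  # 93.75 ms: a full data packet blocks voice

def fragment_size(rate_bps: int, max_delay_ms: float) -> float:
    """Largest fragment (bytes) that still serializes within the budget."""
    return rate_bps * max_delay_ms / 1000 / 8

print(fragment_size(128_000, 10))  # 160.0 bytes
```

The show ppp multilink output later in this lesson shows a frag size of 152, close to this 160-byte figure.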
I am using two routers with only a single serial link between them. Even though it’s
called multilink PPP you can still configure it on only one link. This is how we
configure it:
R1(config)#interface virtual-template 1
R1(config-if)#bandwidth 128
R1(config-if)#ip address 192.168.12.1 255.255.255.0
R1(config-if)#fair-queue
R1(config-if)#ppp multilink fragment delay 10
R1(config-if)#ppp multilink interleave
R2(config)#interface virtual-template 1
R2(config-if)#bandwidth 128
R2(config-if)#ip address 192.168.12.2 255.255.255.0
R2(config-if)#fair-queue
R2(config-if)#ppp multilink fragment delay 10
R2(config-if)#ppp multilink interleave
And last but not least, configure the interfaces to use PPP multilink:
Just make sure you enable PPP encapsulation and PPP multilink on the interfaces
and you are done. Now let's see if it's working or not:
Virtual-Access2
Bundle name: R2
Remote Endpoint Discriminator: [1] R2
Local Endpoint Discriminator: [1] R1
Bundle up for 00:00:25, total bandwidth 128, load 1/255
Receive buffer limit 12192 bytes, frag timeout 1000 ms
Interleaving enabled
0/0 fragments/bytes in reassembly list
0 lost fragments, 0 reordered
0/0 discarded fragments/bytes, 0 lost received
0x2 received sequence, 0x2 sent sequence
Member links: 1 (max not set, min not set)
Se0/0, since 00:00:25, 160 weight, 152 frag size
No inactive multilink interfaces
R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
Hardware is Virtual Access interface
Internet address is 192.168.12.1/24
MTU 1500 bytes, BW 128 Kbit/sec, DLY 100000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Open: IPCP
MLP Bundle vaccess, cloned from Virtual-Template1
Vaccess status 0x40, loopback not set
Keepalive set (10 sec)
DTR is pulsed for 5 seconds on reset
Last input 00:01:05, output never, output hang never
Last clearing of "show interface" counters 00:01:05
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output
drops: 0
Queueing strategy: weighted fair
Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Conversations 0/1/32 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
Available Bandwidth 96 kilobits/sec
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
2 packets input, 28 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
2 packets output, 40 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions
Above you can see that PPP multilink is enabled and that we are using interleaving.
If you have any questions or comments let me know!
Introduction to RSVP
R2(config)#interface fa0/1
R2(config-if)#ip rsvp bandwidth 128 64
R3(config)#interface fa0/0
R3(config-if)#ip rsvp bandwidth 128 64
R3(config)#interface fa0/1
R3(config-if)#ip rsvp bandwidth 128 64
R4(config)#interface fa0/0
R4(config-if)#ip rsvp bandwidth 128 64
If you don’t specify the bandwidth then by default RSVP will use up to 75% of the
interface bandwidth for reservations. I’m telling RSVP that it can only use up to 128
kbps for reservations and that the largest reservable flow can be 64 kbps.
Now we'll configure R1 to act as an RSVP host so it will send an RSVP PATH
message:
Above you see the reservation that we configured on R1. Now let’s configure R4 to
respond to this reservation:
R4(config)#ip rsvp reservation-host 192.168.34.4 192.168.12.1 tcp
23 0 ff ?
load Controlled Load Service
rate Guaranteed Bit Rate Service
I can choose between controlled load or guaranteed bit rate. Guaranteed means
the flow will have a bandwidth and delay guarantee. Controlled load will guarantee
the bandwidth but not the delay.
R4(config)#ip rsvp reservation-host 192.168.34.4 192.168.12.1 tcp
23 0 ff rate 64 32
You can see that it has received the reservation from R1. What about R2 and R3?
Above you can see that R2 and R3 also made the reservation. We can also check
RSVP information on the interface level:
Above you can see how R2 reserved 64 kbps on its FastEthernet0/1 interface.
Debugging RSVP
If you really want to see what is going on you should enable a debug, let’s do so on
all routers:
R1,R2,R3,R4#debug ip rsvp
RSVP signalling debugging is on
R1#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.12.1 (on sender host)
RSVP: new path message passed parsing, continue...
RSVP: Triggering outgoing Path refresh
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
66C8D7CC refresh interval = 0mSec
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Path message
to 192.168.12.2
You can see that R1 has received a path message from itself and that it forwards it
towards 192.168.12.2.
R2#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.12.1 (on FastEthernet0/0)
RSVP: new path message passed parsing, continue...
RSVP: Triggering outgoing Path refresh
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
650988D4 refresh interval = 0mSec
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Path message
to 192.168.23.3
R3#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.23.2 (on FastEthernet0/0)
RSVP: new path message passed parsing, continue...
RSVP: Triggering outgoing Path refresh
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
6508EB64 refresh interval = 0mSec
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Path message
to 192.168.34.4
R4#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.34.3 (on FastEthernet0/0)
R4#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
6618082C refresh interval = 30000mSec
RSVP: can't forward Path out received interface
R2 receives the path message from R1 and forwards it towards R3 who will forward
it to R4. Now let's configure R4 to respond:
R4#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (receiver host) from 192.168.34.4
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not
found--new one
RSVP-RESV: Admitting new reservation: 674BE740
RSVP-RESV: Locally created reservation. No admission/traffic
control needed
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: start requesting 64
kbps FF reservation for 192.168.12.1(0) TCP-> 192.168.34.4(80) on
FastEthernet0/0 neighbor 192.168.34.3
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh RESV,
req=674C39E8, refresh interval=0mSec [cleanup timer is not awake]
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Resv message
to 192.168.34.3
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: RESV CONFIRM Message
for 192.168.34.4 (FastEthernet0/0) from 192.168.34.3
R3#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (FastEthernet0/1) from 192.168.34.4
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not
found--new one
RSVP-RESV: Admitting new reservation: 66171920
RSVP-RESV: reservation was installed: 66171920
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: start requesting 64
kbps FF reservation for 192.168.12.1(0) TCP-> 192.168.34.4(80) on
FastEthernet0/0 neighbor 192.168.23.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh RESV,
req=661769B4, refresh interval=0mSec [cleanup timer is not awake]
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Resv message
to 192.168.23.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: RESV CONFIRM Message
for 192.168.34.4 (FastEthernet0/0) from 192.168.23.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending RESV CONFIRM
message to 192.168.34.4
R2#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (FastEthernet0/1) from 192.168.23.3
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not
found--new one
RSVP-RESV: Admitting new reservation: 674B8E00
RSVP-RESV: reservation was installed: 674B8E00
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: start requesting 64
kbps FF reservation for 192.168.12.1(0) TCP-> 192.168.34.4(80) on
FastEthernet0/0 neighbor 192.168.12.1
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh RESV,
req=674BDE94, refresh interval=0mSec [cleanup timer is not awake]
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Resv message
to 192.168.12.1
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: RESV CONFIRM Message
for 192.168.34.4 (FastEthernet0/0) from 192.168.12.1
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending RESV CONFIRM
message to 192.168.23.3
R1#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (FastEthernet0/0) from 192.168.12.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not
found--new one
RSVP-RESV: Admitting new reservation: 66C95AF4
RSVP-RESV: reservation was installed: 66C95AF4
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending RESV CONFIRM
message to 192.168.12.2
Above you can see that each router forwards the RESV message and makes the
reservation for this particular flow. That's all I wanted to show you for now, I hope
this helps you to understand RSVP. If you have any questions feel free to ask.
RSVP will work fine when you need to make a reservation on the link between two
routers, but what if you have a shared segment? An example could be a couple of
routers that are connected to the same half-duplex Ethernet network. These routers
share the bandwidth, so when multiple routers make RSVP reservations it's
possible that we oversubscribe the segment.
The routers should know about all RSVP reservations that are made on this shared
segment and that’s exactly why we have the DSBM (Designated Subnetwork
Bandwidth Manager).
One of the routers on the shared segment will be elected as the DSBM and all other
RSVP routers will proxy their RSVP PATH and RESV messages through the DSBM.
This way we will have centralized admission control and we won't risk
oversubscribing the shared segment.
Besides being in charge of admission control, the DSBM can also distribute other
information to RSVP routers, for example the amount of non-reservable traffic that
is allowed in the shared segment or the average/peak rate and burst size for
non-RSVP traffic.
The election to become the RSVP DSBM uses the following rules:
Just 3 routers connected to the same switch. First we will enable RSVP on all
interfaces:
If you want, you can configure the DSBM to tell other RSVP routers to limit the
reservations:
I’ll set the maximum bandwidth to 2048 kbit. We can also set a number of
parameters for non-RSVP traffic:
Interface: FastEthernet0/0
Local Configuration Current DSBM
IP Address: 192.168.123.1 IP Address: 192.168.123.3
DSBM candidate: no I Am DSBM: no
Priority: 64 Priority: 64
Non Resv Send Limit Non Resv Send Limit
Rate: unlimited Rate: 2147483 Kbytes/sec
Burst: unlimited Burst: 536870 Kbytes
Peak: unlimited Peak: unlimited
Min Unit: unlimited Min Unit: unlimited
Max Unit: unlimited Max Unit: unlimited
R2#show ip rsvp sbm detail
Interface: FastEthernet0/0
Local Configuration Current DSBM
IP Address: 192.168.123.2 IP Address: 192.168.123.3
DSBM candidate: no I Am DSBM: no
Priority: 64 Priority: 64
Non Resv Send Limit Non Resv Send Limit
Rate: unlimited Rate: 2147483 Kbytes/sec
Burst: unlimited Burst: 536870 Kbytes
Peak: unlimited Peak: unlimited
Min Unit: unlimited Min Unit: unlimited
Max Unit: unlimited Max Unit: unlimited
R3#show ip rsvp sbm detail
Interface: FastEthernet0/0
Local Configuration Current DSBM
IP Address: 192.168.123.3 IP Address: 192.168.123.3
DSBM candidate: yes I Am DSBM: yes
Priority: 64 Priority: 64
Non Resv Send Limit Non Resv Send Limit
Rate: unlimited Rate: 2147483 Kbytes/sec
Burst: unlimited Burst: 536870 Kbytes
Peak: unlimited Peak: unlimited
Min Unit: unlimited Min Unit: unlimited
Max Unit: unlimited Max Unit: unlimited
With R3 as the DSBM, it will be in the middle of all RSVP messages. We can test this
by configuring a reservation between R1 and R2:
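The reservation configuration isn't shown here; a sketch using the `ip rsvp sender-host` and `ip rsvp reservation-host` commands (the protocol, ports, bandwidth and burst values are assumptions) could look like this, with R1 acting as the sender and R2 as the receiver:

```
R1(config)#ip rsvp sender-host 192.168.123.2 192.168.123.1 udp 80 0 128 16
R2(config)#ip rsvp reservation-host 192.168.123.2 192.168.123.1 udp 80 0 ff rate 128 16
```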
When we check R3 you can see that it knows about the reservation that we just
configured:
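The actual output isn't included here, but commands along these lines let you inspect the PATH and RESV state that R3 has learned:

```
R3#show ip rsvp sender
R3#show ip rsvp reservation
```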
That’s all I wanted to share about DSBM for now. If you have any questions feel free
to ask!
When you create access-lists or QoS (Quality of Service) policies you normally use
layer 1, 2, 3 and 4 information to match on certain criteria. NBAR (Network Based
Application Recognition) adds application layer intelligence to our Cisco IOS router,
which means we can match and filter based on certain applications.
Let’s say you want to block a certain website like youtube.com. Normally you would
look up the IP addresses that YouTube uses and block those using an access-list, or
perhaps police/shape them in your QoS policies. Using NBAR we can match on the
website addresses instead of IP addresses, which makes life a lot easier. Let’s look at
an example where we use NBAR to block a website (YouTube, for example):
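The class-map configuration isn't shown above; based on the description that follows, it would look something like this sketch (the exact wildcard pattern is an assumption):

```
R1(config)#class-map BLOCKED
R1(config-cmap)#match protocol http host *youtube.com*
```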
First I will create a class-map called “BLOCKED” and I will use match protocol to use
NBAR. As you can see I match on the hostname “youtube.com”. The * is a wildcard
that matches any string of characters, so effectively this will also block all
sub-domains of youtube.com; for example, “subdomain.youtube.com” will be
blocked as well. Now we need to create a policy-map:
R1(config)#policy-map DROP
R1(config-pmap)#class BLOCKED
R1(config-pmap-c)#drop
R1(config-pmap-c)#exit
The policy-map above references our class-map BLOCKED, and any traffic that
matches this class will be dropped. Last but not least, we need to apply the
policy-map to the interface:
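The interface configuration isn't shown; assuming FastEthernet0/1 is the interface facing the Internet (both the interface name and the direction are assumptions here), it could look like:

```
R1(config)#interface FastEthernet0/1
R1(config-if)#service-policy output DROP
```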
I will apply the policy-map to the interface that is connected to the Internet. Now
whenever someone tries to reach youtube.com their traffic will be dropped. You
can verify this on your router using the following command:
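The command output isn't included above; the usual way to check the per-class match counters of a service policy is something like this (the interface name is an assumption):

```
R1#show policy-map interface FastEthernet0/1
```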
Above you see that we have a match for our class-map BLOCKED. Apparently
someone tried to reach youtube.com. The class-map class-default matches all other
traffic and it is permitted.
In case you were wondering... you can only use NBAR to match HTTP traffic, not
HTTPS. The reason for this is that NBAR matches on the HTTP GET request, which is
encrypted when you use HTTPS. Take a look at the following Wireshark capture
for HTTP:
Above you see the HTTP GET request for youtube.com in plaintext. This is what
NBAR looks at and matches on. Now let me show you the HTTPS capture:
Above you see a Wireshark capture of HTTPS traffic between my computer and
youtube.com. It's impossible for NBAR to look into these SSL packets and see what
website you are requesting. In this case your only options are to use a proxy server
or to block the IP addresses using an access-list.
This is how you can block websites using your normal Cisco IOS router. If you have
any questions just leave a comment!