01 QoS Configuration
Table of Contents
Configuring PQ........................................................................................................................................4-8
PQ Configuration Procedure ...........................................................................................................4-8
PQ Configuration Example..............................................................................................................4-9
Configuring CQ .....................................................................................................................................4-10
Configuration Procedure................................................................................................................4-11
CQ Configuration Example............................................................................................................4-12
Configuring WFQ ..................................................................................................................................4-12
Configuration Procedure................................................................................................................4-12
WFQ Configuration Example.........................................................................................................4-13
Configuring CBQ ...................................................................................................................................4-13
Configuring the Maximum Available Interface Bandwidth.............................................................4-14
Defining a Class ............................................................................................................................4-16
Defining a Traffic Behavior ............................................................................................................4-16
Defining a QoS Policy....................................................................................................................4-22
Applying the QoS Policy ................................................................................................................4-22
CBQ Configuration Example .........................................................................................................4-24
Displaying and Maintaining CBQ...................................................................................................4-25
Configuring RTP Priority Queuing.........................................................................................................4-25
Configuration Procedure................................................................................................................4-25
RTP Priority Queuing Configuration Example ...............................................................................4-26
Configuring QoS Tokens.......................................................................................................................4-27
QoS Token Configuration Procedure ............................................................................................4-27
QoS Token Configuration Example...............................................................................................4-27
Configuring Packet Information Pre-Extraction.....................................................................................4-28
Configuration Procedure................................................................................................................4-28
Configuration Example ..................................................................................................................4-28
Configuring Local Fragment Pre-drop...................................................................................................4-28
Configuration Procedure................................................................................................................4-28
Configuration Example ..................................................................................................................4-29
Example 1........................................................................................................................................5-7
Example 2........................................................................................................................................5-8
6 Congestion Avoidance..............................................................................................................................6-1
Congestion Avoidance Overview ............................................................................................................6-1
Introduction to WRED Configuration.......................................................................................................6-3
Configuration Methods ....................................................................................................................6-3
Introduction to WRED Parameters ..................................................................................................6-3
Configuring WRED on an Interface.........................................................................................................6-3
Configuration Prerequisites .............................................................................................................6-3
Configuration Procedure..................................................................................................................6-3
Configuration Example ....................................................................................................................6-4
Applying a WRED Table on an Interface ................................................................................................6-5
Configuration Prerequisites .............................................................................................................6-5
Configuration Procedure..................................................................................................................6-5
Displaying and Maintaining WRED .........................................................................................................6-6
WRED Configuration Example................................................................................................................6-6
1 QoS Overview
Introduction to QoS
Quality of Service (QoS) is a concept concerning service demand and supply. It reflects the ability to
meet customer needs. Generally, QoS focuses on improving services under certain conditions rather
than grading services precisely.
In an internet, QoS evaluates the ability of the network to forward packets of different services. The
evaluation can be based on different criteria because the network may provide various services.
Generally, QoS refers to the ability to provide improved service by solving the core issues such as delay,
jitter, and packet loss ratio in the packet forwarding process.
Mission-critical applications, such as transactions and Telnet, may not require high bandwidth but do require low delay and preferential service during congestion.
Emerging applications demand higher service performance from IP networks. Better network services during packet forwarding are required, such as dedicated bandwidth, a reduced packet loss ratio, congestion management and avoidance, traffic regulation, and packet precedence setting. To meet these requirements, networks must provide improved services.
Causes
Congestion easily occurs in complex packet-switched environments on the Internet. The following figure shows two common cases:
Figure 1-1 Traffic congestion causes (figure: congestion on interfaces with different speeds; congestion on interfaces with the same speed)
- The traffic enters a device from a high-speed link and is forwarded over a low-speed link.
- Packet flows enter a device from several interfaces at the same rate and are forwarded out of one interface at the same rate.
When traffic arrives at the line speed, a bottleneck is created at the outgoing interface causing
congestion.
Besides bandwidth bottlenecks, congestion can be caused by resource shortage in various forms such
as insufficient processor time, buffer, and memory, and by network resource exhaustion resulting from
excessive arriving traffic in certain periods.
Impacts
To improve the service performance of your network, you must address congestion issues.
Countermeasures
A simple solution to congestion is to increase network bandwidth, but this cannot solve all the problems that cause congestion. A more effective solution is to provide differentiated services for different applications through traffic control and resource allocation, so that resources are used more appropriately. During resource allocation and traffic control, the direct and indirect factors that might cause network congestion should be controlled to reduce the probability of congestion. Once congestion occurs, resources should be allocated according to the characteristics and demands of applications to minimize the effects of congestion on QoS.
As shown in Figure 1-2, traffic classification, traffic policing (TP), traffic shaping (TS), congestion
management, and congestion avoidance are the foundations for a network to provide differentiated
services. Mainly they implement the following functions:
- Traffic classification uses certain match criteria to organize packets with different characteristics into different classes, and is the prerequisite for providing differentiated services. Traffic classification is usually applied in the inbound direction of a port.
- Traffic policing polices particular flows entering a device according to configured specifications and is usually applied in the inbound direction of a port. When a flow exceeds the specification, restriction or penalty measures can be taken to prevent overconsumption of network resources and protect the commercial interests of the carrier.
- Traffic shaping proactively adjusts the output rate of traffic to adapt traffic to the network resources of the downstream device and avoid unnecessary packet drops and congestion. Traffic shaping is usually applied in the outbound direction of a port.
- Congestion management provides measures for handling resource competition during network congestion and is usually applied in the outbound direction of a port. Generally, it stores packets in queues and then uses a scheduling algorithm to arrange the forwarding sequence of the packets.
- Congestion avoidance monitors the usage of network resources and is usually applied in the outbound direction of a port. As congestion worsens, it actively reduces the amount of traffic by dropping packets.
Among these traffic management technologies, traffic classification is the basis for providing
differentiated services by classifying packets with certain match criteria. Traffic policing, traffic shaping,
congestion management, and congestion avoidance manage network traffic and resources in different
ways to realize differentiated services.
Normally, QoS provides the following functions:
- Traffic classification
- Access control
- Traffic policing and traffic shaping
- Congestion management
- Congestion avoidance
When configuring traffic classification, traffic policing, and traffic shaping, go to these sections for
information you are interested in:
- Traffic Classification Overview
- Traffic Policing and Traffic Shaping Overview
- Traffic Evaluation and Token Bucket
- TP, GTS and Line Rate Configuration
- Displaying and Maintaining TP, GTS and Line Rate
- TP and GTS Configuration Example
Traffic classification organizes packets with different characteristics into different classes using match
criteria. It is the basis for providing differentiated services.
You can define match criteria based on the IP precedence bits in the type of service (ToS) field of the IP
packet header, or based on other header information such as IP addresses, MAC addresses, IP
protocol field and port numbers. Contents other than the header information in packets are rarely used
for traffic classification. You can define a class for packets with a common quintuple (source address,
source port number, protocol number, destination address and destination port number), or for all
packets to a certain network segment.
When packets are classified on the network boundary, the precedence bits in the ToS field of the IP packet header are generally reset. In this way, IP precedence can be adopted as a classification criterion for the packets within the network. IP precedence can also be used in queuing to prioritize traffic. The downstream network can either adopt the classification results from its upstream network or classify the packets again according to its own criteria.
To provide differentiated services, traffic classes must be associated with certain traffic control actions or resource allocation actions. Which traffic control actions to adopt depends on the current phase and the resources of the network. For example, traffic policing is adopted to police packets when they enter the network; GTS is performed on packets when they flow out of a node; queue scheduling is performed when congestion happens; and congestion avoidance measures are taken when the congestion deteriorates.
IP Precedence
As shown in Figure 2-1, the ToS field of the IP header contains 8 bits: the first three bits (0 to 2)
represent IP precedence from 0 to 7; the following 4 bits (3 to 6) represent a ToS value from 0 to 15. In
RFC 2474, the ToS field of the IP header is redefined as the DS field, where a DiffServ code point
(DSCP) precedence is represented by the first 6 bits (0 to 5) and is in the range 0 to 63. The remaining
2 bits (6 and 7) are reserved.
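The bit layout above can be checked with a short Python sketch (illustrative only; note that the text numbers bit 0 as the most significant bit of the ToS byte, and this sketch follows that convention):

```python
def tos_fields(tos: int) -> dict:
    """Break an IPv4 ToS byte into the fields described above.

    Bit 0 is the most significant bit, matching the text's numbering.
    """
    return {
        "precedence": (tos >> 5) & 0x7,   # bits 0-2: IP precedence, 0-7
        "tos_value":  (tos >> 1) & 0xF,   # bits 3-6: ToS value, 0-15
        "dscp":       (tos >> 2) & 0x3F,  # bits 0-5 of the DS field, 0-63
    }

# Example: a ToS byte of 0xB8 carries IP precedence 5 and DSCP 46.
print(tos_fields(0xB8))  # precedence 5, tos_value 12, dscp 46
```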
A token bucket can be considered as a container holding a certain number of tokens. The system puts
tokens into the bucket at a set rate. When the token bucket is full, the extra tokens will overflow.
The evaluation for the traffic specification is based on whether the number of tokens in the bucket can
meet the need of packet forwarding. If the number of tokens in the bucket is enough to forward the
packets (generally, one token is associated with a 1-bit forwarding authority), the traffic conforms to the
specification, and the traffic is called conforming traffic; otherwise, the traffic does not conform to the
specification, and the traffic is called excess traffic.
A token bucket has the following configurable parameters:
- Mean rate: the rate at which tokens are put into the bucket, namely, the permitted average rate of traffic. It is usually set to the committed information rate (CIR).
- Burst size: the capacity of the token bucket, namely, the maximum traffic size permitted in each burst. It is usually set to the committed burst size (CBS). The configured burst size must be greater than the maximum packet size.
One evaluation is performed on each arriving packet. In each evaluation, if the number of tokens in the
bucket is enough, the traffic conforms to the specification and the corresponding tokens for forwarding
the packet are taken away; if the number of tokens in the bucket is not enough, it means that too many
tokens have been used and the traffic is excessive.
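The evaluation procedure can be sketched in a few lines of Python (an illustrative model only, not the device implementation; here one token is taken to grant the authority to forward one byte):

```python
class TokenBucket:
    """Minimal token bucket: tokens accumulate at `cir` (tokens per second)
    up to the capacity `cbs`; extra tokens overflow and are discarded."""

    def __init__(self, cir: float, cbs: float):
        self.cir, self.cbs = cir, cbs
        self.tokens = cbs          # the bucket starts full
        self.last = 0.0

    def conforms(self, size: float, now: float) -> bool:
        # Refill for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.cbs, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if self.tokens >= size:    # enough tokens: conforming traffic
            self.tokens -= size    # the corresponding tokens are taken away
            return True
        return False               # excess traffic; no tokens are removed

bucket = TokenBucket(cir=1000, cbs=1500)   # 1000 tokens/s, 1500-token burst
print(bucket.conforms(1500, now=0.0))  # a burst up to CBS conforms -> True
print(bucket.conforms(1500, now=0.5))  # only 500 tokens refilled -> False
```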
Complicated evaluation
You can set two token buckets to evaluate more complicated conditions and implement more flexible regulation policies. For example, traffic policing can use these parameters:
- CIR
- CBS
- Excess burst size (EBS)
Tokens are put into both buckets at the rate of CIR, and their sizes are CBS and EBS respectively (the two buckets are called the C bucket and the E bucket for short), representing different permitted burst levels. In each evaluation, different regulation policies can be implemented for different conditions, including “enough tokens in the C bucket”, “insufficient tokens in the C bucket but enough tokens in the E bucket”, and “insufficient tokens in both the C bucket and the E bucket”.
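The three conditions can be sketched as follows (an illustrative Python model; the verdict labels, and the choice of drawing tokens from only one bucket per packet, are assumptions here, since the action taken in each condition is configurable):

```python
def evaluate(c_tokens: int, e_tokens: int, size: int):
    """Classify a packet of `size` tokens against the C and E buckets,
    mirroring the three conditions in the text. Returns the verdict and
    the updated token counts."""
    if c_tokens >= size:                 # enough tokens in the C bucket
        return "conform", c_tokens - size, e_tokens
    if e_tokens >= size:                 # C bucket short, E bucket suffices
        return "partially conform", c_tokens, e_tokens - size
    return "excess", c_tokens, e_tokens  # neither bucket suffices

print(evaluate(2000, 4000, 1500))  # ('conform', 500, 4000)
print(evaluate(500, 4000, 1500))   # ('partially conform', 500, 2500)
print(evaluate(500, 1000, 1500))   # ('excess', 500, 1000)
```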
Huawei Proprietary and Confidential 2-3
Copyright © Huawei Technologies Co., Ltd
QoS Configuration Operation Manual, QoS Volume
2 Traffic Classification, Traffic Policing and Traffic Shaping Configuration
Traffic policing
The typical application of traffic policing is to supervise the specification of certain traffic entering a
network and limit it within a reasonable range, or to "discipline" the extra traffic. In this way, the network
resources and the interests of the carrier are protected. For example, you can limit bandwidth
consumption of HTTP packets to less than 50% of the total. If the traffic of a certain session exceeds the
limit, traffic policing can drop the packets or reset the IP precedence of the packets.
Traffic policing is widely used in policing traffic entering the networks of internet service providers (ISPs).
It can classify the policed traffic and perform pre-defined policing actions based on different evaluation
results. These actions include:
- Forwarding the packets whose evaluation result is “conforming”.
- Dropping the packets whose evaluation result is “excess”.
- Modifying the IP precedence of the packets whose evaluation result is “conforming” and forwarding them.
- Modifying the IP precedence of the packets whose evaluation result is “conforming” and delivering them to the next level of traffic policing.
- Entering the next-level policing (you can set multiple traffic policing levels, with each level focusing on specific objects).
Traffic shaping
Traffic shaping provides measures to adjust the rate of outbound traffic actively. A typical traffic shaping
application is to limit the local traffic output rate according to the downstream traffic policing parameters.
The difference between TP and GTS is that packets that TP would drop are instead cached in a buffer or queue by GTS, as shown in Figure 2-3. When there are enough tokens in the token bucket, these cached packets are sent out at an even rate. Traffic shaping may therefore introduce additional delay, while traffic policing does not.
Figure 2-3 Diagram for GTS (figure: packets pass through packet classification into a queue; a token bucket governs when queued packets are sent; packets may be dropped)
For example, in Figure 2-4, Router A sends packets to Router B. Router B performs TP on packets from
Router A and drops packets exceeding the limit.
You can perform traffic shaping for the packets on the outgoing interface of Router A to avoid
unnecessary packet loss. Packets exceeding the limit are cached in Router A. Once resources are
released, traffic shaping takes out the cached packets and sends them out. In this way, all the traffic
sent to Router B conforms to the traffic specification defined in Router B.
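The caching behavior can be illustrated with a small per-second simulation (a sketch, not the device implementation; packet sizes, CIR, and CBS are in arbitrary token units):

```python
from collections import deque

def shape(arrivals, cir, cbs):
    """GTS sketch: packets that do not find enough tokens are cached in a
    queue instead of being dropped, and leave at an even rate once tokens
    accumulate. `arrivals[t]` lists the packet sizes arriving at second t."""
    tokens, queue, sent = cbs, deque(), []
    for t, packets in enumerate(arrivals):
        tokens = min(cbs, tokens + cir) if t else tokens  # refill each second
        queue.extend(packets)                             # cache new arrivals
        out = []
        while queue and tokens >= queue[0]:               # send while tokens last
            size = queue.popleft()
            tokens -= size
            out.append(size)
        sent.append(out)
    return sent

# Five packets of 100 units arrive at once; CIR 200/s, CBS 200:
# two are sent immediately, then the cached rest drain at two per second.
print(shape([[100] * 5, [], []], cir=200, cbs=200))
# [[100, 100], [100, 100], [100]]
```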
Line rate
The line rate of a physical interface specifies the maximum rate for forwarding packets (including critical
packets).
Line rate also uses token buckets for traffic control. With line rate configured on an interface, all packets
to be sent through the interface are firstly handled by the token bucket at line rate. If there are enough
tokens in the token bucket, packets can be forwarded; otherwise, packets are put into QoS queues for
congestion management. In this way, the traffic passing the physical interface is controlled.
Figure 2-5 Line rate implementation
In the token bucket approach to traffic control, bursty traffic can be transmitted as long as enough tokens are available in the token bucket; if tokens are inadequate, packets cannot be transmitted until the required number of tokens is generated. Traffic is thus restricted to the token generation rate, limiting the traffic rate while still allowing bursts.
Compared with traffic policing, line rate can only limit traffic rate on a physical interface. Since traffic
policing operates at the IP layer, it can limit the rate of different flows on a port. However, traffic policing
ignores packets not processed by the IP layer. To limit the rate of all the packets on interfaces, using line
rate is easier.
Configuring CAR-list-based traffic policing:
- Configure a CAR list
- Apply CAR policies to the specified interface
Configuring ACL-based traffic policing:
- Configure an ACL
- Apply CAR policies to the specified interface
Configuring traffic policing for all traffic:
- Apply CAR policies to the specified interface
Configuring ACL-based GTS:
- Configure an ACL
- Configure GTS on interfaces
Traffic policing configuration involves two tasks: defining the characteristics of the packets to be policed, and defining policing policies for the matched packets.
Configure traffic policing on Ethernet 1/1: rate of packets sent on Ethernet 1/1 cannot exceed 1 Mbps,
and excess packets are dropped.
# Enter system view.
<Sysname> system-view
Configuring GTS
To display GTS configuration information on an interface, use display qos gts interface [ interface-type interface-number ]. This command is optional and available in any view.
Configure GTS on Ethernet 1/1, shaping the packets when the sending rate exceeds 500 kbps.
# Enter system view.
<Sysname> system-view
# Configure GTS on Ethernet 1/1.
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] qos gts any cir 500
Configuration procedure
The line rate of a physical interface specifies the maximum rate of incoming packets or outgoing
packets.
Follow these steps to configure the line rate:
To display CAR list information, use display qos carl [ carl-index ]. This command is available in any view.
Configuration procedure
1) Configure Router A
# Configure GTS on Ethernet 1/3 of Router A, shaping the packets when the sending rate exceeds 500
kbps to decrease the packet loss ratio of Ethernet 1/1 of Router B.
<RouterA> system-view
[RouterA] interface ethernet 1/3
[RouterA-Ethernet1/3] qos gts any cir 500
[RouterA-Ethernet1/3] quit
When configuring a QoS policy, go to these sections for information you are interested in:
- QoS Policy Overview
- Configuring a QoS Policy
- Applying the QoS Policy
- Configuring QoS Policy-Based Traffic Rate Measuring Interval
- Displaying and Maintaining QoS Policies
Class
Traffic behavior
Policy
Configuration Prerequisites
Defining a Class
To define a class, you need to specify a name for it and then configure match criteria in class view.
Follow these steps to define a class:
To define a traffic behavior, you should first create a traffic behavior name and then configure attributes
in traffic behavior view.
Follow these steps to define a traffic behavior:
Defining a Policy
A policy defines the mapping between a class and a traffic behavior (a set of QoS actions).
In a policy, multiple class-to-traffic-behavior mappings can be configured, and these mappings are
executed according to the order configured.
Follow these steps to define a policy:
If an ACL is referenced by a QoS policy for defining traffic match criteria, the operation of the QoS policy varies by interface or PVC:
- If the QoS policy is applied to a software interface or PVC and the match mode of the if-match clause is deny, the if-match clause for matching the ACL does not take effect and packets go to the next match criterion.
- If the QoS policy is applied to a hardware interface, packets matching the ACL are organized as a class and the behavior defined in the QoS policy applies to the class regardless of whether the match mode of the if-match clause is deny or permit.
Network requirements
Configure a QoS policy test_policy to limit the rate of packets with IP precedence 6 to 100 kbps.
Configuration procedure
# Create a class test_class and configure it to match packets with IP precedence 6.
[Sysname] traffic classifier test_class
[Sysname-classifier-test_class] if-match ip-precedence 6
[Sysname-classifier-test_class] quit
# Create a traffic behavior test_behavior and configure the action of limiting the traffic rate to 100 kbps for it.
[Sysname] traffic behavior test_behavior
[Sysname-behavior-test_behavior] car cir 100
[Sysname-behavior-test_behavior] quit
# Create a QoS policy test_policy and associate the class test_class with the traffic behavior test_behavior.
[Sysname] qos policy test_policy
[Sysname-qospolicy-test_policy] classifier test_class behavior test_behavior
You can modify the classification rules, traffic behaviors, and classifier-behavior associations of a QoS
policy already applied.
A policy can be applied to multiple interfaces or PVCs. Only one policy can be applied in one direction
(inbound or outbound) of an interface or PVC.
Configuration procedure
- QoS policies can be applied to all physical interfaces except interfaces encapsulated with X.25 or LAPB.
- If a QoS policy is applied in the outbound direction of an interface or PVC, the QoS policy does not affect local packets. Local packets are the important protocol packets that maintain normal operation of the device; QoS must not process them, to avoid dropping them. Commonly used local packets include link maintenance, IS-IS, OSPF, RIP, BGP, LDP, RSVP, and SSH packets.
Configuration example
Enter interface view: interface interface-type interface-number
- The QoS policy-based traffic rate measuring interval of an ATM PVC is the same as that of the ATM interface.
- The QoS policy-based traffic rate measuring interval of an FR DLCI is the same as that of the FR interface.
- The QoS policy-based traffic rate measuring interval of a subinterface is the same as that of the main interface.
When configuring congestion management, go to these sections for information you are interested in:
- Congestion Management Overview
- Configuring FIFO
- Configuring PQ
- Configuring CQ
- Configuring WFQ
- Configuring CBQ
- Configuring RTP Priority Queuing
- Configuring QoS Tokens
- Configuring Packet Information Pre-Extraction
- Configuring Local Fragment Pre-drop
In general, congestion management adopts queuing technology: the system classifies traffic into queues using a queuing algorithm and then schedules the traffic out using a precedence algorithm. Each queuing algorithm addresses a particular network traffic problem and has significant impacts on bandwidth resource assignment, delay, and jitter.
Several common queue-scheduling mechanisms are introduced here.
FIFO
(Figure 4-1: packets to be sent through this interface enter a single sending queue and are sent out the interface in order of arrival)
As shown in Figure 4-1, First in First Out (FIFO) queuing determines the order of forwarding packets
according to the arrival time. On a device, the resources assigned for the packets are based on the
arrival time of the packets and the current load status of the device. The best-effort service model
adopts FIFO queuing.
If there is only one FIFO output/input queue on each port of a device, malicious applications may
occupy all network resources and seriously affect mission-critical data transmission.
FIFO queuing is adopted by default.
Priority queuing
(figure: packets to be sent through this interface are classified into queues such as high, middle, and normal, and sent out the interface in priority order)
Priority queuing is designed for mission-critical applications. The key feature of mission-critical
applications is that they require preferential service to reduce the response delay when congestion
occurs. Priority queuing can flexibly determine the order of forwarding packets by network protocol (for
example, IP and IPX), incoming interface, packet length, source/destination address, and so on. Priority
queuing classifies packets into four queues: top, middle, normal, and bottom, in descending priority
order. By default, packets are assigned to the normal queue.
Priority queuing schedules the four queues strictly according to the descending order of priority. It sends
packets in the queue with the highest priority first. When the queue with the highest priority is empty, it
sends packets in the queue with the second highest priority. In this way, you can assign the
mission-critical packets to the high priority queue to ensure that they are always served first. The
common service packets are assigned to the low priority queues and transmitted when the high priority
queues are empty.
The disadvantage of priority queuing is that packets in the lower priority queues are starved if the higher priority queues stay occupied for a long time.
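The strict scheduling order described above can be sketched as follows (illustrative Python, not device code):

```python
def pq_dequeue(queues: dict):
    """Strict priority scheduling: always serve the highest-priority
    non-empty queue, in the order top, middle, normal, bottom."""
    for name in ("top", "middle", "normal", "bottom"):
        if queues[name]:
            return queues[name].pop(0)
    return None  # all queues empty

queues = {"top": ["t1"], "middle": ["m1"], "normal": ["n1", "n2"], "bottom": []}
order = [pq_dequeue(queues) for _ in range(4)]
print(order)  # ['t1', 'm1', 'n1', 'n2']
```

Note that if new packets keep arriving in the top queue, the lower queues are never served, which is exactly the starvation problem mentioned above.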
Custom queuing
CQ organizes packets into 16 classes (corresponding to 16 queues) by certain rules. A certain class of
packets enters the corresponding custom queue according to FIFO queuing.
Queues 1 through 16 are customer queues, as shown in Figure 4-3. You can define traffic classification
rules and assign a percentage of interface/PVC bandwidth for each of these 16 customer queues.
During a cycle of queue scheduling, packets in the system queue are sent preferentially till the system
queue is empty. Then round robin queue scheduling is performed for the 16 customer queues, that is, a
certain number of packets (based on the percentage of interface bandwidth assigned for each queue)
are taken out from a queue and forwarded in the ascending order of queue 1 to queue 16. In CQ,
packets of different applications are assigned with different bandwidths. In this way, mission-critical
packets are assigned with more bandwidth, and at the same time, normal packets are also assigned
with certain bandwidth. By default, packets are assigned to queue 1.
Another advantage of CQ is that bandwidth can be assigned according to application needs, so CQ suits applications with special bandwidth requirements. Though round-robin scheduling is performed for the 16 customer queues, no fixed service time segment is assigned to each queue: when certain classes have no packets, the bandwidth allocated to them is shared by the classes that do.
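One scheduling cycle of the customer queues can be sketched as follows (an illustrative Python model; the per-queue byte budgets derived from the bandwidth percentages are assumptions, and the system queue is omitted):

```python
def cq_cycle(queues, quotas):
    """One round-robin cycle over the customer queues: queue i may send
    up to quotas[i] bytes in its turn; an empty queue simply yields its
    turn, so the active queues effectively gain its bandwidth."""
    sent = []
    for i, queue in enumerate(queues):
        budget = quotas[i]
        while queue and budget >= queue[0]:
            budget -= queue[0]
            sent.append((i + 1, queue.pop(0)))  # (queue number, bytes sent)
    return sent

queues = [[600, 600], [400], []]                # customer queues 1-3 (of 16)
print(cq_cycle(queues, quotas=[1000, 500, 500]))
# [(1, 600), (2, 400)]
```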
Before WFQ is introduced, you need to understand fair queuing (FQ). FQ is designed to share network resources fairly, reducing the delay and jitter of all traffic. FQ takes the following into consideration:
- Different queues have fair dispatching opportunities, balancing delay among streams.
- Short packets and long packets are scheduled fairly: if both are present in queues, statistically the short packets are scheduled preferentially to reduce overall inter-packet jitter.
Compared with FQ, WFQ takes weights into account when determining the queue scheduling order.
Statistically, WFQ gives high priority traffic more scheduling opportunities than low priority traffic. WFQ
can automatically classify traffic according to the “session” information of traffic (protocol type, TCP or
UDP source/destination port numbers, source/destination IP addresses, IP precedence bits in the ToS
field, etc), and try to provide as many queues as possible so that each traffic flow can be put into these
queues to balance the delay of every traffic flow on a whole. When dequeuing packets, WFQ assigns
the outgoing interface bandwidth to each traffic flow by the precedence. The higher precedence value a
traffic flow has, the more bandwidth it gets.
For example, assume that there are five flows in the current interface, with the precedence being 0, 1, 2,
3, and 4 respectively. The total bandwidth quota is the sum of all the (precedence value + 1)s, that is, 1
+ 2 + 3 + 4 + 5 = 15.
The bandwidth percentage assigned to each flow is (precedence value of the flow + 1)/total bandwidth
quota. The bandwidth percentages for flows are 1/15, 2/15, 3/15, 4/15, and 5/15 respectively.
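The calculation above can be expressed directly (illustrative Python, reproducing the example's numbers):

```python
def wfq_shares(precedences):
    """Bandwidth share per flow: (precedence + 1) / total quota, where the
    total quota is the sum of (precedence + 1) over all flows, as in the
    example above."""
    total = sum(p + 1 for p in precedences)
    return {p: (p + 1) / total for p in precedences}

shares = wfq_shares([0, 1, 2, 3, 4])  # total quota 1 + 2 + 3 + 4 + 5 = 15
# The flow with precedence 4 gets 5/15 of the bandwidth, precedence 0 gets 1/15.
print(shares)
```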
Because WFQ can balance the delay and jitter of every flow when congestion occurs, it is applied in several other features. For example, WFQ is adopted in the assured forwarding (AF) services of the Resource Reservation Protocol (RSVP), and in Generic Traffic Shaping (GTS), WFQ is used to schedule buffered packets.
CBQ
Class-based queuing (CBQ) extends WFQ by supporting user-defined classes. CBQ assigns an
independent reserved FIFO queue for each user-defined class to buffer data of the class. In the case of
network congestion, CBQ assigns packets to queues by user-defined traffic classification rules. It is
necessary to perform the congestion avoidance mechanism (tail drop or weighted random early
detection (WRED)) and bandwidth restriction check before packets are enqueued. When being
dequeued, packets are scheduled by WFQ.
4-4 Huawei Proprietary and Confidential
Copyright © Huawei Technologies Co., Ltd
QoS Configuration QoS Volume
Operation Manual 4 Congestion Management Configuration
CBQ provides an emergency queue to enqueue emergent packets. The emergency queue is a FIFO
queue without bandwidth restriction. However, delay sensitive flows like voice packets may not be
transmitted timely in CBQ since packets are fairly treated. To solve this issue, Low Latency Queuing
(LLQ) was introduced to combine PQ and CBQ to transmit delay sensitive flows like voice packets
preferentially.
When defining traffic classes for LLQ, you can configure a class of packets to be transmitted
preferentially. Such a class is called a priority class. The packets of all priority classes are assigned to
the same priority queue. It is necessary to check bandwidth restriction of each class of packets before
the packets are enqueued. During the dequeuing operation, packets in the priority queue are
transmitted first. WFQ is used to dequeue packets in the other queues.
In order to reduce the delay of the other queues except the priority queue, LLQ assigns the maximum
available bandwidth for each priority class. The bandwidth value is used to police traffic in the case of
congestion. In the case of no congestion, a priority class can use more than the bandwidth assigned to
it. In the case of congestion, the packets of each priority class exceeding the assigned bandwidth are
discarded. LLQ can also specify burst-size.
The system matches packets with classification rules in the following order:
z Match packets with priority classes and then the other classes.
z Match packets with priority classes in the order configured.
z Match packets with other classes in the order configured.
z Match packets with classification rules in a class in the order configured.
Real-time transport protocol (RTP) priority queuing is a simple queuing technology designed to
guarantee QoS for real-time services (including voice and video services). It assigns RTP voice or video
packets to high-priority queues for preferential sending, thus minimizing delay and jitter and ensuring
QoS for voice or video services sensitive to delay.
Figure 4-5 RTP queuing
As shown in Figure 4-5, RTP priority queuing assigns RTP packets to a high-priority queue. An RTP
packet is a UDP packet with an even destination port number in a configurable range. RTP priority
queuing can be used in conjunction with any queuing (such as, FIFO, PQ, CQ, WFQ and CBQ), while it
always has the highest priority. Since LLQ of CBQ can also be used to guarantee real-time service data
transmission, it is not recommended to use RTP priority queuing in conjunction with CBQ.
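The RTP matching rule described above (a UDP packet whose destination port is even and falls within a configurable range) can be sketched as a predicate. The function name and default bounds are illustrative; the 16384 to 32767 range comes from the RTP configuration example later in this chapter:

```python
def is_rtp_packet(is_udp, dst_port, start_port=16384, end_port=32767):
    """RTP priority queuing matches UDP packets whose destination
    port is even and falls within the configured port range."""
    return is_udp and start_port <= dst_port <= end_port and dst_port % 2 == 0
```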
Breaking through the single congestion management policy of FIFO used by traditional IP devices, the
current device provides all the congestion management technologies mentioned above to offer
powerful QoS capabilities, meeting the QoS requirements of different applications. The following
table compares these queuing technologies:
Type: FIFO
Number of queues: 1
Advantages:
z No need to configure, easy to use
z Easy to operate, low delay
Disadvantages:
z All packets are treated equally. The available bandwidth, delay and drop probability are
determined by the arrival order of the packets.
z No restriction on non-cooperative data sources (that is, flows without a flow control mechanism,
UDP for example), resulting in bandwidth loss for cooperative data sources such as TCP.
z No delay guarantee for time-sensitive real-time applications, such as VoIP.

Type: PQ
Number of queues: 4
Advantages:
z Provides absolute bandwidth and delay guarantees for real-time and mission-critical applications
such as VoIP.
Disadvantages:
z Needs to be configured; low processing speed.
z If there is no restriction on the bandwidth assigned to high-priority packets, low-priority packets
may fail to get bandwidth.

Type: CQ
Number of queues: 16
Advantages:
z Provides different bandwidth percentages for different applications.
z If packets of certain classes do not exist, it can increase the bandwidth for the existing packets.
Disadvantages:
z Needs to be configured; low processing speed.

Type: WFQ
Number of queues: configurable
Advantages:
z Easy to configure.
z Provides a bandwidth guarantee for packets from cooperative (interactive) sources (such as TCP
packets).
z Reduces jitter.
z Reduces the delay for interactive applications with a small amount of data.
z Assigns different bandwidths to traffic flows with different priorities.
z When the number of traffic classes decreases, it can automatically increase the bandwidth for the
existing classes.
Disadvantages:
z The processing speed is faster than that of PQ and CQ but slower than that of FIFO.

Type: CBQ
Number of queues: configurable (0 to 64)
Advantages:
z Flexibly classifies traffic based on various rules and provides different queue scheduling
mechanisms for expedited forwarding (EF), assured forwarding (AF) and best-effort (BE) services.
z Provides a highly precise bandwidth guarantee and queue scheduling on the basis of AF service
weights for various AF services.
z Provides absolutely preferential queue scheduling for the EF service to meet the delay
requirement of real-time data; overcomes the disadvantage of PQ that some low-priority queues
are not serviced, by restricting the high-priority traffic.
z Provides WFQ scheduling for best-effort traffic (the default class).
Disadvantages:
z The system overhead is large.
If the burst traffic is too heavy, you can increase the queue length to make queue scheduling more
accurate.
Configuring FIFO
FIFO is the default queue scheduling mechanism for an interface or PVC, and the FIFO queue size is
configurable.
Follow these steps to set the FIFO queue size:
z Enter interface view: interface interface-type interface-number
z Or enter PVC view: interface atm interface-number, and then pvc vpi/vci
z Set the FIFO queue size (required, 75 by default): qos fifo queue-length queue-length
For the queuing function to take effect on Tunnel interfaces, sub-interfaces, or VT/dialer interfaces
using PPPoE, PPPoA, PPPoEoA, or PPPoFR at the data link layer, you must enable line rate on them.
Configuring PQ
You can define multiple rules for a priority queue list (PQL) and apply the list to an interface or PVC.
When a packet arrives at the interface or PVC, the system matches the packet with each rule in the
order configured. If a match is found, the packet is assigned to the corresponding queue and the match
procedure is complete. If the packet cannot match any rule, the packet is assigned to the default queue
normal.
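This first-match-wins behavior can be modeled as follows; the rule predicates and queue names are illustrative, while the default queue normal comes from the text above:

```python
def pq_classify(packet, rules, default_queue="normal"):
    """Match a packet against PQL rules in configured order;
    the first matching rule assigns the queue."""
    for predicate, queue in rules:
        if predicate(packet):
            return queue
    return default_queue  # no rule matched

# Hypothetical rules: one source address per queue.
rules = [
    (lambda p: p["src"] == "1.1.1.1", "top"),
    (lambda p: p["src"] == "1.1.1.2", "bottom"),
]
```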
PQ Configuration Procedure
Apply a PQ list to an interface or PVC. For an interface or PVC, a newly applied PQ list overwrites the
previous one.
Follow these steps to configure PQ:
z PQ is applicable to all physical interfaces except interfaces using the X.25 or LAPB protocol at the
data link layer.
z For the queuing function to take effect on Tunnel interfaces, sub-interfaces, or VT/dialer interfaces
using PPPoE, PPPoA, PPPoEoA, or PPPoFR at the data link layer, you must enable line rate on
them.
PQ Configuration Example
Network requirements
As shown in the figure below, both Server and Host A send data to Host B through Router A. Suppose
Server sends critical packets and Host A sends non-critical packets. Congestion may occur on Serial
1/1 and result in packet loss because the rate of the incoming interface Ethernet 1/1 is greater than that
of the outgoing interface Serial 1/1 on Router A. It is required that the critical packets from Server be
transmitted preferentially when congestion occurs in the network.
Figure 4-6 Network diagram for PQ
Configuration procedure
Configure Router A:
# Configure ACLs to match the packets from Server and Host A respectively.
[RouterA] acl number 2001
[RouterA-acl-basic-2001] rule permit source 1.1.1.1 0.0.0.0
[RouterA] acl number 2002
[RouterA-acl-basic-2002] rule permit source 1.1.1.2 0.0.0.0
# Configure a PQ list that assigns the packets from Server to the top queue and those from Host A to the
bottom queue when congestion occurs. The maximum queue size of the top queue is set to 50 while
that of the bottom queue is set to 100.
[RouterA] qos pql 1 protocol ip acl 2001 queue top
[RouterA] qos pql 1 protocol ip acl 2002 queue bottom
[RouterA] qos pql 1 queue top queue-length 50
[RouterA] qos pql 1 queue bottom queue-length 100
Configuring CQ
You can configure a CQ list that contains up to 16 queues (1-16), with each queue including the match
criteria for packets to enter the queue, the length of the queue and the bytes sent from the queue during
a cycle of round robin queue scheduling. Only one CQ list can be applied to an interface or PVC.
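The byte-count round robin can be sketched as follows. This is a simplified model in which each queue is served until its configured byte quota for the cycle is met or the queue empties; it finishes the packet in flight, so a queue can slightly overshoot its quota:

```python
from collections import deque

def cq_cycle(queues, byte_counts):
    """One cycle of custom queuing round robin: each queue sends
    packets until its byte count for the cycle is reached."""
    sent = []
    for queue, quota in zip(queues, byte_counts):
        served = 0
        while queue and served < quota:
            size = queue.popleft()  # each entry is a packet size in bytes
            served += size
            sent.append(size)
    return sent

# Queue 1 sends 2000 bytes per cycle, queue 2 sends 1000.
q1, q2 = deque([1500, 1500]), deque([500])
order = cq_cycle([q1, q2], [2000, 1000])  # -> [1500, 1500, 500]
```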
Configuration Procedure
z CQ is applicable to all physical interfaces except interfaces using X.25 or LAPB protocol at the data
link layer.
z For the queuing function to take effect on Tunnel interfaces, sub-interfaces, or VT/dialer interfaces
using PPPoE, PPPoA, PPPoEoA, or PPPoFR at the data link layer, you must enable line rate on
them.
CQ Configuration Example
Network requirements
Configure CQ to assign packets from Ethernet 1/1 to queue 1 and specify queue 1 to send 2000 bytes
during a cycle of round robin queue scheduling.
Configuration procedure
# Configure CQ list 1.
[Sysname] qos cql 1 inbound-interface ethernet 1/1 queue 1
[Sysname] qos cql 1 queue 1 serving 2000
Configuring WFQ
Configuration Procedure
On an interface/PVC without WFQ configured, the qos wfq command can be used to enable WFQ and
configure WFQ-related parameters. If WFQ is configured for the interface/PVC, the qos wfq command
can be used to modify the WFQ-related parameters.
Follow these steps to configure WFQ:
z WFQ is applicable to all physical interfaces except interfaces using X.25 or LAPB at the data link
layer.
z For the queuing function to take effect on Tunnel interfaces, sub-interfaces, or VT/dialer interfaces
using PPPoE, PPPoA, PPPoEoA, or PPPoFR at the data link layer, you must enable line rate on
them.
Network requirements
Configure WFQ on Serial 2/0, setting the maximum queue size to 100, and the total number of queues
to 512.
Configuration procedure
# Configure WFQ on Serial 2/0, setting the maximum queue size to 100, and the total number of queues
to 512.
[Sysname-Serial2/0] qos wfq queue-length 100 queue-number 512
Configuring CBQ
Follow these steps to configure CBQ:
1) Create a class and define a set of traffic match criteria in class view.
2) Create a traffic behavior, and define a group of QoS features in traffic behavior view.
3) Create a policy, and associate a traffic behavior with a class in policy view.
4) Apply the QoS policy in the interface or PVC view.
The system pre-defines some classes, traffic behaviors and policies. The detailed description is given
below.
Pre-defined classes
The system pre-defines some classes and defines general rules for them. You can use these
pre-defined classes when defining a policy.
z The default class
default-class: Matches the default traffic.
z DSCP-based pre-defined classes
ef, af1, af2, af3, af4: Matches IP DSCP value ef, af1, af2, af3, af4 respectively.
Pre-defined traffic behaviors
The system pre-defines some traffic behaviors and defines QoS features for them.
z ef: Assigns a class of packets to the EF queue and assigns 20% of the available interface/PVC
bandwidth to the class of packets.
z af: Assigns a class of packets to the AF queue and assigns 20% of the available interface/PVC
bandwidth to the class of packets.
z be: Defines no features.
Pre-defined policy
The system pre-defines a QoS policy, specifies pre-defined classes for the policy and associates
pre-defined behaviors with the classes. The policy is named default, with the default CBQ actions.
The policy default is defined as follows:
z Associates the pre-defined class ef with the pre-defined traffic behavior ef.
z Associates pre-defined classes af1 through af4 with the pre-defined traffic behavior af.
z Associates the pre-defined class default-class with the pre-defined traffic behavior be.
Configuration procedure
The maximum available interface bandwidth refers to the maximum interface bandwidth used for
bandwidth check when CBQ enqueues packets, rather than the actual bandwidth of the physical
interface.
Follow these steps to configure the maximum interface available bandwidth:
If no maximum available interface bandwidth is configured for any type of interfaces, the bandwidth
used in CBQ calculation is as follows:
z The actual baudrate or rate of a physical interface.
z Total bandwidth of the bound logical serial interfaces, such as T1/E1 interfaces and multilink frame
relay (MFR) interfaces.
z 1000000 kbps for template interfaces such as VT, dialer, BRI, and PRI interfaces.
z 0 kbps for the other virtual interfaces such as tunnel interfaces.
z You are recommended to configure the maximum available interface bandwidth to be smaller than
the actual available bandwidth of a physical interface or logical link.
z On a primary channel interface (such as VT, dialer, BRI, or PRI) configured with the qos
max-bandwidth command, AF and EF perform queue bandwidth check and calculation based on
the bandwidth specified with the qos max-bandwidth command. The same is true of AF and EF
synchronized to the sub-channel interfaces (such as VA interfaces or B channels), where
sub-channel interface bandwidth is ignored. As the QoS configurations of the primary channel
interface and the sub-channel interfaces are the same in this case, prompts are output only for the
primary channel interface. If the qos max-bandwidth command is not configured, AF and EF on
the primary channel interface calculate queue bandwidth based on 1 Gbps of bandwidth, while AF
and EF synchronized to the sub-channel interfaces calculate queue bandwidth based on actual
sub-channel interface bandwidth. In this case, if queuing on a sub-channel interface fails due to
bandwidth change, the prompt will be output for the sub-channel interface.
z On an MP-group interface or MFR interface configured with the qos max-bandwidth command,
AF and EF perform queue bandwidth check and calculation based on the bandwidth specified with
the qos max-bandwidth command. On an MP-group interface or MFR interface without the qos
max-bandwidth command configured, if the sum of sub-channel bandwidth equals or exceeds
the sum of AF bandwidth and EF bandwidth, AF and EF calculate bandwidth based on the actual
interface bandwidth; otherwise, AF and EF calculate bandwidth based on 1 Gbps of bandwidth,
and the message indicating insufficient bandwidth is displayed. In the latter case, the queuing
function may fail to take effect. You can use the qos reserved-bandwidth command to set the
maximum percentage of the reserved bandwidth to the available bandwidth.
z On Tunnel interfaces, sub-interfaces, or VT/dialer interfaces using PPPoE, PPPoA, PPPoEoA, or
PPPoFR at the data link layer, you should configure the qos max-bandwidth command to provide
base bandwidth for CBQ bandwidth calculation.
1) Network requirements
Set the maximum available interface bandwidth to 60 kbps.
2) Configuration procedure
# Enter system view.
<Sysname> system-view
Defining a Class
To define a class, you need to create the class with a name specified and then configure matching
criteria in class view.
Configuration procedure
Configuration example
1) Network requirements
Define a class named test to match packets having an IP precedence value of 6.
2) Configuration procedure
# Enter system view.
<Sysname> system-view
To define a traffic behavior, you should first create the traffic behavior with a name specified and then
configure attributes for it in traffic behavior view.
Configuration procedure
z This traffic behavior can be applied only in the outbound direction of an interface or ATM PVC.
z In the same traffic behavior, the same bandwidth unit must be used to configure the queue ef
command and the queue af command, either bandwidth or percentage.
z Create a traffic behavior and enter traffic behavior view (required): traffic behavior behavior-name.
The entered traffic behavior name cannot be the name of a traffic behavior pre-defined by the
system.
z Configure EF and the maximum bandwidth (required): queue ef bandwidth { bandwidth
[ cbs burst ] | pct percentage [ cbs-ratio ratio ] }
z The queue ef command cannot be used in conjunction with the queue af, queue-length, and
wred commands for the same traffic behavior.
z The default class cannot be associated with a traffic behavior including EF.
z In the same traffic behavior, the same bandwidth unit must be used to configure the queue ef
command and the queue af command, either bandwidth or percentage.
3) Configuring WFQ
Follow these steps to configure WFQ:
z Create a traffic behavior and enter traffic behavior view (required): traffic behavior behavior-name.
The entered traffic behavior name cannot be the name of a traffic behavior pre-defined by the
system.
z Configure WFQ (required): queue wfq [ queue-number total-queue-number ]
A traffic behavior with WFQ applied can only be associated with the default class.
The queue-length command can be used only after the queue af command or the queue wfq
command has been configured. Executing the undo queue af command or the undo queue wfq
command cancels the queue-length command configuration as well.
z The wred [ dscp | ip-precedence ] command must be issued after the queue af command or the
queue wfq command is used.
z The wred command and the queue-length command are mutually exclusive.
z If the WRED drop configuration is removed, other configurations under it are deleted.
z When a QoS policy including the WRED traffic behavior is applied to an interface, the previous
interface-level WRED configuration becomes invalid.
Before configuring the wred weighting-constant command, make sure the queue af command or the
queue wfq command has been configured and the wred command has been used to enable WRED
drop.
7) Configuring the lower limit, upper limit and drop probability denominator for each DSCP value in
WRED
To perform this configuration, make sure DSCP-based WRED drop has been enabled with the wred
dscp command.
Follow these steps to configure the lower limit, upper limit, and drop probability denominator for a DSCP
value in WRED:
dscp-value: DSCP value in the range of 0 to 63, which can also be any of the following keywords: ef,
af11, af12, af13, af21, af22, af23, af31, af32, af33, af41, af42, af43, cs1, cs2, cs3, cs4, cs5, cs6, cs7,
and default.
z When the wred command is disabled, the wred dscp command is also disabled.
z The WRED drop-related parameters are disabled if the queue af command or the queue wfq
command is disabled.
8) Configuring the lower limit, upper limit and drop probability denominator for each IP precedence
value in WRED
To perform this configuration, make sure IP precedence-based WRED drop has been enabled with the
wred ip-precedence command.
Follow these steps to configure the lower limit, upper limit, and drop probability denominator for an IP
precedence value in WRED:
z The wred ip-precedence command is disabled when the wred command is disabled.
z The WRED drop-related parameters are disabled if the queue af command or the queue wfq
command is disabled.
Configuration procedure
1) Network requirements
Define traffic behavior test, enabling AF and setting the minimum guaranteed bandwidth to 200 kbps.
2) Configuration procedure
# Enter system view.
<Sysname> system-view
A QoS policy associates a class with a traffic behavior, which contains multiple QoS actions, including
queue scheduling, EF, AF, WFQ, TP, GTS, WRED, traffic marking, and so on.
Follow these steps to associate a traffic behavior with a specific class in policy view:
Configuration procedure
Use the qos apply policy command to apply a policy to a specific physical interface or ATM PVC. A
policy can be applied to multiple physical interfaces or ATM PVCs.
Follow these steps to apply a policy to the specific interface or ATM PVC:
Configuration example
1) Network requirements
Create a policy named test. Associate the traffic behavior test_behavior with the traffic class
test_class in the policy, and apply the policy in the inbound direction of Ethernet 1/1.
2) Configuration procedure
# Enter system view.
<Sysname> system-view
Network requirements
As shown in Figure 4-7, traffic travels from Router C to Router D through Router A and Router B.
Configure a QoS policy to meet the following requirements:
z Traffic from Router C is classified into three classes based on DSCP precedence; perform AF for
traffic with the DSCP precedence being AF11 and AF21 and set a minimum guaranteed bandwidth
percentage of 5% for the traffic.
z Perform EF for traffic with the DSCP precedence being EF and set the maximum bandwidth
percentage for the traffic to 30%.
Before performing the configuration, make sure that:
z The route from Router C to Router D through Router A and Router B is reachable.
z The DSCP fields have been set for the traffic before the traffic enters Router A.
Figure 4-7 Network diagram for CBQ configuration
Configuration procedure
Configure Router A:
# Define three classes to match the IP packets with the DSCP precedence AF11, AF21 and EF
respectively.
[RouterA] traffic classifier af11_class
[RouterA-classifier-af11_class] if-match dscp af11
[RouterA-classifier-af11_class] quit
[RouterA] traffic classifier af21_class
[RouterA-classifier-af21_class] if-match dscp af21
[RouterA-classifier-af21_class] quit
[RouterA] traffic classifier ef_class
[RouterA-classifier-ef_class] if-match dscp ef
[RouterA-classifier-ef_class] quit
# Define two traffic behaviors, enable AF and set a minimum guaranteed bandwidth percentage of 5%.
[RouterA] traffic behavior af11_behav
[RouterA-behavior-af11_behav] queue af bandwidth pct 5
[RouterA-behavior-af11_behav] quit
[RouterA] traffic behavior af21_behav
[RouterA-behavior-af21_behav] queue af bandwidth pct 5
[RouterA-behavior-af21_behav] quit
# Define a traffic behavior, enable EF and set a maximum bandwidth percentage of 30% (both
bandwidth and delay guarantees are provided).
[RouterA] traffic behavior ef_behav
[RouterA-behavior-ef_behav] queue ef bandwidth pct 30
[RouterA-behavior-ef_behav] quit
# Define a QoS policy to associate the configured traffic behaviors with classes respectively.
[RouterA] qos policy dscp
[RouterA-qospolicy-dscp] classifier af11_class behavior af11_behav
[RouterA-qospolicy-dscp] classifier af21_class behavior af21_behav
[RouterA-qospolicy-dscp] classifier ef_class behavior ef_behav
[RouterA-qospolicy-dscp] quit
# Apply the QoS policy in the outbound direction of an ATM PVC of Router A.
[RouterA] interface atm 1/0
[RouterA-atm1/0] ip address 1.1.1.1 255.255.255.0
[RouterA-atm1/0] pvc qostest 0/40
[RouterA-atm-pvc-atm1/0-0/40-qostest] qos apply policy dscp outbound
With the above configurations complete, EF traffic is forwarded preferentially when congestion occurs.
For the queuing function to take effect on Tunnel interfaces, sub-interfaces, or VT/dialer interfaces
using PPPoE, PPPoA, PPPoEoA, or PPPoFR at the data link layer, you must enable line rate on them.
Network requirements
Configure RTP priority queuing on an interface, and specify up to 70% of the available interface
bandwidth to be reserved for the RTP priority queue.
Configure RTP priority queuing on Serial 1/0: the start port number is 16384, the end port number is
32767, and 64 kbps bandwidth is reserved for RTP packets. When congestion occurs to the outgoing
interface, RTP packets are assigned to the RTP priority queue.
Configuration procedure
# Specify up to 70% of the available bandwidth to be reserved for the RTP priority queue.
[Sysname-Serial1/0] qos reserved-bandwidth pct 70
# Configure RTP priority queuing on Serial 1/0: the start port number is 16384, the end port number is
32767, and 64 kbps of bandwidth is reserved for RTP packets. When congestion occurs to the outgoing
interface, RTP packets are assigned to the RTP priority queue.
[Sysname-Serial1/0] qos rtpq start-port 16384 end-port 32767 bandwidth 64
Since the upper layer protocol TCP provides traffic control, CQ and WFQ may become invalid during
FTP transmission. QoS tokens are used to solve this problem. The token feature of QoS provides a flow
control mechanism for underlying-layer queues. This feature can control the number of packets sent to
the interface underlying-layer queues based on the number of tokens.
You are recommended to set the token number to 1 on an interface for FTP transmission.
If the upper layer protocol, UDP for example, does not provide flow control, you are recommended not
to use the QoS token function in order to improve data transmission efficiency.
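The token mechanism can be modeled as a simple counter of packets allowed into the interface's underlying-layer queue. This is an illustrative sketch, not the actual implementation, and all names are assumptions:

```python
class QosTokens:
    """Allow at most `tokens` packets outstanding in the
    underlying-layer queue at any time."""
    def __init__(self, tokens=1):  # token number 1, as recommended for FTP
        self.available = tokens

    def try_send(self):
        """Consume a token to pass a packet down; if none is left,
        the packet stays in the upper-layer QoS queue (CQ, WFQ, ...)."""
        if self.available > 0:
            self.available -= 1
            return True
        return False

    def on_transmit_complete(self):
        """Return the token once the interface has sent the packet."""
        self.available += 1
```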
Follow these steps to configure QoS tokens:
z For the above configuration to take effect, you must execute the shutdown command and then
the undo shutdown command on the interface.
z So far, this feature is available only for serial interfaces and BRI interfaces.
Network requirements
Configuration procedure
The packets that a tunnel interface passes to a physical interface are encapsulated in GRE. Thus, the
IP data (including Layer-3 and Layer-4 information) that the QoS module obtains from the physical
interface is the IP data encapsulated in GRE rather than the IP data in the original packets.
To address the problem, you can configure packet information pre-extraction on the tunnel interface to
buffer the IP data in the original packets for its corresponding physical interface to use.
Follow these steps to configure packet information pre-extraction:
Configuration Example
Network requirements
Configuration procedure
If the packet size is larger than the MTU of the egress interface, the packet is fragmented into local
fragments. If the first fragment is dropped, the subsequent fragments become invalid fragments, and
processing and transmitting these invalid fragments is pointless.
With local fragment pre-drop, if the first fragment of a packet is dropped by the QoS module, the
subsequent fragments will be dropped directly without undergoing QoS processing. The local fragment
pre-drop function thus improves the local fragment processing and transmitting efficiency, and reduces
subsequent invalid fragments’ occupation of system resources and network bandwidth.
Local fragment pre-drop is mutually exclusive with matching fragments by fragment attributes or size.
With local fragment pre-drop enabled, non-first fragments inherit the matching result of the first
fragment.
Local fragment pre-drop applies to IPv4 and IPv6 local fragments.
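The pre-drop rule can be sketched as per-packet state remembering whether the first fragment was dropped; the identifiers here are illustrative assumptions:

```python
def make_pre_drop():
    """Track packets whose first fragment was dropped so that their
    non-first fragments are dropped without QoS processing."""
    dropped = set()  # IDs of packets whose first fragment was dropped

    def should_drop(packet_id, is_first_fragment, qos_dropped_first=False):
        if is_first_fragment:
            if qos_dropped_first:
                dropped.add(packet_id)
                return True
            return False
        # Non-first fragments inherit the first fragment's fate.
        return packet_id in dropped

    return should_drop
```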
Follow these steps to configure local fragment pre-drop:
Configuration Example
Network requirements
Configuration procedure
When configuring priority mapping, go to these sections for information you are interested in:
z Priority Mapping Overview
z Configuring a Priority Mapping Table
z Configuring the Trusted Precedence Type for a Port
z Displaying and Maintaining Priority Mapping
z Priority Mapping Configuration Examples
The device provides the dot1p-lp priority mapping table, that is, 802.1p-priority-to-local-precedence
mapping table. The following tables list the default priority mapping tables.
802.1p priority    Local precedence
1                  0
2                  1
3                  0
4                  2
5                  2
6                  3
7                  3
Configuration Prerequisites
Configuration Procedure
z Enter priority mapping table view (required): qos map-table dot1p-lp. You can enter the
corresponding priority mapping table view as required.
z Configure the priority mapping table (required): import import-value-list export export-value.
Newly configured mappings overwrite the previous ones.
Configuration Example
Network requirements
802.1p priority    Local precedence
1                  0
2                  1
3                  1
4                  2
5                  2
6                  3
7                  3
Configuration procedure
Configuration Prerequisites
Configuration Procedure
If a WLAN-ESS interface in use has WLAN-DBSS interfaces created, its priority cannot be modified. To
modify the priority of the WLAN-ESS interface, you must stop the service the interface provides (that is,
make the current users on the interface offline).
Configuration Example
Network requirements
Configuration procedure
Configuration Prerequisites
Configuration Procedure
Configuration Example
Network requirements
Configuration procedure
Network requirements
z It is required that Router enqueue packets based on the 802.1p precedence of packets.
z The priority mapping table is user-defined, as shown in the table below.
7 3
Figure: network diagram (Router with interfaces Ethernet 1/1 through Ethernet 1/4 connecting the
servers)
Configuration procedure
# Enter inbound dot1p-lp priority mapping table view and modify the priority mapping table parameters.
[Switch] qos map-table inbound dot1p-lp
[Switch-maptbl-in-dot1p-lp] import 0 1 export 0
[Switch-maptbl-in-dot1p-lp] import 2 3 export 1
Example 2
Network requirements
The router assigns local precedences for packets according to the mappings between precedence and
interface. The precedences of Ethernet 1/1, Ethernet 1/2, Ethernet 1/3, and Ethernet 1/4 are 1, 3, 5, and
7, respectively.
The default priority mapping table is adopted.
Figure 5-3 Network diagram for trusted precedence type configuration
Configuration procedure
6 Congestion Avoidance
When configuring congestion avoidance, go to these sections for information you are interested in:
z Congestion Avoidance Overview
z Introduction to WRED Configuration
z Configuring WRED on an Interface
z Applying a WRED Table on an Interface
z Displaying and Maintaining WRED
z WRED Configuration Example
The traditional packet drop policy is tail drop. When the length of a queue reaches the maximum
threshold, all the subsequent packets are dropped.
Such a policy results in global TCP synchronization. That is, if packets from multiple TCP connections
are dropped, these TCP connections go into the state of congestion avoidance and slow start to reduce
traffic, but traffic peak occurs later. Consequently, the network traffic jitters all the time.
You can use random early detection (RED) or weighted random early detection (WRED) to avoid global
TCP synchronization.
The RED or WRED algorithm sets an upper threshold and lower threshold for each queue, and
processes the packets in a queue as follows:
z When the queue size is shorter than the lower threshold, no packet is dropped;
z When the queue size reaches the upper threshold, all subsequent packets are dropped;
z When the queue size is between the lower threshold and the upper threshold, the received packets
are dropped at random. The longer a queue is, the higher the drop probability is. However, a
maximum drop probability exists.
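The three cases above can be sketched as a drop-probability function; the linear ramp between the two thresholds is an illustrative assumption:

```python
def red_drop_probability(queue_size, lower, upper, max_prob):
    """No drop below the lower threshold, certain drop at or above
    the upper threshold, and a probability that grows with queue
    size in between, capped at max_prob."""
    if queue_size < lower:
        return 0.0
    if queue_size >= upper:
        return 1.0
    return max_prob * (queue_size - lower) / (upper - lower)
```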
Different from RED, WRED determines differentiated drop policies for packets with different IP
precedence values. Packets with a lower IP precedence are more likely to be dropped.
Both RED and WRED avoid global TCP synchronization by dropping packets at random. When the
sending rate of one TCP session slows down after its packets are dropped, the other TCP sessions keep
their high sending rates. In this way, some TCP sessions are always sending at high rates,
and the link bandwidth is fully utilized.
If the current queue size were compared with the upper and lower thresholds to determine the
drop policy, bursty traffic would be treated unfairly. To solve this problem, WRED compares the average
queue size with the upper and lower thresholds to determine the drop probability.
The average queue size reflects the queue size change trend but is not sensitive to bursty queue size
changes, and thus bursty traffic can be fairly treated. The average queue size is calculated using the
formula: average queue size = previous average queue size × (1 − 2^-n) + current queue size × 2^-n, where
n can be configured with the qos wred weighting-constant command.
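As a worked illustration of this formula (the numbers are assumed values, not taken from the manual): suppose n = 6, the previous average queue size is 64 packets, and the current queue size is 128 packets. Then:
average queue size = 64 × (1 − 2^-6) + 128 × 2^-6 = 64 × 63/64 + 128 × 1/64 = 63 + 2 = 65 packets
The average moves only 1/64 of the way toward the instantaneous queue size, which is why bursty queue size changes are smoothed out.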
With WFQ queuing adopted, you can set the exponent for average queue size calculation, upper
threshold, lower threshold, and drop probability for packets with different precedence values
respectively to provide differentiated drop policies.
With FIFO queuing, PQ, or CQ adopted, you can set the exponent for average queue size calculation,
upper threshold, lower threshold, and drop probability for each queue to provide differentiated drop
policies for different classes of packets.
The relation between WRED and queuing mechanism is shown in the following figure:
Figure 6-1 Relationship between WRED and queuing mechanism
By combining WRED with WFQ, flow-based WRED can be implemented. Because each flow has
its own queue after classification, a flow with a smaller queue size has a lower packet drop probability,
while a flow with a larger queue size has a higher packet drop probability. In this way, flows with
smaller queue sizes are protected.
You can configure WRED using one of the following two methods:
z Interface configuration: configure WRED parameters on an interface or PVC and enable WRED.
z WRED table configuration: configure a WRED table in system view and then apply the WRED table
to an interface.
Support for WRED configuration methods depends on the device model.
z Determine the WRED exponent for average queue size calculation (optional).
z Determine the upper threshold and lower threshold for the queue corresponding to each
precedence (optional).
Configuration Procedure
The qos wred enable command can be configured on a hardware interface without any prerequisites.
However, to configure this command on a software interface, make sure that WFQ queuing has been
applied on the software interface.
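The following hedged sketch shows enabling WRED on a software interface after WFQ is applied (the interface Serial 2/0 and the use of default WFQ parameters are assumptions for illustration):
<Sysname> system-view
# Apply WFQ queuing on the software interface first.
[Sysname] interface serial 2/0
[Sysname-Serial2/0] qos wfq
# Then enable WRED on the interface.
[Sysname-Serial2/0] qos wred enable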
Configuration Example
Network requirements
Configuration procedure
# Set the following parameters for packets with IP precedence 3: lower threshold 20, upper threshold 40,
and drop probability denominator 15.
[Sysname-Ethernet1/1] qos wred ip-precedence 3 low-limit 20 high-limit 40
discard-probability 15
# Set the exponential factor for the average queue size calculation to 6.
[Sysname-Ethernet1/1] qos wred weighting-constant 6
Configuration Prerequisites
Configuration Procedure
To apply the WRED table to the interface/port group, use the qos wred apply table-name command
(required). A queue-based WRED table is available on only Layer 2 ports.
Configuring and applying a WRED table of another type (except queue-based WRED table)
Follow these steps to configure and apply a WRED table of another type (except queue-based WRED
table):
To apply the WRED table to an interface/port group, use the qos wred apply table-name command
(required). WRED tables other than queue-based WRED tables are applicable on only Layer 3 ports.
To display configuration information about one WRED table or all WRED tables, use the
display qos wred table [ table-name ] command (optional, available in any view).
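As a hedged sketch (the table name table1 and the interface are assumptions for illustration; creating the WRED table itself is covered separately), applying an existing WRED table and verifying the result might look like:
<Sysname> system-view
# Apply the WRED table to the interface.
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] qos wred apply table1
[Sysname-Ethernet1/1] quit
# Display the WRED table configuration. This command is available in any view.
[Sysname] display qos wred table table1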
Configuration procedure
When configuring MPLS QoS, go to these sections for information you are interested in:
z MPLS QoS Overview
z Configuring MPLS QoS
z MPLS QoS Configuration Examples
Knowledge of MPLS is necessary for understanding MPLS QoS. Refer to MPLS Basic
Configuration in the MPLS Volume for more information about MPLS.
Configuring MPLS PQ
Configuration prerequisites
Configuration procedure
Configuration example
z Create a classification rule for MPLS-based PQ list 10, assigning packets with an EXP value of 5 to
the queue top.
z Apply PQ list 10 to Ethernet 1/1.
The configuration procedure is as follows:
<Sysname> system-view
[Sysname] qos pql 10 protocol mpls exp 5 queue top
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] qos pq pql 10
Configuring MPLS CQ
Configuration prerequisites
Configuration procedure
Configuration example
z Create a classification rule for MPLS-based CQ list 10, assigning packets with an EXP value of 1 to
queue 2.
z Apply CQ list 10 to Ethernet 1/1.
The configuration procedure is as follows:
<Sysname> system-view
[Sysname] qos cql 10 protocol mpls exp 1 queue 2
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] qos cq cql 10
Configuration prerequisites
Configuration procedure
Configuration prerequisites
Configuration procedure
Network requirements
Guarantee 10% of the bandwidth for traffic with an EXP value of 1; guarantee 20% of the bandwidth
for traffic with an EXP value of 2; guarantee 30% of the bandwidth for traffic with an EXP value of 3;
guarantee the delay and 40% of the bandwidth for traffic with an EXP value of 4.
Refer to MPLS Configuration in the MPLS Volume for the MPLS VPN configuration. This section
introduces only the MPLS QoS configuration.
Configuration procedure
1) Configure PE 1
# Define four classes, matching respectively the DSCP values AF11, AF21, AF31 and EF of the MPLS
packets in the same VPN.
<PE1> system-view
[PE1] traffic classifier af11
[PE1-classifier-af11] if-match dscp af11
[PE1-classifier-af11] traffic classifier af21
[PE1-classifier-af21] if-match dscp af21
[PE1-classifier-af21] traffic classifier af31
[PE1-classifier-af31] if-match dscp af31
[PE1-classifier-af31] traffic classifier efclass
[PE1-classifier-efclass] if-match dscp ef
[PE1-classifier-efclass] quit
# Define four traffic behaviors to set the EXP field value for MPLS packets.
[PE1] traffic behavior exp1
[PE1-behavior-exp1] remark mpls-exp 1
[PE1-behavior-exp1] traffic behavior exp2
[PE1-behavior-exp2] remark mpls-exp 2
[PE1-behavior-exp2] traffic behavior exp3
[PE1-behavior-exp3] remark mpls-exp 3
[PE1-behavior-exp3] traffic behavior exp4
[PE1-behavior-exp4] remark mpls-exp 4
[PE1-behavior-exp4] quit
# Define a QoS policy to associate configured traffic behaviors with traffic classes, that is, mark different
classes of packets with different EXP values.
[PE1] qos policy REMARK
[PE1-qospolicy-REMARK] classifier af11 behavior exp1
[PE1-qospolicy-REMARK] classifier af21 behavior exp2
[PE1-qospolicy-REMARK] classifier af31 behavior exp3
[PE1-qospolicy-REMARK] classifier efclass behavior exp4
[PE1-qospolicy-REMARK] quit
# Apply the QoS policy in the inbound direction of the interface of the PE in the MPLS network.
[PE1] interface ethernet 1/1
[PE1-Ethernet1/1] qos apply policy REMARK inbound
[PE1-Ethernet1/1] quit
2) Configure P
# Define four classes, matching respectively EXP values 1, 2, 3 and 4 of the MPLS packets.
<P> system-view
[P] traffic classifier EXP1
[P-classifier-EXP1] if-match mpls-exp 1
[P-classifier-EXP1] traffic classifier EXP2
[P-classifier-EXP2] if-match mpls-exp 2
[P-classifier-EXP2] traffic classifier EXP3
[P-classifier-EXP3] if-match mpls-exp 3
[P-classifier-EXP3] traffic classifier EXP4
[P-classifier-EXP4] if-match mpls-exp 4
[P-classifier-EXP4] quit
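The traffic behaviors AF11, AF21, AF31, and EF referenced in the QoS policy below are assumed to have been defined with CBQ queuing actions; the following is a hedged sketch (the exact queue af / queue ef syntax may vary with the device model):
# Define four traffic behaviors to guarantee bandwidth (and, for EF, delay).
[P] traffic behavior AF11
[P-behavior-AF11] queue af bandwidth pct 10
[P-behavior-AF11] quit
[P] traffic behavior AF21
[P-behavior-AF21] queue af bandwidth pct 20
[P-behavior-AF21] quit
[P] traffic behavior AF31
[P-behavior-AF31] queue af bandwidth pct 30
[P-behavior-AF31] quit
[P] traffic behavior EF
[P-behavior-EF] queue ef bandwidth pct 40
[P-behavior-EF] quit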
# Define a QoS policy that satisfies the following requirements: guarantee 10% of the bandwidth for
traffic with an EXP value of 1; guarantee 20% of the bandwidth for traffic with an EXP value of 2;
guarantee 30% of the bandwidth for traffic with an EXP value of 3; guarantee the delay and 40% of the
bandwidth for traffic with an EXP value of 4.
[P] qos policy QUEUE
[P-qospolicy-QUEUE] classifier EXP1 behavior AF11
[P-qospolicy-QUEUE] classifier EXP2 behavior AF21
[P-qospolicy-QUEUE] classifier EXP3 behavior AF31
[P-qospolicy-QUEUE] classifier EXP4 behavior EF
[P-qospolicy-QUEUE] quit
# Apply the QoS policy in the outbound direction of Serial 2/2 on device P.
[P] interface serial 2/2
[P-Serial2/2] qos apply policy QUEUE outbound
After the above configuration, when congestion occurs in VPN 1, bandwidth is allocated to the
flows with DSCP values AF11, AF21, AF31, and EF in the proportion 1:2:3:4, and the flow with the
DSCP value EF experiences a smaller delay than the other traffic flows.
8 DAR Configuration
When configuring DAR, go to these sections for information you are interested in:
z DAR Overview
z Configuring DAR
z Displaying and Maintaining DAR
z DAR Configuration Examples
DAR Overview
Today, the Internet has become the major medium for enterprises to implement their ever-growing
service-oriented applications. A simple mechanism that checks only the IP header of packets can no
longer meet the requirements of today's complicated networks. Therefore, the concept of service-based
Deeper Application Recognition (DAR) was put forward.
DAR is an intelligent recognition and classification tool that can check and recognize the
contents and dynamic protocols from Layer 4 to Layer 7 (for example, BT, HTTP, FTP, and RTP) in
packets to distinguish application-based protocols. This overcomes the previous limitation of
classifying packets in only a simple way.
DAR recognizes different protocols in the following ways:
z Protocols such as HTTP, FTP, RTP, RTCP, and BitTorrent are recognized by protocol rules. DAR
can automatically recognize their dynamic port numbers and match both protocol packets and data packets.
z All other TCP/UDP-based protocols are recognized by port number; that is, only protocol packets
rather than data packets are matched.
Deeper recognition and classification of packets enhance users’ control granularity over data streams
and enable high-priority policies for critical service data, thus better protecting users’ investment.
IP Packet
IP packet format
The IP packet format is shown in Figure 8-1. An IP header without the Options field is 20 bytes in length.
The Protocol field in the header is 8 bits long, and its value indicates the protocol type of the
data.
Table 8-1 lists the recognizable protocol field values and corresponding protocols.
The lower 2 bits of the flags field control IP packet fragmentation. The 3 bits in the flags field are defined
as follows:
z Reserved: must be 0.
z Do not fragment: 0 indicates that fragmentation is allowed, and 1 indicates that fragmentation is
forbidden.
z More fragments: 0 indicates that the packet is the last fragment, and 1 indicates that it is not the last
fragment.
Therefore, a 3-bit flags field of 001 indicates that the IP packet is a fragment with more fragments to
follow, and a 3-bit flags field of 000 indicates that the IP packet is the last fragment.
TCP Packet
The protocols using TCP can be static or dynamic. Static protocols use fixed port numbers for
interaction, while dynamic protocols use negotiated port numbers.
UDP Packet
Like TCP, protocols employing UDP can be static or dynamic. Static protocols use fixed port numbers
for interaction, while dynamic protocols use negotiated port numbers.
HTTP Packet
There are two types of HTTP packets: request packets and response packets. Figure 8-6 shows the
HTTP packet format.
Figure 8-6 HTTP packet format
z The header of an HTTP request packet consists of a request line and header. The request line
consists of the request type field, the URL field, and the HTTP version field separated by spaces.
z The header of an HTTP response packet consists of a status line and header. The status line
consists of the HTTP version field, the status code field, and the status phrase field separated by
spaces.
z Both request packet headers and response packet headers consist of several optional
fields. The request packet header contains the Host field, which identifies the host
name and the port number of the server. The header of a packet with a body contains the
Content-Type field, which identifies the MIME type of the body.
z When the length of an HTTP packet with a body exceeds the maximum segment size (MSS) of
TCP, the packet is carried in multiple TCP segments.
RTP Packet
A Real-time Transport Protocol (RTP) packet is encapsulated in a UDP packet. Usually a UDP packet
carries only one RTP packet.
Figure 8-7 shows the format of an RTP packet.
Figure 8-7 RTP packet format
A Real-time Transport Control Protocol (RTCP) packet is encapsulated in a UDP packet. Usually such a
UDP packet carries at least two RTCP packets, and it is called a compound RTCP packet.
Figure 8-8 Format of a compound RTCP packet (the SSRC fields of the constituent RTCP packets are shown)
As shown in the above figure, the random 32-bit prefix in the header exists only when the RTCP packet
is encrypted. An encrypted RTCP packet no longer has the features of an RTCP packet and therefore
requires no special processing. Each packet in the figure represents an RTCP packet, and there is no
space between two RTCP packets. The type of the first RTCP packet in a compound RTCP packet must
be SR or RR. Figure 8-9 shows the header format of an SR-type RTCP packet.
Figure 8-9 Header format of an SR-type RTCP packet
Some protocols using TCP and UDP are identified by TCP or UDP port numbers. See the following
table for their names and the corresponding port numbers.
Configuring DAR
Configuration Prerequisites
To apply various policies (e.g. setting packet priority, allocating bandwidth for data streams) to
corresponding data streams, you need to use DAR to classify the data streams first.
Follow these steps to configure protocol match criteria:
To configure the match criterion for HTTP, use the if-match [ not ] protocol http [ url url-string |
host hostname-string | mime mime-type ] command (optional; not configured by default). DAR can
classify HTTP packets by the URL address, host name, or MIME type in HTTP packets.
To configure the match criterion for RTP, use the if-match [ not ] protocol rtp [ payload-type { audio |
video | payload-string&<1-16> }* ] command (optional; not configured by default). DAR can classify
RTP packets by the payload type in RTP packets.
The system predefines a large number of protocols and their port numbers. These include
well-known protocols and 10 user-defined protocols, namely user-defined01 through
user-defined10. You can define port numbers for these protocols to enhance the scalability of DAR.
Follow these steps to configure port numbers for DAR application protocols:
By default, the names of the ten user-defined protocols are user-defined01 through
user-defined10. You can rename them to facilitate memorization and management.
Follow these steps to rename user-defined protocols:
With the packet accounting function of DAR, you can monitor the number of packets, the amount of data
traffic, the historical average traffic rate, and the historical maximum traffic rate of application protocols
on each interface. According to the statistics, you can apply corresponding policies for the traffic.
Follow these steps to configure DAR packet accounting:
When a large amount of data traffic passes through a device, having DAR recognize all the traffic
consumes tremendous system resources and affects the normal operation of other functional modules.
To solve this problem, you can limit the maximum number of connections that DAR can recognize, thus
saving system resources. When the number of connections exceeds the maximum threshold, DAR does
not recognize the corresponding packets and directly marks them as unrecognizable.
Follow these steps to configure the maximum number of connections recognizable to DAR:
Network requirements
As shown in Figure 8-10, a router provides access to the BT seed server for the PCs on a network
attached to it.
Configure the router to prevent the PCs from downloading files from the BT seed server.
Figure 8-10 Network diagram for BT downloading prohibition configuration
Configuration procedure
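A hedged sketch of one possible configuration follows (the protocol keyword bittorrent, the classifier/behavior/policy names, and the interface are assumptions; actual keywords depend on the device model):
<Router> system-view
# Define a class matching BitTorrent traffic recognized by DAR.
[Router] traffic classifier bt
[Router-classifier-bt] if-match protocol bittorrent
[Router-classifier-bt] quit
# Define a behavior that drops the matched traffic.
[Router] traffic behavior bt-deny
[Router-behavior-bt-deny] filter deny
[Router-behavior-bt-deny] quit
# Associate the class with the behavior in a QoS policy.
[Router] qos policy no-bt
[Router-qospolicy-no-bt] classifier bt behavior bt-deny
[Router-qospolicy-no-bt] quit
# Apply the policy in the inbound direction of the PC-facing interface.
[Router] interface ethernet 1/1
[Router-Ethernet1/1] qos apply policy no-bt inbound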
Run BT seed software on the BT seed server and BT client software on the PCs to start BT downloading.
The BT client software shows that the PCs cannot perform BT downloading.
Network requirements
As shown in Figure 8-11, a router provides access to the Web server for the clients on a network
attached to it.
Configure the router to prevent the clients from accessing the webpage
http://www.abcd.com:8080/news/index.html on the Web server.
Figure 8-11 Network diagram for HTTP URL-based DAR configuration
Configuration procedure
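A hedged sketch follows (the classifier/behavior/policy names and the interface are assumptions; the if-match protocol http url syntax follows the DAR match-criteria commands described in this chapter):
<Router> system-view
# Match HTTP request packets whose URL field contains /news/index.html.
[Router] traffic classifier url-class
[Router-classifier-url-class] if-match protocol http url /news/index.html
[Router-classifier-url-class] quit
# Drop the matched packets.
[Router] traffic behavior url-deny
[Router-behavior-url-deny] filter deny
[Router-behavior-url-deny] quit
[Router] qos policy no-url
[Router-qospolicy-no-url] classifier url-class behavior url-deny
[Router-qospolicy-no-url] quit
# Apply the policy in the direction carrying HTTP request packets.
[Router] interface ethernet 1/1
[Router-Ethernet1/1] qos apply policy no-url inbound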
z When the HTTP URL is configured as the match criterion, url-string matches only the URL field in
request packets. For example, url-string matches only /news/index.html of the webpage
http://www.abcd.com:8080/news/index.html.
z Because url-string matches fields in request packets, to have the QoS policy take effect, apply
the QoS policy in the direction in which HTTP request packets travel.
Network requirements
As shown in Figure 8-12, a router provides access to the Web server for the clients on a network
attached to it.
Configure the router to prevent the clients from accessing the webpage
http://www.abcd.com:8080/news/index.html on the Web server.
Figure 8-12 Network diagram for HTTP host-based DAR configuration
Configuration procedure
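A hedged sketch follows (the classifier/behavior/policy names and the interface are assumptions; the if-match protocol http host syntax follows the DAR match-criteria commands described in this chapter):
<Router> system-view
# Match HTTP request packets whose Host field is www.abcd.com:8080.
[Router] traffic classifier host-class
[Router-classifier-host-class] if-match protocol http host www.abcd.com:8080
[Router-classifier-host-class] quit
# Drop the matched packets.
[Router] traffic behavior host-deny
[Router-behavior-host-deny] filter deny
[Router-behavior-host-deny] quit
[Router] qos policy no-host
[Router-qospolicy-no-host] classifier host-class behavior host-deny
[Router-qospolicy-no-host] quit
# Apply the policy in the direction carrying HTTP request packets.
[Router] interface ethernet 1/1
[Router-Ethernet1/1] qos apply policy no-host inbound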
z When the HTTP host is configured as the match criterion, hostname-string matches only the host
names and port numbers in request packets. For example, hostname-string matches only
www.abcd.com:8080 of the webpage http://www.abcd.com:8080/news/index.html.
z Because hostname-string matches fields in request packets, to have the QoS policy take effect,
apply the QoS policy in the direction in which HTTP request packets travel.