NE40E&NE80E V600R003C00 Configuration Guide - System Management
V600R003C00
Issue 03
Date 2012-06-08
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or representations
of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Purpose
This part describes the organization of this document, product version, intended audience,
conventions, and update history. This document covers the system management protocols and
configurations supported by the NE80E/40E. It describes the basic concepts of system
management, multiple system management protocols, and several configuration examples. The later
part of this document also provides a glossary and a list of acronyms and abbreviations. Reading
this document helps you understand the system management protocols and configuration
information.
NOTE
l This document uses the interface numbers and link types of the NE40E-X8 as examples. In actual
deployments, the interface numbers and link types may differ from those used in this
document.
l On NE80E/40E models other than the NE80E/40E-X1 and NE80E/40E-X2, line processing boards are
called Line Processing Units (LPUs) and switching fabric boards are called Switching Fabric Units
(SFUs). The NE80E/40E-X1 and NE80E/40E-X2 have no LPUs or SFUs; instead, NPUs
implement the functions of both LPUs and SFUs to switch and forward packets.
Related Versions
The following table lists the product versions related to this document.
Intended Audience
This document is intended for:
l Data configuration engineers
l Commissioning engineers
l Network monitoring engineers
l System maintenance engineers
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Command Conventions
The command conventions that may be found in this document are defined as follows.
Convention Description
&<1-n> The parameter before the & sign can be repeated 1 to n times.
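For example, given this convention, a hypothetical command defined as follows (not an actual NE80E/40E command; shown only to illustrate the notation):

```
example-command { value1 | value2 } &<1-3>
```

would accept the braced parameter group one to three times, so entering "example-command value1 value2 value1" would be a valid use of the syntax.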
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Contents
2 SNMP Configuration..................................................................................................................50
2.1 Introduction to SNMP......................................................................................................................................51
2.1.1 SNMP Overview......................................................................................................................................51
2.1.2 SNMP Features Supported by the NE80E/40E.......................................................................................53
2.2 Configuring a Device to Communicate with an NM Station by Running SNMPv1........................................56
2.2.1 Establishing the Configuration Task.......................................................................................................57
2.2.2 Configuring Basic SNMPv1 Functions...................................................................................................57
2.2.3 (Optional) Controlling the NM Station's Access to the Device...............................................................60
2.2.4 (Optional) Enabling the SNMP Extended Error Code Function.............................................................61
2.2.5 (Optional) Configuring the Trap Function..............................................................................................62
2.2.6 Checking the Configuration.....................................................................................................................63
2.3 Configuring a Device to Communicate with an NM Station by Running SNMPv2c......................................65
2.3.1 Establishing the Configuration Task.......................................................................................................65
2.3.2 Configuring Basic SNMPv2c Functions.................................................................................................66
2.3.3 (Optional) Controlling the NM Station's Access to the Device...............................................................69
2.3.4 (Optional) Enabling the SNMP Extended Error Code Function.............................................................71
2.3.5 (Optional) Configuring the Trap Function..............................................................................................71
2.3.6 Checking the Configuration.....................................................................................................................74
2.4 Configuring a Device to Communicate with an NM Station by Running SNMPv3........................................76
2.4.1 Establishing the Configuration Task.......................................................................................................77
2.4.2 Configuring Basic SNMPv3 Functions...................................................................................................78
2.4.3 (Optional) Controlling the NM Station's Access to the Device...............................................................80
2.4.4 (Optional) Enabling the SNMP Extended Error Code Function.............................................................82
2.4.5 (Optional) Configuring the Trap Function..............................................................................................82
2.4.6 Checking the Configuration.....................................................................................................................84
2.5 SNMP Configuration Examples.......................................................................................................................85
2.5.1 Example for Configuring a Device to Communicate with an NM Station by Using SNMPv1..............86
2.5.2 Example for Configuring a Device to Communicate with an NM Station by Using SNMPv2c............89
2.5.3 Example for Configuring a Device to Communicate with an NM Station by Using SNMPv3..............93
4 HGMP Configuration...............................................................................................................119
4.1 Overview........................................................................................................................................................120
4.1.1 Introduction to HGMP...........................................................................................................................120
4.1.2 HGMP Features Supported by the NE80E/40E....................................................................................122
4.2 Configuring Basic HGMP Functions.............................................................................................................126
4.2.1 Establishing the Configuration Task.....................................................................................................126
4.2.2 Configuring NDP...................................................................................................................................126
4.2.3 Configuring NTDP................................................................................................................................128
4.2.4 Creating a Cluster..................................................................................................................................129
4.2.5 Adding a Member Switch......................................................................................................................132
4.2.6 (Optional) Deleting or Quitting a Cluster..............................................................................................133
4.2.7 (Optional) Deleting a Member Switch..................................................................................................134
4.2.8 Checking the Configuration...................................................................................................................135
4.3 Configuring Advanced HGMP Functions......................................................................................................137
4.3.1 Establishing the Configuration Task.....................................................................................................137
4.3.2 Adjusting Parameters of the Cluster......................................................................................................138
4.3.3 Managing Switches in a Cluster Through HGMP.................................................................................141
4.3.4 Checking the Configuration...................................................................................................................145
4.4 Maintaining HGMP........................................................................................................................................148
5 NTP Configuration....................................................................................................................254
5.1 Overview of NTP............................................................................................................................................255
5.1.1 Introduction to NTP...............................................................................................................................255
5.1.2 NTP Supported by the NE80E/40E.......................................................................................................257
5.2 Configuring Basic NTP Functions.................................................................................................................259
5.2.1 Establishing the Configuration Task.....................................................................................................259
5.2.2 Configuring the NTP Primary Clock.....................................................................................................260
5.2.3 Configuring the Time Interval to Update Client Clock.........................................................................260
5.2.4 Configuring the Unicast Client/Server Mode........................................................................................261
5.2.5 Configuring the Peer Mode...................................................................................................................262
5.2.6 Configuring the Broadcast Mode..........................................................................................................263
5.2.7 Configuring the Multicast Mode...........................................................................................................264
5.2.8 (Optional) Disabling the Interface from Receiving NTP Packets..........................................265
5.2.9 Disabling NTP Service..........................................................................................................................266
5.2.10 Checking the Configuration.................................................................................................................267
5.3 Configuring NTP Security Mechanisms.........................................................................................................267
5.3.1 Establishing the Configuration Task.....................................................................................................268
5.3.2 Setting NTP Access Authorities............................................................................................................269
5.3.3 Enabling NTP Authentication...............................................................................................................270
5.3.4 Configuring NTP Authentication in Unicast Client/Server Mode........................................................271
5.3.5 Configuring NTP Authentication in Peer Mode....................................................................................272
5.3.6 Configuring NTP Authentication in Broadcast Mode...........................................................................272
5.3.7 Configuring NTP Authentication in Multicast Mode............................................................................273
5.3.8 Configuring NTP Authentication in Manycast Mode...........................................................................274
5.3.9 Checking the Configuration...................................................................................................................274
5.4 Configuring KOD...........................................................................................................................................275
6 1588v2 Configuration................................................................................................................320
6.1 Overview of 1588v2.......................................................................................................................................322
6.1.1 Introduction to 1588v2..........................................................................................................................322
6.1.2 1588v2 Features Supported by the NE80E/40E....................................................................................327
6.2 Configuring 1588v2 on OC............................................................................................................................329
6.2.1 Establishing the Configuration Task.....................................................................................................329
6.2.2 Configuring 1588v2 Globally................................................................................................................330
6.2.3 Configuring 1588v2 on an Interface......................................................................................................331
6.2.4 Configuring Time Attributes for 1588v2 Packets.................................................................................332
6.2.5 Configuring Encapsulation Modes for 1588v2 Packets........................................................................334
6.2.6 Checking the Configuration...................................................................................................................336
6.3 Configuring 1588v2 on BC............................................................................................................................337
6.3.1 Establishing the Configuration Task.....................................................................................................337
6.3.2 Configuring 1588v2 Globally................................................................................................................339
6.3.3 Configuring 1588v2 on an Interface......................................................................................................339
6.3.4 Configuring Time Attributes for 1588v2 Packets.................................................................................341
6.3.5 Configuring Encapsulation Modes for 1588v2 Packets........................................................................342
6.3.6 Checking the Configuration...................................................................................................................344
6.4 Configuring 1588v2 on TC.............................................................................................................................347
6.4.1 Establishing the Configuration Task.....................................................................................................348
6.4.2 Configuring 1588v2 Globally................................................................................................................349
6.4.3 Configuring 1588v2 on an Interface......................................................................................................350
6.4.4 Configuring Time Attributes for 1588v2 Packets.................................................................................351
7 NQA Configuration..................................................................................................................407
7.1 Overview of NQA..........................................................................................................................................411
7.1.1 Introduction to NQA..............................................................................................................................411
7.1.2 Comparisons Between NQA and Ping..................................................................................................411
7.1.3 NQA Server and NQA Clients..............................................................................................................412
7.1.4 NQA Supported by the NE80E/40E......................................................................................................413
7.2 Configuring the ICMP Test............................................................................................................................414
7.2.1 Establishing the Configuration Task.....................................................................................................415
7.35.5 Sending Trap Messages When the Transmission Delay Exceeds Thresholds....................................552
7.35.6 Checking the Configuration.................................................................................................................553
7.36 Configuring Test Results to Be Sent to the FTP Server...............................................................................553
7.36.1 Establishing the Configuration Task...................................................................................................554
7.36.2 Configuring Parameters for Connecting the FTP Server.....................................................................554
7.36.3 Enabling the Function of Saving NQA Test Results Through FTP....................................................555
7.36.4 (Optional) Configuring the Number of Test Results Saved Through FTP..........................................555
7.36.5 (Optional) Configuring the Duration of Saving Test Results Through FTP.......................................556
7.36.6 (Optional) Enabling Alarms to Be Sent to the NM Station After the FTP Transmission Succeeds............556
7.36.7 Starting the Test Instance....................................................................................................................557
7.36.8 Checking the Configuration.................................................................................................................558
7.37 Configuring a Threshold for the NQA Alarm..............................................................................................558
7.37.1 Establishing the Configuration Task...................................................................................................559
7.37.2 Configuring the Event Corresponding to the Alarm Threshold..........................................................559
7.37.3 Configuring the Alarm Threshold.......................................................................................................560
7.37.4 Starting the Test Instance....................................................................................................................560
7.37.5 Checking the Configuration.................................................................................................................561
7.38 Configuring a VPLS MFIB Ping to Check the VPLS Network...................................................................562
7.38.1 Establishing the Configuration Task...................................................................................................562
7.38.2 Configuring a VPLS MFIB Ping to Check the Multicast Forwarding................................................563
7.38.3 Checking the Configuration.................................................................................................................564
7.39 Configuring a MAC Ping and Trace Test.....................................................................................................565
7.39.1 Establishing the Configuration Task...................................................................................................565
7.39.2 Configuring Parameters for a MAC Trace Test..................................................................................566
7.39.3 Checking the Configuration.................................................................................................................568
7.40 Configuring GMAC Ping and GMAC Trace to Detect the Connectivity of a VLAN Network..................569
7.40.1 Establishing the Configuration Task...................................................................................................569
7.40.2 Configuring Parameters for a GMAC Ping Test.................................................................................570
7.40.3 Configuring Parameters for a GMAC Trace Test................................................................................571
7.40.4 Checking the Configuration.................................................................................................................572
7.41 Configuring GMAC Ping and GMAC Trace to Detect the Connectivity of a VPLS Network....................574
7.41.1 Establishing the Configuration Task...................................................................................................574
7.41.2 Configuring Parameters for a GMAC Ping Test.................................................................................574
7.41.3 Configuring Parameters for a GMAC Trace Test................................................................................575
7.41.4 Checking the Configuration.................................................................................................................576
7.42 Configuring VPLS PW Ping and VPLS PW Trace Test Instances..............................................................578
7.42.1 Establishing the Configuration Task...................................................................................................578
7.42.2 Configuring Parameters for the VPLS PW Ping Test Instance...........................................................578
7.42.3 Configuring Parameters for the VPLS PW Trace Test Instance.........................................................581
7.42.4 Checking the Configuration.................................................................................................................583
7.43 Configuring a VPLS MFIB Trace to Check the VPLS Network.................................................................584
7.43.1 Establishing the Configuration Task...................................................................................................584
7.47.26 Example for Checking the Multicast Path from the Multicast Source to the Destination Host Through the MTrace Test............661
7.47.27 Example for Configuring the PWE3 Ping Test on a One-Hop PW...................................................665
7.47.28 Example for Configuring the PWE3 Ping Test on a Multi-Hop PW................................................669
7.47.29 Example for Configuring the PWE3 Trace Test on a One-Hop PW.................................................674
7.47.30 Example for Configuring the PWE3 Trace Test on a Multi-Hop PW...............................................678
7.47.31 Configuring the VC Trace Test on an Inter-AS Multi-Hop Kompella L2VPN................................684
7.47.32 Example for Sending Trap Messages When the Transmission Delay Exceeds Thresholds.....................690
7.47.33 Example for Configuring Test Results to Be Sent to the FTP Server...............................................694
7.47.34 Example for Configuring a Threshold for the NQA Alarm..............................................................697
7.47.35 Example for Configuring a VPLS MFIB Ping to Check the VPLS Network...................................699
7.47.36 Example for Configuring a VPLS MFIB Ping to Check the Kompella VPLS Network..................703
7.47.37 Example for Configuring a VPLS MFIB Trace to Check the VPLS Network.................................707
7.47.38 Example for Configuring a VPLS MAC Ping Test...........................................................................711
7.47.39 Example for Configuring a VPLS MAC Trace Test.........................................................................715
7.47.40 Example for Configuring VPLS PW Ping and VPLS PW Trace Test Instances..............................719
7.47.41 Example for Configuring MAC Ping and MAC Trace to Detect the Connectivity of a VLAN Network............724
7.47.42 Example for Configuring a MAC Ping and MAC Trace Test Instance to Detect the Connectivity of a VPLS Network............728
7.47.43 Example for Configuring GMAC Ping and GMAC Trace to Detect the Connectivity of a VLAN Network............734
7.47.44 Example for Configuring GMAC Ping and GMAC Trace to Detect the Connectivity of a VPLS Network............737
7.47.45 Example for Checking an RPF Path from the Multicast Source to the Destination Host of a Specified Multicast VPN Network............742
7.47.46 Example for Configuring NQA Upper/Lower Alarm Threshold and Test Instance Linkage...........750
7.47.47 Example for Configuring the LSP Trace Test for Checking the CR-LSP Hotstandby Tunnel.........753
8 NetStream Configuration.........................................................................................................758
8.1 Introduction to NetStream..............................................................................................................................760
8.1.1 Overview of NetStream.........................................................................................................................760
8.1.2 NetStream Features Supported by the NE80E/40E...............................................................................761
8.2 Configuring Traffic Statistics on an IPv4 Network........................................................................................764
8.2.1 Establishing the Configuration Task.....................................................................................................764
8.2.2 (Optional) Adjusting the AS Domain Mode and Interface Index Value for a NetStream Device..............765
8.2.3 Configuring Processing Mode for NetStream Services.........................................................................766
8.2.4 Enabling NetStream on an Interface......................................................................................................767
8.2.5 (Optional) Configuring TCP-flag Statistics of the Original Traffic......................................................768
8.2.6 (Optional) Configuring Refreshment Parameters of the Template........................................................769
8.2.7 Configuring the Export of NetStream Packets......................................................................................770
8.2.8 Configuring NetStream Sampling.........................................................................................................770
8.2.9 Checking the Configuration...................................................................................................................772
8.3 Collecting the Statistics of IPv6 Unicast Traffic............................................................................................774
8.3.1 Establishing the Configuration Task.....................................................................................................774
8.9.1 Example for Configuring the Statistics of Abnormal Traffic at the User Side on an IPv4 Network............803
8.9.2 Example for Configuring the Statistics of VLANIF Traffic on an IPv4 Network................................806
8.9.3 Example for Configuring the Statistics of GRE Traffic on an IPv4 Network.......................................809
8.9.4 Example for Configuring Traffic Statistics on an MPLS Network.......................................................812
8.9.5 Example for Configuring Aggregation Traffic Statistics......................................................................816
8.9.6 Example for Configuring Backup of Statistics Export..........................................................................820
8.9.7 Example for Configuring Statistics on the NetStream Traffic Aggregated Based on VLAN...............822
8.9.8 Example for Configuring an Interface Index Mapped in NetStream....................................................826
8.9.9 Configuring NetStream Multi-address Output......................................................................................828
8.9.10 Example for Configuring NetStream on a BGP/MPLS IP VPN.........................................................832
8.9.11 Example for Configuring NetStream on an MVPN............................................................................837
8.9.12 Example for Configuring NetStream on a VLL..................................................................................850
8.9.13 Example for Enabling NetStream on a Dynamic Single-Hop PWE3 Network...................................855
8.9.14 Example for Enabling NetStream on a Martini VPLS Network.........................................................860
8.9.15 Example for Enabling NetStream on a Kompella VPLS Network......................................................865
9.7.3 Checking Connectivity of the VPLS Network Through the Tracert Operation....................................888
9.8 Detecting the BGP or MPLS IP VPN Through the Ping or Tracert Operation..............................................888
9.8.1 Establishing the Configuration Task.....................................................................................................889
9.8.2 Checking Connectivity of the BGP or MPLS IP VPN Through the Ping Operation............................889
9.9 Checking the VPLS Network Through VPLS MAC Ping and VPLS MAC Trace.......................................890
9.9.1 Establishing the Configuration Task.....................................................................................................890
9.9.2 Checking the Connectivity of the VPLS Network Through MAC Ping...............................................891
9.9.3 Checking the Connectivity of the VPLS Network Through MAC Trace.............................................892
9.10 Detecting an MPLS Network Through a Ping Operation.............................................................................893
9.10.1 Establishing the Configuration Task...................................................................................................893
9.10.2 Checking Whether IP Forwarding on an MPLS Network Is Normal Through a Ping Operation.......893
9.11 Detecting Trunk Member Links Through a Ping Operation........................................................................894
9.11.1 Establishing the Configuration Task...................................................................................................894
9.11.2 Detecting the Connectivity of Layer 3 Trunk Member Interfaces Through a Ping Operation............895
9.12 Detecting an MH-PW Through the Ping and Tracert Operations................................................................896
9.12.1 Establishing the Configuration Task...................................................................................................896
9.12.2 Detecting the MH-PW Connectivity Through a Ping Operation........................................................900
9.12.3 Detecting the MH-PW Connectivity Through a Tracert Operation....................................................901
9.13 Detecting the PWE3 Network Through a Service Ping Operation...............................................................902
9.13.1 Establishing the Configuration Task...................................................................................................902
9.13.2 Detecting the PWE3 Network Through a Service Ping Operation......................................................903
9.14 Detecting the VLL Accessing the VPLS Network Through a Service Ping Operation...............................904
9.14.1 Establishing the Configuration Task...................................................................................................904
9.14.2 Detecting the VLL Accessing the VPLS Network Through a Service Ping Operation......................905
9.15 Configuring Smart Ping................................................................................................................................906
9.15.1 Establishing the Configuration Task...................................................................................................907
9.15.2 Configuring Smart Ping to Check the Network Connectivity.............................................................907
10 LLDP Configuration...............................................................................................................910
10.1 Introduction..................................................................................................................................................911
10.1.1 Overview of LLDP..............................................................................................................................911
10.1.2 LLDP Features Supported by the NE80E/40E....................................................................................911
10.2 Configuring LLDP........................................................................................................................................913
10.2.1 Establishing the Configuration Task...................................................................................................913
10.2.2 (Optional) Enabling the LLDP Alarm Function..................................................................................915
10.2.3 Enabling LLDP Globally.....................................................................................................................915
10.2.4 (Optional) Disabling LLDP on an Interface........................................................................................916
10.2.5 (Optional) Configuring the Management Address of LLDP...............................................................916
10.2.6 (Optional) Configuring LLDP Attributes............................................................................................917
10.2.7 Checking the Configuration.................................................................................................................919
10.3 Maintaining LLDP........................................................................................................................................920
10.3.1 Clearing the Statistics of LLDP...........................................................................................................921
10.3.2 Monitoring the Running Status of LLDP............................................................................................921
11 Fault Management...................................................................................................................932
11.1 Introduction to Fault Management...............................................................................................................933
11.1.1 Introduction to Fault Management......................................................................................................933
11.2 Configuring Alarm Management..................................................................................................................933
11.2.1 Establishing the Configuration Task...................................................................................................933
11.2.2 Setting the Alarm Severity Level........................................................................................................934
11.2.3 Configuring Delaying Alarm Reporting..............................................................................................934
11.2.4 Configuring Correlated Alarm Suppression........................................................................................935
11.2.5 Checking the Configuration.................................................................................................................936
11.3 Configuring Event Management..................................................................................................................937
11.3.1 Establishing the Configuration Task...................................................................................................937
11.3.2 Configuring Delayed Event Reporting................................................................................................938
11.3.3 Checking the Configuration.................................................................................................................938
11.4 Configuring Fault Isolation for an Entity.....................................................................................................939
11.5 Maintenance..................................................................................................................................................940
11.5.1 Clearing Alarm Messages....................................................................................................................940
11.5.2 Clearing Event Messages.....................................................................................................................941
11.5.3 Maintaining Probe Diagnose...............................................................................................................941
11.6 Configuration Examples...............................................................................................................................942
11.6.1 Example for Configuring Alarm Management....................................................................................942
A Glossary......................................................................................................................................945
B Acronyms and Abbreviations.................................................................................................947
This chapter describes how to configure the information center to control the output of logs,
alarms, and debugging messages.
Information Classification
The information center receives and processes information of the following types:
l Logs
l Debugging information
l Alarms
When information filtering based on severity is enabled, only information whose severity level value is less than or equal to the configured threshold is output.
For example, if the severity level value is configured to 6, only information with a severity level
ranging from 0 to 6 is output.
l The information center receives logs, alarms, and debugging information from all modules.
l The information center outputs information with different severity levels to different
information channels according to the configuration.
l Information is transmitted in different directions based on the relationship between the
information channel and the output direction.
Generally, the information center distributes three types of information classified into eight
levels to 10 information channels. Information is then output to different directions.
As shown in Figure 1-1, logs, alarms, and debugging information have default output channels.
They can be customized to be output from other channels. For example, logs can be configured
to be output to the log cache through Channel 6 rather than the default Channel 4.
Figure 1-1 (default mapping between information channels and output directions):
Channel 0 (console): Console
Channel 1 (monitor): Remote terminal
Channel 2 (loghost): Log host
Channel 3 (trapbuffer): Trap buffer
Channel 4 (logbuffer): Log buffer
Channel 5 (snmpagent): SNMP agent
Channel 6 (channel6): no default output direction
For details about the association between default channels and output directions, see Table 1-2.
Table 1-2 Association between default channels and output directions
Channel number: 4
Default channel name: logbuffer
Output direction: Log buffer
Description: Outputs logs to the log buffer. The router assigns a specified memory area as the log buffer for recording logs.
When multiple log hosts are deployed, logs can be output through one channel or through several channels. For example, some logs can be sent to one log host through Channel 2 (loghost) and other logs to another log host through Channel 6. For easier management, Channel 6 can be renamed.
Format of Logs
Syslog is a sub-function of the information center. It outputs information to a log host through UDP port 514.
Figure 1-2 shows the format of logs.
The fields in a log are described as follows:
<Int_16> (leading characters): Leading characters are added before logs are output to log hosts. Logs saved on the local device do not contain leading characters.
TIMESTAMP (time the information was sent): Available formats for the timestamp are as follows:
l boot: The timestamp in this format indicates a relative time.
l date: The timestamp in this format indicates the system time. Timestamps in logs, alarms, and debugging information are in this format by default.
l short-date: Unlike the date format, timestamps in the short-date format do not indicate the year.
l format-date: The timestamp in this format is another format of the system time.
l none: indicates that the information does not contain any timestamp.
There is a space between the timestamp and the host name.
AAA (module name): Indicates the name of the module that outputs information to the information center.
slot=XXX (location information): Indicates the number of the slot that sends the location information.
Format of Alarms
Figure 1-3 shows the format of the output alarms.
The fields in an alarm are described as follows:
TimeStamp (time the alarm was sent): Available formats for the timestamp are as follows:
l boot: The timestamp in this format indicates a relative time.
l date: The timestamp in this format indicates the system time. Timestamps in logs, alarms, and debugging information are in this format by default.
l short-date: Unlike the date format, timestamps in the short-date format do not indicate the year.
l format-date: The timestamp in this format is another format of the system time.
l none: indicates that the information does not contain a timestamp.
There is a space between the timestamp and the host name.
ModuleName (module name): Indicates the name of the module that generates an alarm.
Severity (severity level): Severity levels available for an alarm message are as follows:
l Critical
l Major
l Minor
l Warning
l Indeterminate
l Cleared
Applicable Environment
The system logs the operation information about devices in real time. It then outputs logs to the
log buffer, log file, console, terminal, and log host for storage and future reference. In this
manner, when faults occur on devices, users can locate the faults based on the logs.
Pre-configuration Tasks
Before configuring the log output, complete the following tasks:
Data Preparation
To configure the log output, you need the following data.
No. Data
1 l Channel number
l Channel name
2 Module name
Context
The information center classifies and outputs information. When it is heavily loaded with
information processing, system performance degrades.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center enable
----End
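The two steps above can be sketched as the following command sequence (the device prompts are illustrative):

```
<HUAWEI> system-view
[HUAWEI] info-center enable
```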
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center channel channel-number name channel-name
A channel is named.
----End
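For example, to rename Channel 6 for easier management (the name loghost2 is only an illustration):

```
<HUAWEI> system-view
[HUAWEI] info-center channel 6 name loghost2
```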
Context
Binary logs support filtering of specified logs by their IDs. To filter certain logs, obtain their IDs through log resolution tools and add the IDs to the log filtering list.
After that, the information center no longer sends these logs to any output direction.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center filter-id { id } * &<1-50>
One or more IDs are added; separate multiple IDs with spaces.
NOTE
Currently, a maximum of 50 IDs can be shielded. The shielded IDs form the log ID filtering list, which is sorted by ID value.
----End
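As a sketch, the following adds two log IDs to the filtering list; the IDs are taken from the display examples later in this chapter and are used here only for illustration:

```
<HUAWEI> system-view
[HUAWEI] info-center filter-id 1098649600 1077481488
```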
Context
During device operation, if too many logs with the same log ID are generated, the information center becomes too busy processing them to process logs with other log IDs, which may even affect running services. The information center therefore monitors the traffic of logs with different log IDs. When the traffic of logs with a specific log ID repeatedly exceeds the threshold during the monitoring period, the information center limits the processing rate of these logs: only the conforming traffic is processed, and the non-conforming traffic is discarded. When the traffic of logs with that log ID falls below the threshold and remains below it for five monitoring periods, the suppression is removed.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center rate-limit threshold value [ byinfoID infoID | bymodule-alias modname
alias ]
The maximum number of logs with the same log ID that the information center can process per second is set.
By default, the information center processes a maximum of 50 logs with the same log ID per second. If a scenario requires a different maximum, you can set separate thresholds for logs with different log IDs.
NOTE
Step 3 Run:
info-center rate-limit global-threshold
The total number of logs that the information center can process each second is set.
Step 4 Run:
info-center rate-limit monitor-period
The period for the information center to limit the log processing rate is set.
Step 5 Run:
info-center rate-limit except
(Optional) The log processing rate limit is canceled for logs with the specified ID or module name.
If logs with the specified ID or module name are never generated in large numbers, you can run this command to exempt them from rate limiting. After this command is run, the configured log processing rate limit no longer applies to those logs.
----End
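For example, based on the display info-center rate-limit threshold output shown later in this chapter, the following raises the per-second threshold for logs with ID 1098649600 (TE_TUNNEL NOTIFY_LSPM_FAIL) from the default 50 to 100:

```
<HUAWEI> system-view
[HUAWEI] info-center rate-limit threshold 100 byinfoID 1098649600
```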
Context
On the HUAWEI NetEngine80E/40E, service modules generate logs and control the volume of
generated logs. The information center processes the received logs.
In some scenarios, service modules, such as ARP and VRRP, generate a large number of repeated
logs within a short period. In this situation, you can enable the output of the statistics about
repeatedly generated logs to protect the information center against the impact of the large log
volume.
NOTE
Logs that are generated consecutively and with identical log IDs and parameters can be regarded as
repeatedly generated logs.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center statistic-suppress enable
NOTE
By default, the output of the statistics about repeatedly generated logs is enabled.
----End
Procedure
l Configure the channel through which logs are output.
1. Run the following command on the router enabled with the information center:
system-view
The channel through which logs are output to the log buffer is configured.
3. (Optional) Run:
info-center logbuffer [ channel { channel-number | channel-name } | size
buffersize ] *
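As an illustration of the optional step above, the following sets the log buffer channel to the default Channel 4 and the buffer size to 512 (both values are examples):

```
<HUAWEI> system-view
[HUAWEI] info-center logbuffer channel 4 size 512
```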
Procedure
Step 1 Send logs to a channel.
1. Run:
system-view
The channel through which logs are output to the log file is configured.
Step 3 (Optional) Configure the size of the log file output by the information center.
1. Run:
info-center logfile size size
----End
Procedure
Step 1 Configure the logs to be output through the channel.
1. On the router configured with the information center, run:
system-view
The channel through which logs are output to the Console is configured.
2. Run:
quit
----End
Procedure
Step 1 Configure the logs to be output through the channel.
1. Run:
system-view
The information channel through which logs are output to the terminal is configured.
2. Run:
quit
3. Run:
terminal logging
----End
Procedure
Step 1 Configure the logs to be output through the channel.
1. On the router configured with the information center, run:
system-view
Step 2 Configure the channel through which logs are output to the log host.
l (On an IPv4 network) Run:
info-center loghost ip-address [ channel { channel-number | channel-name } |
facility local-number |
{ language language-name | binary [ port ] } | { vpn-instance vpn-instance-name
| public-net } ] *
The channel through which logs are output to the log host is configured.
By default, logs are not output to the log host after the information center is enabled.
The system supports the configuration of a maximum of eight log hosts to realize backup
among log hosts.
l (On an IPv6 network) Run:
info-center loghost ipv6 ipv6-address [ channel { channel-number | channel-
name } | facility local-number | { language language-name | binary [ port ] } ]
*
The channel through which logs are output to the log host is configured.
By default, logs are not output to the log host.
The system supports the configuration of a maximum of eight log hosts to implement backup
among log hosts.
Step 3 Run:
info-center loghost source interface-type interface-number
A source interface is configured. This interface is recognized by the log host as the log sending
interface.
A device has multiple interfaces that can send logs. If a source interface is configured, every log sent reports the source interface's address, which helps the log host quickly determine the source device of the logs.
By default, no source interface is configured, and the log host sees the addresses of the actual log sending interfaces on the device.
----End
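A minimal sketch, assuming a log host at 10.1.1.1 and GigabitEthernet1/0/0 as the source interface (the IP address, interface, and facility value local7 are all hypothetical):

```
<HUAWEI> system-view
[HUAWEI] info-center loghost 10.1.1.1 channel loghost facility local7
[HUAWEI] info-center loghost source GigabitEthernet1/0/0
```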
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Run:
system-view
----End
Prerequisites
The configurations of the information center function are complete.
Procedure
l Run the display channel [ channel-number | channel-name ] command to check the
configuration of a channel.
l Run the display info-center [ statistics ] command to check the information recorded by
an information center.
l Run the display logbuffer command to view the information recorded by a log buffer.
l Run the display info-center filter-id { id } command to check whether the ID of a single log has been added to the filtering list.
l Run the display info-center filter-id command to view all log IDs that have been added to the filtering list.
l Run the display info-center rate-limit threshold command to view the threshold of the
log processing rate.
l Run the display info-center rate-limit record command to view the rate at which logs are
suppressed in the information center.
----End
Example
Run the display channel [ channel-number | channel-name ] command to check the contents of
information channels.
<HUAWEI> display channel
channel number:0, channel name:console
MODU_ID NAME ENABLE LOG_LEVEL ENABLE TRAP_LEVEL ENABLE DEBUG_LEVEL
ffff0000 default Y warning Y debugging Y debugging
Run the display info-center command to check the contents of information center.
<HUAWEI> display info-center
Information Center:enabled
Log host:
Console:
channel number : 0, channel name : console
Monitor:
channel number : 1, channel name : monitor
SNMP Agent:
channel number : 5, channel name : snmpagent
Log buffer:
enabled,max buffer size 1024, current buffer size 512,
current messages 512, channel number : 4, channel name : logbuffer
dropped messages 0, overwritten messages 91
Trap buffer:
enabled,max buffer size 1024, current buffer size 256,
current messages 142, channel number:3, channel name:trapbuffer
dropped messages 0, overwritten messages 0
logfile:
channel number : 9, channel name : channel9, language : English
Information timestamp setting:
Run the display logbuffer command to view the logs in the log buffer.
<HUAWEI> display logbuffer
Logging buffer configuration and contents : enabled
Allowed max buffer size : 1024
Actual buffer size : 512
Channel number : 4 , Channel name : logbuffer
Dropped messages : 0
Overwritten messages : 0
Current messages : 1
Run the display info-center filter-id [ id ] command to check whether the log with ID 1098649600 has been added to the filtering list.
<HUAWEI> display info-center filter-id 1098649600
ID : 1098649600
Module : TE_TUNNEL
Alias : NOTIFY_LSPM_FAIL
Content : LSPM return error to TE when processing tunnel commit event!
(TunnelName=[STRING], ErrorCode=[ULONG])
Filtered Number : 0
Run the display info-center filter-id command to view all log IDs that have been added to the filtering list.
<HUAWEI> display info-center filter-id
ID : 1077481488
Module : SHELL
Alias : AUTHCMDSNDMSGFAIL
Filtered Number : 0
ID : 1079676930
Module : VTY
Alias : AUTHENTIMEOUT
Content : Refresh route to slot [ULONG].
Filtered Number : 0
Run the display info-center rate-limit threshold command to view the threshold of the log
processing rate.
<HUAWEI> display info-center rate-limit threshold
Rate limit threshold(per second):
Module Alias Default Config
default 50 50
SHELL CMDRECORD 4294967295 4294967295
SHELL DISPLAY_CMDRECORD 4294967295 4294967295
SHELL HIDECMD 4294967295 4294967295
SHELL DISPLAY_HIDECMD 4294967295 4294967295
TE_TUNNEL NOTIFY_LSPM_FAIL 50 100
Run the display info-center rate-limit record command to view the rate at which logs are
suppressed in the information center.
<HUAWEI> display info-center rate-limit record
Record No.1
InfoID : 1098731520
Module : 6OVER4
Alias : DESTFAIL
Rate limit threshold : 50
Total receive number : 1872
Total drop number : 922
Total send number : 950
Applicable Environment
The device can generate alarms in specific situations to draw the attention of administrators. Alarms can be output to the alarm buffer, log file, Console, terminal, and Network Management System (NMS), through which administrators can easily locate and rectify faults.
Pre-configuration Tasks
Before enabling alarm output, complete the following tasks:
Data Preparation
To configure alarm output, you need the following data.
No. Data
1 l Channel number
l Channel name
2 Module name
Context
Classifying and outputting a large amount of information degrades system performance.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center enable
----End
Context
Do as follows on the router configured with the information center.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center channel channel-number name channel-name
----End
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Configure the alarms to be output through the channel.
1. Run:
system-view
The channel through which alarms are output to the alarm buffer is configured.
After the information center is enabled, alarms are output through Channel 3 to the alarm buffer by default, and the alarm buffer can store 256 messages.
----End
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Send logs to the channel.
1. Run:
system-view
For the log information, the state is on and the allowed information level is warning.
For the alarm information, the state is on and the allowed information level is
debugging.
Step 2 Configure the channel through which alarms are output to the log file.
1. Run:
info-center logfile channel { channel-number | channel-name }
The channel through which alarms are output to the log file is configured.
By default, alarms are output through Channel 9 to the log file after the information center
is enabled.
Step 3 (Optional) Configure the size of the log file output by the information center.
1. Run:
info-center logfile size size
Step 4 (Optional) Configure the maximum number of compressed log files to be stored.
1. Run:
info-center max-logfile-number filenumbers
By default, a maximum of 200 compressed log files can be stored. If the configured maximum is reached, the system deletes earlier compressed log files.
----End
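As a sketch of Steps 2 and 4, the following keeps the default log file channel (Channel 9, per the text above) and lowers the maximum number of stored compressed log files to 100 (an illustrative value):

```
<HUAWEI> system-view
[HUAWEI] info-center logfile channel 9
[HUAWEI] info-center max-logfile-number 100
```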
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Configure the alarms to be output through the channel.
1. Run:
system-view
The channel through which alarms are output to the Console is configured.
By default, alarms are output to the Console through Channel 0.
2. Run:
quit
----End
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Configure the alarms to be output through the channel.
1. Run:
system-view
The channel through which alarms are output to the VTY terminal is configured.
By default, alarms are output to the VTY terminal through Channel 1.
2. Run:
quit
----End
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Configure the alarms to be output through the channel.
1. Run:
system-view
2. Run:
info-center source { module-name | default } channel { channel-number | channel-
name } [ trap { state { off | on } | level severity } * ]
The channel through which alarms are output to the SNMP agent is configured.
By default, alarms are output to the SNMP agent through Channel 5.
3. Run:
snmp-agent
----End
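A sketch of the steps above, directing alarms from all modules to the default SNMP agent channel (Channel 5); the severity level warning is an illustrative choice:

```
<HUAWEI> system-view
[HUAWEI] info-center source default channel 5 trap state on level warning
[HUAWEI] snmp-agent
```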
Prerequisites
The configurations of the alarm output function are complete.
Procedure
l Run the display channel [ channel-number | channel-name ] command to check the
configuration of a channel.
l Run the display info-center [ statistics ] command to check the information recorded by
the information center.
l Run the display trapbuffer [ size value ] command to check the information recorded by
the alarm buffer.
----End
Example
Run the display channel command to show channels.
<HUAWEI> display channel
channel number:0, channel name:console
MODU_ID NAME ENABLE LOG_LEVEL ENABLE TRAP_LEVEL ENABLE DEBUG_LEVEL
ffff0000 default Y warning Y debugging Y debugging
Run the display info-center command to show the data recorded by info-center.
<HUAWEI> display info-center
Information Center:enabled
Log host:
Console:
channel number : 0, channel name : console
Monitor:
channel number : 1, channel name : monitor
SNMP Agent:
channel number : 5, channel name : snmpagent
Log buffer:
enabled,max buffer size 1024, current buffer size 512,
current messages 92, channel number : 4, channel name : logbuffer
dropped messages 0, overwritten messages 0
Trap buffer:
enabled,max buffer size 1024, current buffer size 256,
current messages 30, channel number:3, channel name:trapbuffer
dropped messages 0, overwritten messages 0
logfile:
channel number : 9, channel name : channel9, language : english
Information timestamp setting:
log - date, trap - date, debug - boot
Run the display trapbuffer command. If alarms in the alarm buffer are displayed, the configuration is successful.
<HUAWEI> display trapbuffer
Trapping Buffer Configuration and contents:enabled
allowed max buffer size : 1024
Context
CAUTION
Debugging degrades system performance. Therefore, after debugging, run the undo debugging all command to disable debugging immediately. When the CPU usage is close to 100%, debugging ARP may cause boards to reset. Therefore, confirm the action before you run the command.
Applicable Environment
When faults occur on a device, you can enable the information center to output debugging information for easy fault location and analysis.
Pre-configuration Tasks
Before enabling the output of debugging information, complete the following tasks:
l Connecting the router and the PC correctly
l Configuring routes between the router and the log host
Data Preparation
To enable the output of debugging information, you need the following data.
No. Data
1 l Channel number
l Channel name
2 Module name
Context
Classifying and outputting a large amount of information degrades system performance.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center enable
----End
Context
Do as follows on the router configured with the information center.
Procedure
Step 1 Run:
system-view
Step 2 Run:
info-center channel channel-number name channel-name
----End
Context
Debugging consumes system resources after it is enabled. If debugging remains enabled for a long time, system performance is affected. Therefore, after enabling debugging, set a period after which debugging is automatically disabled.
Procedure
Step 1 Run:
debugging timeout timeout
The period after which debugging is automatically disabled is set.
NOTE
To immediately disable a debugging, press the Ctrl+O hotkeys or run the undo debugging all command.
----End
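For example (the value 20 and the prompt are illustrative; see the command reference for the unit and value range):

```
<HUAWEI> debugging timeout 20
```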
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Configure debugging information to be output through the channel.
1. Run:
system-view
The channel through which debugging information is output to the log file is configured.
Step 3 (Optional) Configure the size of the log file output by the information center.
1. Run:
info-center logfile size size
By default, the debugging information is not saved in the log file. If you want the debugging
information to be saved in the log file, run the info-center source default channel 9
debug state on level severity command to add records to the information channel.
Step 4 (Optional) Configure the maximum number of compressed log files to be stored.
1. Run:
info-center max-logfile-number filenumbers
By default, a maximum of 200 compressed log files can be stored. If the configured maximum is reached, the system deletes earlier compressed log files.
----End
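Based on the note in Step 3, a sketch of enabling debugging output to the log file channel; the keyword debugging is one possible value for severity:

```
<HUAWEI> system-view
[HUAWEI] info-center source default channel 9 debug state on level debugging
```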
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Configure debugging information to be output through the channel.
1. Run:
system-view
2. Run:
info-center console channel { channel-number | channel-name }
The channel through which debugging information is output to the console is configured.
By default, channel 0 (named console) is used. For the log information, the state is on and the allowed information level is warning. For the alarm information, the state is on and the allowed information level is debugging.
3. Run:
quit
----End
Context
Do as follows on the router configured with the information center:
Procedure
Step 1 Configure debugging information to be output through the channel.
1. Run:
system-view
2. Run:
info-center monitor channel { channel-number | channel-name }
The channel through which debugging information is output to the terminal is configured.
3. Run:
quit
----End
Procedure
Step 1 Configure debugging information to be output through the channel.
1. Run:
system-view
Step 2 Configure the channel through which debugging information is output to the log host.
l (On an IPv4 network) Run:
info-center loghost ip-address [ channel { channel-number | channel-name } | facility local-number | { language language-name | binary [ port ] } ] *
The channel through which debugging information is output to the log host is configured.
By default, debugging information is not output to the log host after the information center
is enabled.
The system supports the configuration of a maximum of eight log hosts to realize backup
among log hosts.
l (On an IPv6 network) Run:
info-center loghost ipv6 ipv6-address [ channel { channel-number | channel-
name } | facility local-number | { language language-name | binary [ port ] } ]
*
The channel through which debugging information is output to the log host is configured.
By default, debugging information is not output to the log host after the information center
is enabled.
The system supports the configuration of a maximum of eight log hosts to realize backup
among log hosts.
Step 3 Run:
info-center loghost source interface-type interface-number
A source interface is configured. The log host identifies the address of this interface as the source of the logs.
A device can send logs through multiple interfaces. If a source interface is configured, all logs sent by the device carry that interface's address, which helps the log host quickly determine the device from which the logs were sent.
By default, no source interface is configured, and each log carries the address of the interface that actually sends it.
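For instance, assuming the log host at 10.1.1.6 is reached through GigabitEthernet1/0/0 (the address and interface are illustrative):
<HUAWEI> system-view
[HUAWEI] info-center loghost 10.1.1.6
[HUAWEI] info-center loghost source GigabitEthernet1/0/0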
----End
Prerequisites
The configurations of the debugging information function are complete.
Procedure
l Run the display channel [ channel-number | channel-name ] command to check the
configuration of a channel.
l Run the display info-center [ statistics ] command to check the information recorded by
the information center.
----End
Example
Run the display channel command. For example:
<HUAWEI> display channel 0
channel number:0, channel name:console
MODU_ID NAME ENABLE LOG_LEVEL ENABLE TRAP_LEVEL ENABLE DEBUG_LEVEL
ffff0000 default Y warning Y debugging Y debugging
416e0000 ARP Y warning Y debugging Y debugging
Context
CAUTION
Statistics about the information center cannot be restored after being cleared. Confirm the
action before you run the command.
Procedure
l To clear statistics about the information center, run the reset info-center statistics
command in the user view.
l To clear statistics about the log buffer, run the reset logbuffer command in the user view.
l To clear statistics about the alarm buffer, run the reset trapbuffer command in the user
view.
----End
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
Networking Requirements
As shown in Figure 1-4, Router A is required to transfer logs to a File Transfer Protocol (FTP)
server so that maintenance engineers can easily obtain the operating status of Router A and locate
any faults occurring on it.
(In Figure 1-4, Router A connects to the IP network through GE1/0/0 at 10.2.1.1/16.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure the routing protocol to make the router and the FTP server reachable. (The detailed
procedure is not mentioned here.)
Step 2 Configure the user name and password that are used on the FTP server. (The configuration details
are not provided here.)
# Configure the module enabled to output the logs and the severity levels of the logs that are
allowed to be output.
[RouterA] info-center source ip channel channel9 log level warning
# Configure the channel through which logs are output to the log file.
[RouterA] info-center logfile channel channel9
[RouterA] quit
# View the received logs on the FTP server. (The display is omitted here.)
----End
Configuration Files
#
sysname RouterA
#
info-center source IP channel 9 log level warning
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.1 255.255.0.0
#
ip route-static 10.1.0.0 255.255.0.0 10.2.1.2
#
return
Networking Requirements
As shown in Figure 1-5, logs of multiple types and severity levels must be output to different
log hosts through information channels.
The Router sends the logs (with severity level notification) generated on the Forwarding
Information Base (FIB) module and the IP module to the log host Server 1. Server 3 functions
as a backup server of Server 1.
The Router sends the logs (with severity level warning) generated on the Point-to-Point Protocol
(PPP) module and the AAA module to the log host Server 2. Server 4 functions as a backup
server of Server 2.
(In Figure 1-5, the Router connects through POS1/0/0 at 172.16.0.1/24 to the log hosts, including Server 2 at 10.2.1.1/24 and Server 4 at 10.2.1.2/24.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure routing protocols to ensure that the router and log servers are reachable to each other.
(The detailed procedure is not mentioned here.)
Step 2 Configure the channel for outputting logs.
# Enable the information center.
<HUAWEI> system-view
[HUAWEI] info-center enable
# Specify Server 2 as the log server and Server 4 as the backup log server to receive the logs
from the PPP module and the AAA module. The logs are output by Local4.
[HUAWEI] info-center loghost 10.2.1.1 channel loghost1 facility local4 language
english
[HUAWEI] info-center loghost 10.2.1.2 channel loghost1 facility local4 language
english
If the host runs a UNIX or Linux operating system with the syslog function enabled, it can collect
the logs.
If the host has a Linux operating system, choose from the following options:
If the host has third-party log software installed, this software can be configured to implement
the host's log collection function. For example, the HUAWEI iManager N2000 supports various
log management settings and can therefore receive, filter, save, and forward the Syslog messages
sent by the device.
For the procedure for configuring log services on the HUAWEI iManager N2000, refer to the
HUAWEI iManager N2000 DM - Compound Package User Manual Volume I.
----End
Configuration Files
#
info-center channel 6 name loghost1
info-center source FIB channel 2 log level notification
info-center source IP channel 2 log level notification
info-center source PPP channel 6 log level warning
info-center source AAA channel 6 log level warning
info-center loghost source Pos1/0/0
info-center loghost 10.1.1.1 facility local2
info-center loghost 10.1.1.2 facility local2
info-center loghost 10.2.1.1 channel 6 facility local4
info-center loghost 10.2.1.2 channel 6 facility local4
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 172.16.0.1 255.255.255.0
#
ip route-static 10.1.1.0 255.255.255.0 172.16.0.2
ip route-static 10.2.1.0 255.255.255.0 172.16.0.2
#
return
Networking Requirements
As shown in Figure 1-6, logs of multiple types and severity levels must be output to different
log hosts through information channels.
The Router sends the logs (with severity level notification) generated on the Forwarding
Information Base (FIB) module and the IP module to the log host Server 1. Server 3 functions
as a backup server of Server 1.
The Router sends the logs (with severity level warning) generated on the Point-to-Point Protocol
(PPP) module and the AAA module to the log host Server 2. Server 4 functions as a backup
server of Server 2.
Both the router and the log hosts need to be configured.
A management VPN instance is configured on the router.
Figure 1-6 Networking diagram of configuring log messages to be output to the log host on the
public network when the management VPN instance is used
(The Router's POS1/0/0 at 172.16.0.1/24 connects to Server 1 at 10.1.1.1/24, Server 3 at 10.1.1.2/24, Server 2 at 10.2.1.1/24, and Server 4 at 10.2.1.2/24.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure routing protocols to ensure that the router and log servers are reachable to each other.
(The detailed procedure is not mentioned here.)
# Specify Server 2 as the log server and Server 4 as the backup log server to receive the logs
from the PPP and the AAA modules. The logs are output by Local4.
[HUAWEI] info-center loghost 10.2.1.1 public-net channel loghost1 facility local4
language english
[HUAWEI] info-center loghost 10.2.1.2 public-net channel loghost1 facility local4
language english
log management settings and can therefore receive, filter, save, and forward the Syslog
messages sent by the device.
For the procedure for configuring log services on the HUAWEI iManager N2000, refer to the
HUAWEI iManager N2000 DM - Compound Package User Manual Volume I.
Step 8 Verify the configuration.
# Display the configuration of the log host.
<HUAWEI> display info-center
Information Center:enabled
Log host:
the interface name of the source address:pos1/0/0
10.1.1.1, channel number 2, channel name loghost,
language english , host facility
local2
10.1.1.2, channel number 2, channel name loghost,
language english , host facility
local2
10.2.1.1, channel number 6, channel name loghost1
language english , host facility
local4
10.2.1.2, channel number 6, channel name loghost1
language english , host facility
local4
Console:
channel number : 0, channel name : console
Monitor:
channel number : 1, channel name : monitor
SNMP Agent:
channel number : 5, channel name : snmpagent
Log buffer:
enabled,max buffer size 1024, current buffer size 512,
current messages 50, channel number : 4, channel name : logbuffer
dropped messages 13, overwritten messages 3
Trap buffer:
enabled,max buffer size 1024, current buffer size 256,
current messages 2, channel number:3, channel name:trapbuffer
dropped messages 0, overwritten messages 0
Information timestamp setting:
log - date, trap - date, debug - boot
----End
Configuration Files
#
info-center channel 6 name loghost1
info-center source FIB channel 2 log level notification
info-center source IP channel 2 log level notification
info-center source PPP channel 6 log level warning
info-center source AAA channel 6 log level warning
info-center loghost source Pos1/0/0
info-center loghost 10.1.1.1 public-net facility local2
info-center loghost 10.1.1.2 public-net facility local2
info-center loghost 10.2.1.1 public-net channel 6 facility local4
info-center loghost 10.2.1.2 public-net channel 6 facility local4
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 172.16.0.1 255.255.255.0
#
ip route-static 10.1.1.0 255.255.255.0 172.16.0.2
Networking Requirements
As shown in Figure 1-7, binary logs generated on Router A are sent to the log host in real time.
Users or maintenance personnel can analyze the logs through log analysis tools and locate
faults.
Figure 1-7 Example for Configuring Binary Logs to be sent to the Log Host
(Router A's GE1/0/0 at 10.1.1.1/24 connects to the log host at 10.1.1.6/24.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable the information center on the router.
2. Add the ID of the log to be filtered.
3. Configure binary logs to be sent to the log host.
Data Preparation
To complete the configuration, you need the following data:
l ID of the log to be filtered
l IP address of the FTP server
l User name and password used for logging into the FTP server
l IP address of the log host
Procedure
Step 1 Configure IP addresses and routes between Router A and Loghost. (The detailed procedure is
not mentioned here.)
Step 2 Enable the information center.
# Enable the information center.
<HUAWEI> system-view
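The remaining steps are condensed in the configuration file at the end of this example; entered at the command line, they would look like this (the filter ID and log host address are taken from that file):
[HUAWEI] info-center filter-id 1077514264
[HUAWEI] info-center loghost 10.1.1.6 binary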
----End
Configuration Files
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
info-center filter-id 1077514264
info-center loghost 10.1.1.6 binary
#
return
Networking Requirements
As shown in Figure 1-8, alarms must be output first to the SNMP agent and then be
transmitted to the NM station through the SNMP agent.
(In Figure 1-8, the NM station at 10.1.1.1/24 connects to the agent's GE1/0/0 at 10.1.1.2/24.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable the information center on the router.
2. Specify the module enabled to output logs and configure the channel through which the
alarm is output.
3. Enable outputting alarm to the SNMP agent.
4. Enable transmitting alarms to the NM Station through SNMP.
Data Preparation
To complete the configuration, you need the following data:
l Information channel number
l Module enabled to output alarms
l Severity levels of alarms
Procedure
Step 1 Enable the information center.
<HUAWEI> system-view
[HUAWEI] info-center enable
Step 2 Specify the module enabled to output alarms and configure the channel used to output alarms.
# Specify the module enabled to output alarms and configure the channel used to output alarms.
[HUAWEI] info-center source ip channel channel7 trap level informational state on
NOTE
By default, alarms are output through the SNMP agent and information about all modules is displayed.
# Start the SNMP agent and set the SNMP version to SNMPv2c.
[HUAWEI] snmp-agent sys-info version v2c
# View the alarms output through the channel selected by the SNMP agent.
[HUAWEI] display channel 7
channel number:7, channel name:channel7
MODU_ID NAME ENABLE LOG_LEVEL ENABLE TRAP_LEVEL ENABLE DEBUG_LEVEL
ffff0000 default Y debugging Y debugging N debugging
416a0000 IP Y debugging Y informational N debugging
----End
Configuration Files
#
info-center source IP channel 7 trap level informational
info-center snmp channel 7
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
snmp-agent
snmp-agent local-engineid 000007DB7F00000100003598
snmp-agent community write write
snmp-agent community read public
snmp-agent community write private
snmp-agent sys-info version v2c v3
snmp-agent target-host trap address udp-domain 10.1.1.1 params securityname public
snmp-agent trap enable
#
return
Networking Requirements
As shown in Figure 1-9, it is required to output the debugging information of the Address
Resolution Protocol (ARP) module to the console.
(In Figure 1-9, a PC is connected to the Router as the console terminal.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable the information center.
2. Set the logs to be output to the console and the information module.
3. Configure the channel through which the debugging information is output.
4. Enable the terminal monitor function and display the debugging information.
Data Preparation
To complete the configuration, you need the following data:
l Information channel number
l Module enabled to output the logs
l Information severity level
Procedure
Step 1 Enable the information center.
<HUAWEI> system-view
Info: Current terminal monitor is on.
[HUAWEI] info-center enable
Info: Current terminal debugging is on.
Step 2 Allow debugging information of the ARP module to be output to the console, with the severity
level of the information as debugging.
[HUAWEI] info-center source arp channel console debug level debugging
[HUAWEI] info-center console channel console
[HUAWEI] quit
Step 3 Enable the terminal monitor function to display the debugging information.
<HUAWEI> terminal monitor
<HUAWEI> terminal debugging
----End
Configuration Files
#
info-center source ARP channel 0
#
return
2 SNMP Configuration
The Simple Network Management Protocol (SNMP) is a standard network management protocol
widely used on TCP/IP networks. It uses a central computer (a network management station)
that runs network management software to manage network elements. There are three SNMP
versions, SNMPv1, SNMPv2c, and SNMPv3. You can configure one or more versions, if
needed.
As network services develop, more devices are deployed on existing networks. The devices are
not close to the central equipment room where a network administrator works. When faults occur
on the remote devices, the network administrator cannot detect, locate or rectify faults
immediately because the devices do not report the faults. This affects maintenance efficiency
and greatly increases maintenance workload.
To solve this problem, equipment vendors have provided network management functions in
some products. These functions allow the NM station to query the status of remote devices, and
devices can send alarms to the NM station in the case of particular events.
SNMP operates at the application layer of the IP suite and defines how to transmit management
information between the NM station and devices. SNMP defines several device management
operations that the NM station can perform and allows devices to send alarms to notify the NM
station of device faults.
SNMP Components
SNMP device management uses the following three components:
l NM station: sends various query packets to query managed devices and receives alarms
from these devices.
l Agent: is a network-management process on a managed device. An agent has the following
functions:
– Receives and parses query packets sent from the NM station.
– Reads or writes management variables based on the query type, and generates and sends
response packets to the NM station.
– Sends an alarm to the NM station when triggering conditions defined on each protocol
module corresponding to the alarm are met. For example, the system view is displayed
or closed, or the device is restarted.
l Managed device: is managed by an NM station and generates and reports alarms to the NM
station.
Figure 2-1 shows the relationship between the NM station and agent.
(Figure 2-1: the NM station exchanges request and response packets with the agent, and the agent sends trap packets to the NM station through UDP port 162.)
MIB
SNMP uses a hierarchical naming convention to identify and distinguish managed objects. This
hierarchical structure is similar to a tree whose nodes represent managed objects. As shown in
Figure 2-2, a managed object can be identified by the path from the root to the node
representing it.
(Figure 2-2 shows a MIB tree in which the nodes at each level are numbered; object B lies on the path 1.2.1.1.)
As shown in Figure 2-2, object B is uniquely identified by a string of numbers, {1.2.1.1}. Such
a number string is called an Object Identifier (OID). A MIB tree is used to describe the hierarchy
of data in a MIB that collects the definitions of variables on the managed devices.
A user can use a standard MIB or define a MIB based on certain standards. Using a standard
MIB reduces the cost of agent deployment and therefore the cost of the entire network
management system.
SNMP Operations
SNMP uses Get and Set operations to replace a complex command set. The operations described
in Figure 2-3 can implement all functions.
NOTE
The NM station uses SNMP to monitor and manage network devices. It cannot be used to monitor and
manage the operation of the entire network. To monitor and manage the operation of an entire network,
for example, to learn network performance or collect network statistics, see the Configuration Guide -
System Management for details about the configurations of RMON and RMON2, NetStream, and fault and
performance management.
The NE80E/40E supports SNMPv1, SNMPv2c, and SNMPv3. Table 2-2 lists the features
supported by SNMP, and Table 2-3 shows the support of different SNMP versions for the
features. Table 2-4 describes the usage scenarios of SNMP versions, which will help you choose
a proper version for the communication between an NM station and managed devices based on
the network operation conditions.
NOTE
When multiple NM stations using different SNMP versions manage the same device in a network,
SNMPv1, SNMPv2c, and SNMPv3 can all be configured on the device for its communication with all the
NM stations.
Feature Description
If you plan to build a new network, choose an SNMP version based on your usage scenario. If
you plan to expand or upgrade an existing network, choose an SNMP version to match the SNMP
version running on the NM station to ensure the normal communication between managed
devices and the NM station.
Applicable Environment
SNMP needs to be deployed in a network to allow the NM station to manage network devices.
If the network has a few devices and its security is good, such as a campus network or a small
enterprise network, SNMPv1 can be deployed to ensure the normal communication between the
NM station and managed devices.
Pre-configuration Tasks
Before configuring a device to communicate with an NM station by running SNMPv1, complete
the following task:
l Configuring a routing protocol to ensure that the router and NM station are routable
Data Preparation
Before configuring a device to communicate with an NM station by running SNMPv1, you need
the following data.
No. Data
Context
Steps 3, 4, and 5 are mandatory for the configuration of basic SNMP functions. After the
configurations are complete, basic SNMP communication can be conducted between the NM
station and managed device.
Procedure
Step 1 Run:
system-view
l To configure a destination IPv6 address for the alarms and error codes sent from the device,
run:
snmp-agent target-host trap ipv6 address udp-domain ipv6-address [ udp-port port-number ] params securityname security-string [ v1 ] [ private-netmanager ]
l The default destination UDP port number is 162. In some special cases (for example, port
mirroring is configured to prevent a well-known port from being attacked), the parameter
udp-port can be used to specify a non-well-known UDP port number. This ensures normal
communication between the NM station and managed device.
l If the alarms sent from the managed device to the NM station need to be transmitted over a
public network, the parameter public-net needs to be configured. If the alarms sent from the
managed device to the NM station need to be transmitted over a private network, the
parameter vpn-instance vpn-instance-name needs to be used to specify a VPN that will take
over the sending task.
l The parameter securityname identifies the alarm sender, which will help you learn the alarm
source.
l If the NM station and managed device are both Huawei products, the parameter private-
netmanager can be configured to add more information to alarms, such as the alarm type,
alarm sequence number, and alarm sending time. The information will help you locate and
rectify faults more quickly.
When the NM station manages many devices, this step is required so that the NM station
administrator knows the equipment administrators' contact information and locations. This
allows the NM station administrator to contact the equipment administrators quickly for fault
location and rectification.
To configure both the equipment administrator's contact information and location, you must run
the command twice to configure them separately.
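For example (the contact string is illustrative; the location string matches the verification output shown later in this chapter):
<HUAWEI> system-view
[HUAWEI] snmp-agent sys-info contact NOC-on-duty
[HUAWEI] snmp-agent sys-info location Beijing China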
The maximum size of an SNMP packet that the device can receive or send is set.
By default, the maximum size of an SNMP packet that the device can receive or send is 12000
bytes.
After the maximum size is set, the device discards any SNMP packet that is larger than the
set size. The maximum SNMP packet size set on the device must also match the packet size that
the NM station can process; otherwise, the NM station cannot process the SNMP packets sent
from the device.
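For example, to limit SNMP packets to 1800 bytes (the value matches the verification output shown later in this chapter):
<HUAWEI> system-view
[HUAWEI] snmp-agent packet max-size 1800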
----End
Follow-up Procedure
After the configurations are complete, basic communication can be conducted between the NM
station and managed device.
l Access control allows any NM station that uses the community name to monitor and manage
all the objects on the managed device.
l The managed device sends alarms generated by the modules that are enabled by default to
the NM station. The modules include FM, NQA, DATASYNC, PM, SINDEX, L2IF
If finer device management is required, follow directions below to configure a managed device:
l To allow a specified NM station that uses the community name to manage specified objects
on the device, follow the procedure described in Controlling the NM Station's Access to
the Device.
l To allow a specified module on the managed device to report alarms to the NM station,
follow the procedure described in Configuring the Trap Function.
l If the NM station and managed device are both Huawei products, follow the procedure
described in Enabling the SNMP Extended Error Code Function to allow the device to
send more types of error codes. This allows more specific error identification and facilitates
your fault location and rectification.
Context
If a device is managed by multiple NM stations that use the same community name, note the
following points:
l If all the NM stations that use the community name need to have rights to access the objects
in the Viewdefault view (1.3.6.1), skip the following steps.
l If some of the NM stations that use the community name need to have rights to access the
objects in the Viewdefault view (1.3.6.1), skip Step 5.
l If all the NM stations need to manage specified objects on the device, skip Step 2, Step 3,
and Step 4.
l If some of the NM stations that use the community name need to manage specified objects
on the device, perform all the following steps.
Procedure
Step 1 Run:
system-view
Step 2 Run:
acl acl-number
A basic ACL is created to filter the NM station users that can manage the device.
Step 3 Run:
rule [ rule-id ] { deny | permit } source { source-ip-address source-wildcard |
any }
Step 4 Run:
quit
Step 5 Run:
snmp-agent mib-view { excluded | included } view-name oid-tree
By default, an NM station has rights to access the objects in the Viewdefault view (1.3.6.1).
l If a few MIB objects on a device or some objects in the current MIB view do not or no longer
need to be managed by the NM station, excluded needs to be specified in the related command
to exclude these MIB objects.
l If a few MIB objects on the device or some objects in the current MIB view need to be
managed by the NM station, included needs to be specified in the related command to include
these MIB objects.
Step 6 Run:
snmp-agent community { read | write } { community-name | cipher community-name } [
mib-view view-name | acl acl-number ]*
l read needs to be configured in the command if the NM station administrator needs the read
permission in the specified view in some cases. For example, a low-level administrator needs
to read certain data. write needs to be configured in the command if the NM station
administrator needs the read and write permissions in the specified view in some cases. For
example, a high-level administrator needs to read and write certain data.
l cipher is used to display the community name in cipher text. It can be configured in the
command to improve security. If the parameter is configured, the administrator needs to
remember the community name. If the community name is forgotten, it cannot be obtained
by querying the device.
l If some of the NM stations that use the community name need to have rights to access the
objects in the Viewdefault view (1.3.6.1), mib-view view-name does not need to be
configured in the command.
l If all the NM stations that use the community name need to manage specified objects on the
device, acl acl-number does not need to be configured in the command.
l If some of the NM stations that use the community name need to manage specified objects
on the device, both mib-view and acl need to be configured in the command.
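Putting Steps 2 through 6 together, a sketch that restricts read access to the NM station at 1.1.1.1 and to a hypothetical view named view1 might look as follows (the view's OID subtree is illustrative, and the ACL-view prompt may differ slightly by software version):
<HUAWEI> system-view
[HUAWEI] acl 2000
[HUAWEI-acl-basic-2000] rule 5 permit source 1.1.1.1 0
[HUAWEI-acl-basic-2000] quit
[HUAWEI] snmp-agent mib-view included view1 1.3.6.1.2.1
[HUAWEI] snmp-agent community read public mib-view view1 acl 2000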
----End
Follow-up Procedure
After the access rights are configured, especially after the IP address of the NM station is
specified, if the IP address changes (for example, the NM station changes its location, or IP
addresses are reallocated due to network adjustment), you need to change the IP address of the
NM station in the ACL. Otherwise, the NM station cannot access the device.
Procedure
Step 1 Run:
system-view
Step 2 Run:
snmp-agent extend error-code enable
By default, SNMP standard error codes are used. After the extended error code function is
enabled, extended error codes can be sent to the NM station.
----End
Procedure
Step 1 Run:
system-view
Step 2 Run:
snmp-agent trap enable
NOTE
If the snmp-agent trap enable command is run to enable the trap functions of all modules, note the
following points:
l To disable the trap functions of all modules, you need to run the snmp-agent trap disable command.
l To restore the trap functions of all modules to the default status, you need to run the undo snmp-agent
trap enable or undo snmp-agent trap disable command.
l To disable one trap function of a module, you need to run the undo snmp-agent trap enable feature-
name command.
Step 3 Run:
snmp-agent trap enable feature-name feature-name trap-name trap-name
A trap function of a feature module is enabled. This means that an alarm of a specified feature
can be sent to the NM station.
The undo snmp-agent trap enable feature-name command can be used to disable a trap
function of a module.
Step 4 Run:
snmp-agent notify-filter-profile { excluded | included } profile-name oid-tree
At present, the snmp-agent notify-filter-profile command accepts either a variable OID in the
form of a character string or an object name. If the entered parameter is a character string, the
asterisk (*) can be used as a mask. The asterisk (*) can be placed only in the middle of the
string, not at the beginning or end.
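As an illustration of the mask rule, the following places an asterisk in the middle of an OID string (the profile name and OID are hypothetical):
<HUAWEI> system-view
[HUAWEI] snmp-agent notify-filter-profile included p1 1.3.6.1.6.3.*.1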
Step 5 Run:
snmp-agent trap source interface-type interface-number
The source interface for trap messages is specified.
Step 6 Run:
snmp-agent trap queue-size size
The length of the queue storing trap messages to be sent to the destination host is set.
The queue length depends on the number of generated trap messages. If the router frequently
generates trap messages, a longer queue can be set to prevent trap messages from being
lost.
Step 7 Run:
snmp-agent trap life seconds
----End
Prerequisites
The configurations of basic SNMPv1 functions are complete.
Procedure
l Run the display snmp-agent community command to check the configured community
name.
l Run the display snmp-agent sys-info version command to check the enabled SNMP
version.
l Run the display acl acl-number command to check the rules in the specified ACL.
l Run the display snmp-agent mib-view command to check the MIB view.
l Run the display snmp-agent sys-info contact command to check the equipment
administrator's contact information.
l Run the display snmp-agent sys-info location command to check the location of the
device.
l Run the display current-configuration | include max-size command to check the
allowable maximum size of an SNMP packet.
l Run the display current-configuration | include trap command to check trap
configurations.
l Run the display snmp-agent extend error-code status command to check whether the
SNMP extended error code feature is enabled.
----End
Example
When the configuration is complete, run the display snmp-agent community command. You
can view the configured community name.
<HUAWEI> display snmp-agent community
Community name:%$%$"b>4*x#Vg&|Sr"PmUryU,A8/%$%$
Group name:%$%$"b>4*x#Vg&|Sr"PmUryU,A8/%$%$
Storage-type: nonVolatile
Community name:%$%$(;FP5lytUA3nc-QSq111,ri`%$%$
Group name:%$%$(;FP5lytUA3nc-QSq111,ri`%$%$
Storage-type: nonVolatile
Run the display snmp-agent sys-info version command. You can view the SNMP version
running on the agent.
<HUAWEI> display snmp-agent sys-info version
SNMP version running in the system:
SNMPv1 SNMPv3
Run the display acl acl-number command. You can view the rules in the specified ACL.
<HUAWEI> display acl 2000
Basic ACL 2000, 1 rule
Acl's step is 5
rule 5 permit source 1.1.1.1 0 (0 times matched)
Run the display snmp-agent mib-view command. You can view the MIB view.
<HUAWEI> display snmp-agent mib-view
View name:ViewDefault
MIB Subtree:internet
Subtree mask:
Storage-type: nonVolatile
View Type:included
View status:active
View name:ViewDefault
MIB Subtree:snmpUsmMIB
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
View name:ViewDefault
MIB Subtree:snmpVacmMIB
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
View name:ViewDefault
MIB Subtree:snmpModules.18
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
Total number is 1
Run the display snmp-agent sys-info contact command. You can view the equipment
administrator's contact information.
<HUAWEI> display snmp-agent sys-info contact
The contact person for this managed node:
R&D Beijing, Huawei Technologies co.,Ltd.
Run the display snmp-agent sys-info location command. You can view the location of the
device.
<HUAWEI> display snmp-agent sys-info location
The physical location of this node:
Beijing China
Run the display current-configuration | include max-size command. You can view the
allowable maximum size of an SNMP packet.
<HUAWEI> display current-configuration | include max-size
snmp-agent packet max-size 1800
Run the display current-configuration | include trap command. You can view trap
configurations.
<HUAWEI> display current-configuration | include trap
snmp-agent trap enable standard
Run the display snmp-agent extend error-code status command. You can view whether the
SNMP extended error code feature is enabled.
<HUAWEI> display snmp-agent extend error-code status
Extend error-code status:enabled
After basic SNMP functions are configured as described below, the NM station can
manage the device in these manners. For details on how to configure finer management, such
as accurate access control or alarm module specification, see the follow-up configuration
procedures.
Applicable Environment
SNMP needs to be deployed in a network to allow the NM station to manage network devices.
If your network is large, with many devices, and either its security requirements are not strict
or its security is already ensured (for example, on a VPN), but services on the network are so
busy that traffic congestion may occur, SNMPv2c can be deployed to ensure communication
between the NM station and managed devices.
Pre-configuration Tasks
Before configuring a device to communicate with an NM station by running SNMPv2c, complete
the following task:
l Configuring a routing protocol to ensure that the router and NM station are routable
Data Preparation
Before configuring a device to communicate with an NM station by running SNMPv2c, you
need the following data.
No. Data
1 SNMP version, SNMP community name, address of the alarm destination host,
administrator's contact information and location, and the maximum SNMP packet
size
Context
Steps 3, 4, and 5 are mandatory for the configuration of basic SNMP functions. After the
configurations, basic SNMP communication can be conducted between the NM station and
managed device.
Procedure
Step 1 Run:
system-view
By default, the SNMP agent function is disabled. Running any command that contains the
keyword snmp-agent enables the SNMP agent function, so a separate enabling step is optional.
Step 3 Run:
snmp-agent sys-info version v2c
After SNMPv2c is enabled on the managed device, the device supports both SNMPv2c and
SNMPv3. This means that the device can be monitored and managed by NM stations running
SNMPv2c and SNMPv3.
Step 4 Run:
snmp-agent community { read | write } community-name
l read must be configured in the command if the NM station administrator requires the read
permission in a specified view in some cases. For example, a low-level administrator must
read certain data.
l write must be configured in the command if the NM station administrator requires the read
and write permissions in a specified view in some cases. For example, a high-level
administrator must read and write certain data.
After the community name is set, if no MIB view is configured, the NM station that uses the
community name has rights to access objects in the Viewdefault view.
Step 5 Choose one of the following commands as needed to configure the destination IP address for
the alarms and error codes sent from the device.
l If the network is an IPv4 network, configure the device to send either traps or informs to the
NM station.
NOTE
– To configure a destination IP address for the informs and error codes sent from the device,
run:
snmp-agent target-host inform address udp-domain ip-address params
securityname security-string v2c
l To configure a destination IPv6 address for the alarms and error codes sent from the device,
run:
snmp-agent target-host trap ipv6 address udp-domain ip-address [ udp-port
port-number ] params securityname security-string [ v2c ] [ private-netmanager ]
NOTE
This step is required when the NM station administrator must know equipment administrators'
contact information and locations when the NM station manages many devices. This allows the
NM station administrator to contact the equipment administrators quickly for fault location and
rectification.
To configure both the equipment administrator's contact information and location, you must run
the command twice to configure them separately.
The maximum size of an SNMP packet that the device can receive or send is set.
By default, the maximum size of an SNMP packet that the device can receive or send is 12000
bytes.
After the maximum size is set, the device discards any SNMP packet that is larger than the
set size. The maximum SNMP packet size set on the device must not exceed the size of a
packet that the NM station can process; otherwise, the NM station cannot process the SNMP
packets sent from the device.
----End
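The mandatory steps above can be combined into a minimal configuration sketch. The community name nms-read, NM station address 10.1.1.1, and the packet size shown are hypothetical placeholder values, not values mandated by this guide; adapt them to your network.

```
# Hypothetical values: community name nms-read, NM station 10.1.1.1.
<HUAWEI> system-view
[HUAWEI] snmp-agent sys-info version v2c
[HUAWEI] snmp-agent community read nms-read
[HUAWEI] snmp-agent target-host trap address udp-domain 10.1.1.1 params securityname nms-read v2c
[HUAWEI] snmp-agent packet max-size 12000
```

After these commands, run the display snmp-agent community and display snmp-agent sys-info version commands to confirm that the configuration has taken effect.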
Follow-up Procedure
After the configurations are complete, basic communication can be conducted between the NM
station and managed device.
l Access control allows any NM station that uses the community name to monitor and manage
all the objects on the managed device.
l The managed device sends alarms generated by the modules that are open by default to the
NM station. The modules include FM, NQA, DATASYNC, PM, SINDEX, L2IF
(hwMacUsageRaisingThreshold and hwMacUsageFallingThreshold), HGMP, RRPP, FR,
HDLC, RMON, MP, VRRP, FIB, SNMP (coldStart and warmStart), and DHCPSNP.
If finer device management is required, follow directions below to configure the managed
device:
l To allow a specified NM station that uses the community name to manage specified objects
of the device, follow the procedure described in Controlling the NM Station's Access to
the Device.
l To allow a specified module on the managed device to report alarms to the NM station,
follow the procedure described in Configuring the Trap Function.
l If the NM station and managed device are both Huawei products, follow the procedure
described in Enabling the SNMP Extended Error Code Function to allow the device to
send more types of error codes. This allows more specific error identification and facilitates
your fault location and rectification.
Context
If a device is managed by multiple NM stations that use the same community name, note the
following points:
l If all the NM stations that use the community name need to have rights to access the objects
in the Viewdefault view (1.3.6.1), skip the following steps.
l If some of the NM stations that use the community name need to have rights to access the
objects in the Viewdefault view (1.3.6.1), skip Step 5.
l If all the NM stations need to manage specified objects on the device, skip Step 2, Step 3,
and Step 4.
l If some of the NM stations that use the community name need to manage specified objects
on the device, perform all the following steps.
Procedure
Step 1 Run:
system-view
Step 2 Run:
acl acl-number
A basic ACL is created to filter the NM station users that can manage the device.
Step 3 Run:
rule [ rule-id ] { deny | permit } source { source-ip-address source-wildcard |
any }
Step 4 Run:
quit
Step 5 Run:
snmp-agent mib-view { excluded | included } view-name oid-tree
By default, an NM station has rights to access the objects in the Viewdefault view (1.3.6.1).
l If a few MIB objects on a device, or some objects in the current MIB view, do not need, or
no longer need, to be managed by the NM station, excluded needs to be specified in the related
command to exclude these MIB objects.
l If a few MIB objects on the device or some objects in the current MIB view need to be
managed by the NM station, included needs to be specified in the related command to include
these MIB objects.
Step 6 Run:
snmp-agent community { read | write } { community-name | cipher community-name } [
mib-view view-name | acl acl-number ]*
l read needs to be configured in the command if the NM station administrator needs the read
permission in the specified view in some cases. For example, a low-level administrator needs
to read certain data. write needs to be configured in the command if the NM station
administrator needs the read and write permissions in the specified view in some cases. For
example, a high-level administrator needs to read and write certain data.
l cipher is used to display the community name in cipher text. It can be configured in the
command to improve security. If the parameter is configured, the administrator needs to
remember the community name. If the community name is forgotten, it cannot be obtained
by querying the device.
l If some of the NM stations that use the community name need to have rights to access the
objects in the Viewdefault view (1.3.6.1), mib-view view-name does not need to be
configured in the command.
l If all the NM stations that use the community name need to manage specified objects on the
device, acl acl-number does not need to be configured in the command.
l If some of the NM stations that use the community name need to manage specified objects
on the device, both mib-view and acl need to be configured in the command.
----End
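As a sketch of the whole procedure, assuming a hypothetical NM station at 10.1.1.1, ACL 2001, view name nms2view, and community name comaccess (the subtree 1.3.6.1.2.1, mib-2, is also only an example):

```
# Hypothetical values: NM station 10.1.1.1, ACL 2001, view nms2view, community comaccess.
[HUAWEI] acl 2001
[HUAWEI-acl-basic-2001] rule 5 permit source 10.1.1.1 0
[HUAWEI-acl-basic-2001] quit
[HUAWEI] snmp-agent mib-view included nms2view 1.3.6.1.2.1
[HUAWEI] snmp-agent community read cipher comaccess mib-view nms2view acl 2001
```

With both mib-view and acl configured, only the NM station permitted by ACL 2001 can read the objects in the nms2view view.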
Follow-up Procedure
After the access rights are configured, especially after the IP address of the NM station is
specified, if the IP address changes (for example, the NM station changes its location, or IP
addresses are reallocated due to network adjustment), you need to change the IP address of the
NM station in the ACL. Otherwise, the NM station cannot access the device.
Procedure
Step 1 Run:
system-view
----End
Procedure
Step 1 Run:
system-view
NOTE
If the snmp-agent trap enable command is run to enable the trap functions of all modules, note the
following points:
l To disable the trap functions of all modules, you need to run the snmp-agent trap disable command.
l To restore the trap functions of all modules to the default status, you need to run the undo snmp-agent
trap enable or undo snmp-agent trap disable command.
l To disable one trap function of a module, you need to run the undo snmp-agent trap enable
feature-name command.
Step 3 Run:
snmp-agent trap enable feature-name feature-name trap-name trap-name
A trap function of a feature module is enabled. This means that an alarm of a specified feature
can be sent to the NM station.
The undo snmp-agent trap enable feature-name feature-name trap-name trap-name
command can be used to disable a trap function of a module.
Step 4 Configure trap function parameters based on the trap usage or inform usage selected during the
configuration of basic SNMPv2c functions.
If traps are used, follow the procedure described in Configuring trap parameters; if informs
are used, follow the procedure described in Configuring inform parameters.
Configuring trap parameters:
1. Run:
snmp-agent notify-filter-profile { excluded | included } profile-name oid-tree
The length of the queue storing trap messages to be sent to the destination host is set.
The queue length depends on the number of generated trap messages. If the router
frequently generates trap messages, a longer queue length can be set to prevent trap
messages from being lost.
4. Run:
snmp-agent trap life seconds
The timeout period for waiting for Inform ACK messages, number of inform
retransmissions, and allowable maximum number of informs to be acknowledged are set.
If the network is unstable, you need to specify the number of inform retransmissions and
allowable maximum number of informs to be acknowledged when you set a timeout period
for waiting for Inform ACK messages. By default, the timeout period for waiting for Inform
ACK messages is 15 seconds; the number of inform retransmissions is 3; the allowable
maximum number of informs waiting to be acknowledged is 39.
Setting the number of inform retransmissions to a value smaller than or equal to 10 is
recommended. Otherwise, device performance will be affected.
2. Run:
snmp-agent inform { timeout seconds | resend-times times } * address udp-
domain ip-address [ vpn-instance vpn-instance-name ] params securityname
security-string
The timeout period for waiting for Inform ACK messages from a specified NM station and
the number of inform retransmissions are set.
If the network is unstable, you need to specify the number of inform retransmissions and
allowable maximum number of informs to be acknowledged when you set a timeout period
for waiting for Inform ACK messages. By default, the timeout period for waiting for Inform
ACK messages is 15 seconds, and the number of inform retransmissions is 3.
Setting the number of inform retransmissions to a value smaller than or equal to 10 is
recommended. Otherwise, device performance will be affected.
3. Run:
snmp-agent notification-log enable
The aging time of alarm logs and maximum number of alarm logs allowed to be stored in
the log buffer are set.
By default, the aging time of alarm logs is 24 hours. If the aging time expires, alarms logs
will be automatically deleted.
By default, the log buffer can store a maximum of 500 alarm logs. If the number of alarm
logs in the log buffer exceeds 500, the device will delete the alarm logs from the earliest
one.
----End
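A sketch of the trap-parameter commands named above, assuming a hypothetical filter profile p1, subtree 1.3.6.1, NM station 10.1.1.1, and security name nms-read. The inform timeout and resend-times values shown are the defaults quoted in this section; the trap life value is an illustrative placeholder.

```
# Hypothetical values: profile p1, NM station 10.1.1.1, security name nms-read.
[HUAWEI] snmp-agent notify-filter-profile included p1 1.3.6.1
[HUAWEI] snmp-agent trap life 300
[HUAWEI] snmp-agent inform timeout 15 resend-times 3 address udp-domain 10.1.1.1 params securityname nms-read
[HUAWEI] snmp-agent notification-log enable
```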
Prerequisites
The configurations of basic SNMPv2c functions are complete.
Procedure
l Run the display snmp-agent community command to check the configured community
name.
l Run the display snmp-agent sys-info version command to check the enabled SNMP
version.
l Run the display acl acl-number command to check the rules in the specified ACL.
l Run the display snmp-agent mib-view command to check the MIB view.
l Run the display snmp-agent sys-info contact command to check the equipment
administrator's contact information.
l Run the display snmp-agent sys-info location command to check the location of the
device.
l Run the display current-configuration | include max-size command to check the
allowable maximum size of an SNMP packet.
l Run the display current-configuration | include trap command to check trap
configurations.
l Run the display snmp-agent target-host command to check information about the target
host.
l Run the display snmp-agent inform [ address udp-domain ip-address [ vpn-instance
vpn-instance-name ] params securityname security-string ] command to check inform
parameters and device statistics with the NM station being specified or not.
l Run the display snmp-agent notification-log info command to check alarm logs stored
in the log buffer.
l Run the display snmp-agent extend error-code status command to check whether the
SNMP extended error code feature is enabled.
----End
Example
When the configuration is complete, run the display snmp-agent community command. You
can view the configured community name.
<HUAWEI> display snmp-agent community
Community name:%$%$"b>4*x#Vg&|Sr"PmUryU,A8/%$%$
Group name:%$%$"b>4*x#Vg&|Sr"PmUryU,A8/%$%$
Storage-type: nonVolatile
Community name:%$%$(;FP5lytUA3nc-QSq111,ri`%$%$
Group name:%$%$(;FP5lytUA3nc-QSq111,ri`%$%$
Storage-type: nonVolatile
Run the display snmp-agent sys-info version command. You can view the SNMP version
running on the agent.
<HUAWEI> display snmp-agent sys-info version
SNMP version running in the system:
SNMPv2c SNMPv3
Run the display acl acl-number command. You can view the rules in the specified ACL.
<HUAWEI> display acl 2000
Basic ACL 2000, 1 rule
Acl's step is 5
rule 5 permit source 1.1.1.1 0 (0 times matched)
Run the display snmp-agent mib-view command. You can view the MIB view.
<HUAWEI> display snmp-agent mib-view
View name:ViewDefault
MIB Subtree:internet
Subtree mask:
Storage-type: nonVolatile
View Type:included
View status:active
View name:ViewDefault
MIB Subtree:snmpUsmMIB
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
View name:ViewDefault
MIB Subtree:snmpVacmMIB
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
View name:ViewDefault
MIB Subtree:snmpModules.18
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
Total number is 1
Run the display snmp-agent sys-info contact command. You can view the equipment
administrator's contact information.
<HUAWEI> display snmp-agent sys-info contact
The contact person for this managed node:
R&D Beijing, Huawei Technologies co.,Ltd.
Run the display snmp-agent sys-info location command. You can view the location of the
device.
<HUAWEI> display snmp-agent sys-info location
The physical location of this node:
Beijing China
Run the display current-configuration | include max-size command. You can view the
allowable maximum size of an SNMP packet.
<HUAWEI> display current-configuration | include max-size
snmp-agent packet max-size 1800
Run the display current-configuration | include trap command. You can view trap
configurations.
<HUAWEI> display current-configuration | include trap
snmp-agent trap enable standard
Run the display snmp-agent extend error-code status command. You can view whether the
SNMP extended error code feature is enabled.
<HUAWEI> display snmp-agent extend error-code status
Extend error-code status:enabled
Run the display snmp-agent target-host command. You can view information about the target
host.
<HUAWEI> display snmp-agent target-host
Target-host NO. 1
-----------------------------------------------------------
IP-address : 2.2.2.2
VPN instance : -
Security name : abc
Port : 23
Type : inform
Version : v2c
Level : No authentication and privacy
NMS type : NMS
With ext-vb: : Yes
-----------------------------------------------------------
Target-host NO. 2
-----------------------------------------------------------
IP-address : 1.1.1.1
VPN instance : -
Security name : aaa
Port : 22
Type : trap
Version : v2c
Level : No authentication and privacy
NMS type : HW NMS
With ext-vb: : No
-----------------------------------------------------------
Run the display snmp-agent inform command. You can view the configurations of inform
sending.
<HUAWEI> display snmp-agent inform
Global config: resend-times 3, timeout 15s, pending 39
Global status: current notification count 0
Target-host ID: VPN instance/IP-Address/Security name
a/1.1.1.1/public:
Config: resend-times 3, timeout 15s
Status: retries 0, pending 0, sent 0, dropped 0, failed 0, confirmed 0
Run the display snmp-agent notification-log info command. You can view alarm logs stored
in the log buffer.
<HUAWEI> display snmp-agent notification-log info
Notification log information :
Notifications Admin Status: enable
GlobalNotificationsLogged: 0
GlobalNotificationsBumped: 0
GlobalNotificationsLimit: 500
GlobalNotificationsAgeout: 24
Total number of notification log: 0
After basic SNMP functions are configured as described below, the NM station can
manage the device in these manners. For details on how to configure finer management, such
as accurate access control or alarm module specification, see the follow-up configuration
procedures.
Applicable Environment
SNMP needs to be deployed in a network to allow the NM station to manage network devices.
Assume that your network has strict security requirements: only authorized administrators can
manage network devices, and the security and accuracy of transmitted network data must be
ensured (for example, when data between the NM station and managed devices is transmitted
over a public network). In this case, SNMPv3 can be deployed. The authentication and
encryption functions provided by SNMPv3 ensure secure data transmission and normal
communication between the NM station and managed devices.
Pre-configuration Tasks
Before configuring a device to communicate with an NM station by running SNMPv3, complete
the following task:
l Configuring a routing protocol to ensure that the router and NM station are routable
Data Preparation
Before configuring a device to communicate with an NM station by running SNMPv3, you need
the following data.
No. Data
1 SNMP version, user name and user group name, address of the alarm destination host,
administrator's contact information and location, and maximum SNMP packet size
Context
Steps 4, 5, and 6 are mandatory for the configuration of basic SNMP functions. After the
configurations, basic SNMP communication can be conducted between the NM station and
managed device.
Procedure
Step 1 Run:
system-view
By default, the SNMP agent function is disabled. Running any command that contains the
keyword snmp-agent enables the SNMP agent function, so a separate enabling step is optional.
Step 4 Run:
snmp-agent group v3 group-name [ authentication | privacy ]
If the network or network devices are in an environment lacking security (for example, the
network is vulnerable to attacks), authentication or privacy can be configured in the command
to enable data authentication or encryption.
Step 5 Run:
snmp-agent usm-user v3 user-name group-name [ authentication-mode { md5 | sha }
password ] [ privacy-mode des56 password ] [ acl acl-number ]
l To configure a destination IPv6 address for the alarms and error codes sent from the device,
run:
snmp-agent target-host trap ipv6 address udp-domain ip-address [ udp-port
port-number ] params securityname security-string [ v3 ] [ private-netmanager ]
The maximum size of an SNMP packet that the device can receive or send is set.
By default, the maximum size of an SNMP packet that the device can receive or send is 12000
bytes.
After the maximum size is set, the device discards any SNMP packet that is larger than the
set size. The maximum SNMP packet size set on the device must not exceed the size of a
packet that the NM station can process; otherwise, the NM station cannot process the SNMP
packets sent from the device.
----End
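The mandatory SNMPv3 steps above can be sketched as follows. The group name g-admin, user name u-admin, passwords, and NM station address 10.1.1.1 are hypothetical; privacy is specified on the group, so the user must carry both authentication and privacy passwords.

```
# Hypothetical values: group g-admin, user u-admin, NM station 10.1.1.1.
<HUAWEI> system-view
[HUAWEI] snmp-agent sys-info version v3
[HUAWEI] snmp-agent group v3 g-admin privacy
[HUAWEI] snmp-agent usm-user v3 u-admin g-admin authentication-mode sha Auth@1234 privacy-mode des56 Priv@1234
[HUAWEI] snmp-agent target-host trap address udp-domain 10.1.1.1 params securityname u-admin v3 privacy
```

Run the display snmp-agent usm-user command afterwards to confirm the user configuration.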
Follow-up Procedure
After the configurations are complete, basic communication can be conducted between the NM
station and managed device.
l Access control allows any NM station in the configured SNMPv3 user group to monitor
and manage all the objects on the managed device.
l The managed device sends alarms generated by the modules that are open by default to the
NM station. The modules include FM, NQA, DATASYNC, PM, SINDEX, L2IF
(hwMacUsageRaisingThreshold and hwMacUsageFallingThreshold), HGMP, RRPP, FR,
HDLC, RMON, MP, VRRP, FIB, SNMP (coldStart and warmStart), and DHCPSNP.
If finer device management is required, follow directions below to configure the managed
device:
l To allow a specified NM station in an SNMPv3 user group (for example, an NM station
with a specified IP address) to manage specified objects of the device, follow the procedure
described in Controlling the NM Station's Access to the Device.
l To allow a specified module on the managed device to report alarms to the NM station,
follow the procedure described in Configuring the Trap Function.
l If the NM station and managed device are both Huawei products, follow the procedure
described in Enabling the SNMP Extended Error Code Function to allow the device to
send more types of error codes. This allows more specific error identification and facilitates
your fault location and rectification.
Context
If a device is managed by multiple NM stations that are in the same SNMPv3 user group, note
the following points:
l If all the NM stations need to have rights to access the objects in the Viewdefault view
(1.3.6.1), skip the following steps.
l If some of the NM stations need to have rights to access the objects in the Viewdefault view
(1.3.6.1), skip Step 5.
l If all the NM stations need to manage specified objects on the device, skip Step 2, Step 3,
and Step 4.
l If some of the NM stations need to manage specified objects on the device, perform all the
following steps.
Procedure
Step 1 Run:
system-view
Step 2 Run:
acl acl-number
A basic ACL is created to filter the NM station users that can manage the device.
Step 3 Run:
rule [ rule-id ] { deny | permit } source { source-ip-address source-wildcard |
any }
Step 4 Run:
quit
Step 5 Run:
snmp-agent mib-view { excluded | included } view-name oid-tree
By default, an NM station has rights to access the objects in the Viewdefault view (1.3.6.1).
l If a few MIB objects on the device, or some objects in the current MIB view, do not need,
or no longer need, to be managed by the NM station, excluded needs to be specified in the
command to exclude these MIB objects.
l If a few MIB objects on the device or some objects in the current MIB view need to be
managed by the NM station, included needs to be specified in the command to include these
MIB objects.
Step 6 Run:
snmp-agent group v3 group-name [ authentication | privacy ] [ read-view read-view
| write-view write-view | notify-view notify-view ]* [ acl acl-number ]
The read and write permissions are configured for the user group.
l read-view needs to be configured in the command if the NM station administrator needs the
read permission in the specified view in some cases. For example, a low-level administrator
needs to read certain data. write-view needs to be configured in the command if the NM
station administrator needs the read and write permissions in the specified view in some
cases. For example, a high-level administrator needs to read and write certain data.
l notify-view needs to be configured in the command if you want to filter out irrelevant alarms
and configure the managed device to send only the alarms of specified MIB objects to the
NM station. If the parameter is configured, only the alarms of the MIB objects specified by
notify-view will be sent to the NM station. To make the filtering policy take effect, you also
need to configure notify-filter-profile in the snmp-agent target-host trap command when
configuring the NM station.
----End
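These steps can be sketched with hypothetical values (ACL 2001, NM station 10.1.1.1, view v-mib2 covering subtree 1.3.6.1.2.1, group g-admin); the group is granted read and write rights in the view, restricted by the ACL.

```
# Hypothetical values: ACL 2001, NM station 10.1.1.1, view v-mib2, group g-admin.
[HUAWEI] acl 2001
[HUAWEI-acl-basic-2001] rule 5 permit source 10.1.1.1 0
[HUAWEI-acl-basic-2001] quit
[HUAWEI] snmp-agent mib-view included v-mib2 1.3.6.1.2.1
[HUAWEI] snmp-agent group v3 g-admin privacy read-view v-mib2 write-view v-mib2 acl 2001
```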
Follow-up Procedure
After the access rights are configured, especially after the IP address of the NM station is
specified, if the IP address changes (for example, the NM station changes its location, or IP
addresses are reallocated due to network adjustment), you need to change the IP address of the
NM station in the ACL. Otherwise, the NM station cannot access the device.
Procedure
Step 1 Run:
system-view
----End
Procedure
Step 1 Run:
system-view
NOTE
If the snmp-agent trap enable command is run to enable the trap functions of all modules, note the
following points:
l To disable the trap functions of all modules, you need to run the snmp-agent trap disable command.
l To restore the trap functions of all modules to the default status, you need to run the undo snmp-agent
trap enable or undo snmp-agent trap disable command.
l To disable one trap function of a module, you need to run the undo snmp-agent trap enable
feature-name command.
Step 3 Run:
snmp-agent trap enable feature-name feature-name trap-name trap-name
A trap function of a feature module is enabled. This means that an alarm of a specified feature
can be sent to the NM station.
The undo snmp-agent trap enable feature-name command can be used to disable a trap
function of a module.
Step 4 Run:
snmp-agent notify-filter-profile { excluded | included } profile-name oid-tree
The length of the queue storing trap messages to be sent to the destination host is set.
The queue length depends on the number of generated trap messages. If the router frequently
generates trap messages, a longer queue length can be set to prevent trap messages from being
lost.
Step 7 Run:
snmp-agent trap life seconds
----End
Prerequisites
The configurations of basic SNMPv3 functions are complete.
Procedure
l Run the display snmp-agent usm-user [ engineid engineid | group group-name |
username user-name ]* command to check user information.
l Run the display snmp-agent sys-info version command to check the enabled SNMP
version.
l Run the display acl acl-number command to check the rules in the specified ACL.
l Run the display snmp-agent mib-view command to check the MIB view.
l Run the display snmp-agent sys-info contact command to check the equipment
administrator's contact information.
l Run the display snmp-agent sys-info location command to check the location of the
device.
l Run the display current-configuration | include max-size command to check the
allowable maximum size of an SNMP packet.
l Run the display current-configuration | include trap command to check trap
configurations.
l Run the display snmp-agent extend error-code status command to check whether the
SNMP extended error code feature is enabled.
----End
Example
Run the display snmp-agent usm-user command. You can view SNMP user information.
<HUAWEI> display snmp-agent usm-user
User name: u1
Engine ID: 000007DB7F00000100000772 active
Run the display snmp-agent sys-info version command. You can view the SNMP version
running on the agent.
<HUAWEI> display snmp-agent sys-info version
SNMP version running in the system:
SNMPv3
Run the display acl acl-number command. You can view the rules in the specified ACL.
<HUAWEI> display acl 2000
Basic ACL 2000, 1 rule
Acl's step is 5
rule 5 permit source 1.1.1.1 0 (0 times matched)
Run the display snmp-agent mib-view command. You can view the MIB view.
<HUAWEI> display snmp-agent mib-view
View name:ViewDefault
MIB Subtree:internet
Subtree mask:
Storage-type: nonVolatile
View Type:included
View status:active
View name:ViewDefault
MIB Subtree:snmpUsmMIB
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
View name:ViewDefault
MIB Subtree:snmpVacmMIB
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
View name:ViewDefault
MIB Subtree:snmpModules.18
Subtree mask:
Storage-type: nonVolatile
View Type:excluded
View status:active
Total number is 1
Run the display snmp-agent sys-info contact command. You can view the equipment
administrator's contact information.
<HUAWEI> display snmp-agent sys-info contact
The contact person for this managed node:
R&D Beijing, Huawei Technologies co.,Ltd.
Run the display snmp-agent sys-info location command. You can view the location of the
device.
<HUAWEI> display snmp-agent sys-info location
The physical location of this node:
Beijing China
Run the display current-configuration | include max-size command. You can view the
allowable maximum size of an SNMP packet.
<HUAWEI> display current-configuration | include max-size
snmp-agent packet max-size 1800
Run the display current-configuration | include trap command. You can view trap
configurations.
<HUAWEI> display current-configuration | include trap
snmp-agent trap enable standard
Run the display snmp-agent extend error-code status command. You can view whether the
SNMP extended error code feature is enabled.
<HUAWEI> display snmp-agent extend error-code status
Extend error-code status:enabled
Context
NOTE
In this section, the interface numbers and link type of the NE40E-X8 are used, which may be
different from those in real-world situations.
Networking Requirements
As shown in Figure 2-4, two NM stations (NMS1 and NMS2) and the router are connected
across a public network. According to the network planning, NMS2 can manage every MIB
object except HGMP on the router, and NMS1 does not manage the router.
On the router, only the modules that are enabled by default are allowed to send alarms to NMS2.
This prevents an excess of unwanted alarms from being sent to NMS2, because excessive
alarms make fault location difficult.
Equipment administrator's contact information needs to be configured on the router. This allows
the NMS administrator to contact the equipment administrator quickly if a fault occurs.
Figure 2-4 Networking diagram for configuring a device to communicate with an NM station
by using SNMPv1
NMS1 (1.1.1.1/24) and NMS2 (1.1.1.2/24) connect across an IP network to GE1/0/0
(1.1.2.1/24) on the Router.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable the SNMP agent.
2. Configure the router to run SNMPv1.
3. Configure an ACL to allow NMS2 to manage every MIB object except HGMP on the
router.
4. Configure the trap function to allow the router to send alarms to NMS2.
Data Preparation
To complete the configuration, you need the following data:
l SNMP version
l Community name
l ACL number
l IP address of the NM station
l Equipment administrator's contact information
Procedure
Step 1 Configure available routes between the router and the NM stations. Details for the configuration
procedure are not provided here.
# Configure an ACL that allows NMS2, but not NMS1, to manage the router.
[HUAWEI] acl 2001
[HUAWEI-acl-basic-2001] rule 5 permit source 1.1.1.2 0.0.0.0
[HUAWEI-acl-basic-2001] rule 6 deny source 1.1.1.1 0.0.0.0
[HUAWEI-acl-basic-2001] quit
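The all-zeros wildcard 0.0.0.0 in these rules requires an exact match on every bit of the source address. As an illustration only (not device code), a basic ACL lookup can be sketched in Python, where rules are evaluated in ascending rule-number order and the first matching rule wins; the default action for unmatched sources is an assumption of this sketch:

```python
import ipaddress

def acl_match(rules, src, default="deny"):
    """Evaluate a basic ACL. A wildcard bit of 1 means 'ignore this bit'
    when comparing. The default action for unmatched sources is an
    assumption here, not necessarily the device behavior."""
    src_i = int(ipaddress.IPv4Address(src))
    for _, action, base, wildcard in sorted(rules):   # ascending rule number
        base_i = int(ipaddress.IPv4Address(base))
        care = ~int(ipaddress.IPv4Address(wildcard)) & 0xFFFFFFFF
        if (src_i & care) == (base_i & care):
            return action
    return default

rules = [(5, "permit", "1.1.1.2", "0.0.0.0"),   # NMS2
         (6, "deny",   "1.1.1.1", "0.0.0.0")]   # NMS1
print(acl_match(rules, "1.1.1.2"), acl_match(rules, "1.1.1.1"))  # permit deny
```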
# Configure a MIB view and allow NMS2 to manage every MIB object except HGMP on the
router.
[HUAWEI] snmp-agent mib-view excluded allexthgmp 1.3.6.1.4.1.2011.6.7
# Configure a community name to allow NMS2 to manage the objects in the MIB view.
[HUAWEI] snmp-agent community write adminnms2 mib-view allexthgmp acl 2001
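Conceptually, the allexthgmp view includes the whole MIB and excludes the subtree rooted at 1.3.6.1.4.1.2011.6.7 (HGMP): an OID is inaccessible exactly when that subtree OID is a prefix of it. A minimal sketch of this check (illustrative only; it ignores subtree masks such as FF80):

```python
HGMP = (1, 3, 6, 1, 4, 1, 2011, 6, 7)

def view_allows(oid):
    """allexthgmp view: every OID is included except those under the
    HGMP subtree, i.e. those that have the subtree OID as a prefix."""
    return oid[:len(HGMP)] != HGMP

print(view_allows((1, 3, 6, 1, 2, 1, 1, 5, 0)))           # sysName -> True
print(view_allows((1, 3, 6, 1, 4, 1, 2011, 6, 7, 1, 0)))  # under HGMP -> False
```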
For details on how to configure NMS2, see the relevant NMS configuration guide.
After the configurations are complete, run the following commands to verify that the
configurations have taken effect.
# When an alarm is generated, run the display trapbuffer command to view the details.
<HUAWEI> display trapbuffer
Trapping buffer configuration and contents : enabled
Allowed max buffer size : 1024
Actual buffer size : 256
Channel number : 3 , Channel name : trapbuffer
Dropped messages : 0
Overwritten messages : 0
Current messages : 98
----End
Configuration Files
Configuration file of the router
#
snmp-agent trap type base-trap
#
acl number 2001
rule 5 permit source 1.1.1.2 0
rule 6 deny source 1.1.1.1 0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 1.1.2.1 255.255.255.0
#
interface loopback0
ip address 1.1.3.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.2.0 0.0.0.255
network 1.1.3.1 0.0.0.0
#
snmp-agent
snmp-agent local-engineid 000007DB7FFFFFFF00001AA7
snmp-agent community write adminnms2 mib-view allexthgmp acl 2001
snmp-agent sys-info contact call Operator at 010-12345678
snmp-agent sys-info version v1 v3
snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
1.1.3.1
Networking Requirements
As shown in Figure 2-5, two NM stations (NMS1 and NMS2) and the router are connected
across a public network. According to the network planning, NMS2 can manage every MIB
object except HGMP on the router, and NMS1 does not manage the router.
On the router, only the modules that are enabled by default are allowed to send alarms to NMS2.
This prevents an excess of unwanted alarms from being sent to NMS2. Excessive alarms can
make fault location difficult. Informs need to be used to ensure that alarms are received by
NMS2, because alarms sent by the router have to travel across the public network to reach NMS2.
The equipment administrator's contact information needs to be configured on the router. This
allows the NMS administrator to contact the equipment administrator quickly if a fault occurs.
Figure 2-5 Networking diagram for configuring a device to communicate with an NM station
by using SNMPv2c
NMS1 (1.1.1.1/24) and NMS2 (1.1.1.2/24) connect across an IP network to GE1/0/0
(1.1.2.1/24) on the Router.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l SNMP version
l Community name
l ACL number
l IP address of the NM station
l Equipment administrator's contact information
Procedure
Step 1 Configure available routes between the router and the NM stations. Details for the configuration
procedure are not provided here.
# Configure a community name to allow NMS2 to manage the objects in the MIB view.
[HUAWEI] snmp-agent community write adminnms2 mib-view allexthgmp acl 2001
Subtree mask:FF80(Hex)
Storage-type: nonVolatile
View Type:excluded
View status:active
# When an alarm is generated, run the display trapbuffer command to view the details.
<HUAWEI> display trapbuffer
Trapping buffer configuration and contents : enabled
Allowed max buffer size : 1024
Actual buffer size : 256
Channel number : 3 , Channel name : trapbuffer
Dropped messages : 0
Overwritten messages : 0
Current messages : 98
----End
Configuration Files
Configuration file of the router
#
snmp-agent trap type base-trap
#
acl number 2001
rule 5 permit source 1.1.1.2 0
rule 6 deny source 1.1.1.1 0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 1.1.2.1 255.255.255.0
#
ospf 1
area 0.0.0.0
network 1.1.2.0 0.0.0.255
#
snmp-agent
snmp-agent local-engineid 000007DB7FFFFFFF00001AA7
snmp-agent community write adminnms2 mib-view allexthgmp acl 2001
snmp-agent sys-info contact call Operator at 010-12345678
snmp-agent sys-info version v2c v3
snmp-agent target-host inform address udp-domain 1.1.1.2 params securityname
1.1.2.1
Networking Requirements
As shown in Figure 2-6, two NM stations (NMS1 and NMS2) and the router are connected
across a public network. According to the network planning, NMS2 can manage every MIB
object except HGMP on the router, and NMS1 does not manage the router.
On the router, only the modules that are enabled by default are allowed to send alarms to NMS2.
This prevents an excess of unwanted alarms from being sent to NMS2. Excessive alarms can
make fault location difficult.
The data transmitted between NMS2 and the router needs to be encrypted and the NMS
administrator needs to be authenticated because the data has to travel across the public network.
The equipment administrator's contact information needs to be configured on the router. This
allows the NMS administrator to contact the equipment administrator quickly if a fault occurs.
Figure 2-6 Networking diagram for configuring a device to communicate with an NM station
by using SNMPv3
NMS1 (1.1.1.1/24) and NMS2 (1.1.1.2/24) connect across an IP network to GE1/0/0
(1.1.2.1/24) on the Router.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l SNMP version
l User group name
l User name and password
l Authentication and encryption algorithms
l ACL number
l IP address of the NM station
l Equipment administrator's contact information
Procedure
Step 1 Configure available routes between the router and the NM stations. Details for the configuration
procedure are not provided here.
Step 2 Enable the SNMP agent.
<HUAWEI> system-view
[HUAWEI] snmp-agent
# Configure an SNMPv3 user group and add a user to the group, and configure authentication
for the NMS administrator and encryption for the data transmitted between the router and NMS2.
[HUAWEI] snmp-agent usm-user v3 testuser testgroup authentication-mode md5 87654321
privacy-mode des56 87654321
[HUAWEI] snmp-agent group v3 testgroup privacy write-view testview notify-view
VPN instance : -
Security name : testuser
Port : 162
Type : trap
Version : v1
Level : No authentication and privacy
NMS type : NMS
With ext-vb: : No
-----------------------------------------------------------
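For background: SNMPv3 USM does not use the configured password directly. It derives a localized key from the password and the authoritative engine ID, as specified by the RFC 3414 password-to-key algorithm. A sketch of the MD5 variant, using the password and engine ID from this example purely for illustration:

```python
import hashlib

def password_to_key_md5(password: bytes, engine_id: bytes) -> bytes:
    """RFC 3414 password-to-key (MD5 variant): hash 1 MB of the repeated
    password, then localize the digest with the authoritative engine ID."""
    stretched = (password * (1048576 // len(password) + 1))[:1048576]
    ku = hashlib.md5(stretched).digest()          # intermediate key Ku
    return hashlib.md5(ku + engine_id + ku).digest()  # localized key Kul

# password and engine ID taken from this configuration example
key = password_to_key_md5(b"87654321", bytes.fromhex("000007DB7FFFFFFF000004A7"))
print(len(key))  # 16-byte localized authentication key
```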
# When an alarm is generated, run the display trapbuffer command to view the details.
<HUAWEI> display trapbuffer
Trapping buffer configuration and contents : enabled
Allowed max buffer size : 1024
Actual buffer size : 256
Channel number : 3 , Channel name : trapbuffer
Dropped messages : 0
Overwritten messages : 0
Current messages : 98
----End
Configuration Files
Configuration file of the router
#
snmp-agent trap type base-trap
#
acl number 2001
rule 5 permit source 1.1.1.2 0
rule 6 deny source 1.1.1.1 0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 1.1.2.1 255.255.255.0
#
interface loopback0
ip address 1.1.3.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.2.0 0.0.0.255
network 1.1.3.1 0.0.0.0
#
snmp-agent
snmp-agent local-engineid 000007DB7FFFFFFF000004A7
snmp-agent sys-info contact call Operator at 010-12345678
snmp-agent sys-info version v3
snmp-agent group v3 testgroup write-view testview notify-view testview acl 2001
snmp-agent group v3 testgroup privacy
snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
testuser
snmp-agent mib-view included testview iso
snmp-agent usm-user v3 testuser testgroup authentication-mode md5 `,+VK;'MYJF=,/
<97^aP^1!! privacy-mode des56 `,+VK;'MYJF=,/<97^aP^1!!
snmp-agent trap source loopback0
snmp-agent trap queue-size 200
#
return
This chapter describes how to monitor the Ethernet interface through Remote Network
Monitoring (RMON) and Remote Network Monitoring Version 2 (RMON2).
RMON
RMON is implemented based on the Simple Network Management Protocol (SNMP)
architecture, and is compatible with the existing SNMP framework. There are two concepts
involved in RMON, namely, the Network Management Workstation (NM Station) and the agent.
An RMON agent collects statistics on various kinds of traffic in a network, such as the number
of packets on a network segment within a given period and the number of correct packets sent to a host.
Compared with SNMP, RMON monitors remote network devices more efficiently and actively.
It provides an efficient solution to monitor the running of sub-networks, which reduces the
communication traffic between the NM Station and the agent. Large-sized networks can thus be
managed in a simple and effective manner.
Currently, the NE80E/40E implements the monitoring and statistics collection function only on
the Ethernet interfaces of network devices.
RMON2
RMON2 is one of the RMON MIB standards. It supplements RMON by adding some new
groups. RMON monitors traffic only at the MAC layer, whereas RMON2 can monitor traffic
at the MAC layer and above it (here, the MAC layer refers to the Ethernet layer). RMON and
RMON2 are both used to monitor Ethernet links.
RMON2 can decode data packets of Layer 3 through Layer 7 in the OSI model. Therefore, RMON2:
l Monitors traffic based on network layer protocols and addresses, including IP.
An agent can learn its connected external LAN network segments and monitor the traffic
entering the LAN through the router.
l Records the incoming and outgoing traffic of a specific application, because it can decode
and monitor the traffic of applications such as email, FTP, and WWW.
As defined in RFC 2021, RMON2 contains the following MIB groups: protocolDir,
protocolDist, addressMap, nlHost, nlMatrix, alHost, alMatrix, usrHistory, probeConfig, and
rmonConformance.
Features of RMON
The NE80E/40E implements RMON by embedding agent modules into network devices to form
a complete system with other modules. The RMON NM Station is fully compatible with
the SNMP NM Station, so administrators can operate it without additional training.
RMON in the NE80E/40E supports four groups, namely, statistics, history, alarm, and event, as
defined in RFC 2819, and a Performance-MIB defined by Huawei. The following describes each
group.
l Statistics group
The statistics group collects the basic statistics of each monitored sub-network. The
statistics include data flows on a network segment, the distribution of various packets, error
frames, and collisions.
The statistics group has one table: ethernetStatsTable.
NOTE
The RMON statistics result is not consistent with the output of the display interface command.
Although data is collected from the bottom layer in both cases, the RMON information is more
comprehensive.
l History group
A history group periodically collects the network state statistics and stores them for future
reference. The history group has the following tables:
– historyControlTable: is used to set the control information, such as sampling intervals.
– etherHistoryTable: provides network administrators with other history statistics, such
as the traffic on a network segment, error packets, broadcast packets, utilization, and
collisions.
Each entry in the historyControlTable corresponds to a maximum of 10 history records
in the etherHistoryTable. When the number of records in the etherHistoryTable exceeds
this threshold, the oldest records are overwritten in a circular manner.
l Alarm group
An alarm group allows predefining a set of thresholds for alarm variables (any object in
the local MIB). A monitor records logs or sends trap messages to the NM Station when the
sampled data in a certain direction crosses a threshold.
As defined in RFC 2819, the alarm function has a hysteresis mechanism to limit the
generation of alarms. With this mechanism, an alarm event is generated when the
sampled data crosses the threshold in one direction; no further events are generated until
the sampled data crosses the threshold in the opposite direction.
The NE80E/40E does not apply this mechanism, because it can suppress alarms for
a long period. On the NE80E/40E, the alarms are re-generated when the sampled value
returns to the normal threshold.
The alarm group contains one table: alarmTable.
l Event group
An event group stores all the events generated by the RMON agent in a table. It records
logs or sends trap messages to the NM Station when an event occurs.
The event group implements the output of three event types: log, trap, and log-trap. Each
event entry corresponds to a maximum of 10 logs. When the number of logs exceeds this
threshold, the oldest logs are overwritten in a circular manner.
The event group has two tables: eventTable and logTable.
l Performance-MIB
The RMON prialarm group is an enhancement of the alarmTable defined in RFC 2819.
Compared with the alarmTable, the RMON prialarm group supports setting alarm
objects through expressions and limiting the time spans of alarm entries.
The RMON Performance-MIB has one table: prialarmTable.
In the NE80E/40E, to save system resources, each entry is given a specific time span, which
indicates how long the entry can remain in the invalid state. The entry is deleted
when the time span counts down to 0.
Table 3-1 shows the capacity of various tables and the maximum time span of each table.
Table            Capacity    Maximum Time Span (s)
alarmTable       60          6000
eventTable       60          600
logTable         600         -
prialarmTable    50          6000
NOTE
logTable does not have a time span. Each event entry can store a maximum of 10 logs; newer
logs supersede the older ones in a circular manner.
When an interface board or an interface card is removed, the corresponding entries in the
ethernetStatsTable and historyControlTable become invalid. If the time spans of tables are
respectively set to 1200s, the entries in the tables are deleted when the time spans go down
to 0.
If an interface is added before its corresponding entries are deleted from the table, these
entries can take effect again.
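The RFC 2819 hysteresis mechanism described for the alarm group above can be sketched as follows. This is a simplified model for illustration, not the device implementation (recall that the NE80E/40E itself does not apply this mechanism):

```python
def alarm_events(samples, rising, falling):
    """RFC 2819-style hysteresis: after a rising alarm fires, no further
    rising alarm is generated until the value first crosses the falling
    threshold (and vice versa)."""
    events = []
    armed = {"rising": True, "falling": True}
    for v in samples:
        if v >= rising and armed["rising"]:
            events.append(("rising", v))
            armed["rising"], armed["falling"] = False, True
        elif v <= falling and armed["falling"]:
            events.append(("falling", v))
            armed["falling"], armed["rising"] = False, True
    return events

# the value oscillates above the rising threshold, but only one rising alarm
# fires until it drops below the falling threshold and re-arms the mechanism
print(alarm_events([600, 550, 700, 50, 800], rising=500, falling=100))
# -> [('rising', 600), ('falling', 50), ('rising', 800)]
```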
Features of RMON2
Currently, the NE80E/40E supports only two MIBs in RMON2: protocolDir and nlHost.
nlHost supports only the network layer host group, not the application layer host group. That
is, application layer host control and the alHostTable are not implemented in the
hostControlTable. Therefore, only IP can be set in the protocol directory group; other protocols
are invalid.
Applicable Environment
To monitor network status and collect traffic statistics on a network segment, you can configure
RMON.
Enabling the RMON function has no special requirements. You can enable it in
advance, or configure it when you suspect that traffic on the sub-network where an interface
resides is abnormal. Configure RMON as required by the actual situation.
It is recommended that you configure the statistics table in advance, configure two history
control policies on the interface where traffic is abnormal, configure alarms for one or more
suspicious entries, set the high and low thresholds, and then view the alarm information.
NOTE
RMON only records traffic statistics and information about abnormalities; it cannot prevent
them. To clear abnormalities, you need to adopt other management measures.
Pre-configuration Tasks
Before configuring RMON, complete the following tasks:
Data Preparation
To configure RMON, you need the following data.
No. Data
Context
Do as follows on the router on which traffic statistics should be collected:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router on which traffic statistics should be collected:
Procedure
Step 1 Run:
system-view
----End
Context
As recommended by the RMON specifications, each monitored interface should be configured
with at least two history control entries: one sampled every 30 seconds and the other sampled
every 30 minutes.
The short sampling interval enables a monitor to detect sudden changes in traffic patterns,
whereas the long sampling interval is suitable when the interface status is relatively stable.
Currently, the NE80E/40E retains up to 10 of the latest records for each history control
entry.
NOTE
To reduce the effect on the performance of the system, the sampling interval of the history table should be
longer than 10 seconds, and the same port should not be configured with too many history control entries
and alarm entries.
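The 10-record limit behaves like a ring buffer: once an entry's buckets are full, each new sample overwrites the oldest one. For example:

```python
from collections import deque

# etherHistoryTable keeps at most 10 buckets per history control entry;
# when the buffer is full, the oldest sample is discarded for each new one
history = deque(maxlen=10)
for sample in range(1, 15):   # 14 sampling intervals
    history.append(sample)
print(list(history))          # only the 10 most recent samples remain: 5..14
```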
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router that is monitored:
The RMON event management module is responsible for adding events to the corresponding
rows in the eventTable and defining the methods of processing events:
l log: sending only logs
l log-trap: sending both logs and trap messages to the NM Station
l none: marking that no event occurs
l trap: sending trap messages to the NM Station
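These processing methods amount to a simple dispatch on the configured event type, sketched below (illustrative only, not the device implementation):

```python
def handle_event(event_type, message, log, traps):
    """Dispatch an RMON event according to its configured processing method."""
    if event_type in ("log", "log-trap"):
        log.append(message)       # record a log entry in the logTable
    if event_type in ("trap", "log-trap"):
        traps.append(message)     # send a trap message to the NM Station
    # "none" marks that no event processing occurs

log, traps = [], []
handle_event("log-trap", "rising threshold crossed", log, traps)
print(log, traps)  # the message appears in both the log and the trap queue
```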
Procedure
Step 1 Run:
system-view
----End
Context
The RMON alarm management is responsible for monitoring a specified alarm variable
(identified by OID) at a specified sampling interval. An alarm event occurs when the monitored
variable exceeds the defined threshold. Generally, the event is recorded in the log table, or
RMON sends a trap message to the NM Station.
If the events that correspond to the alarm upper limit and lower limit (event-entry1, event-
entry2) are not configured in the eventTable, an alarm is not generated even if the alarm condition
is satisfied. (At this time, the alarm record is in the undercreation state rather than in the VALID state.)
If an event corresponding to either the alarm upper limit or the alarm lower limit is configured,
an alarm is triggered once the alarm condition is satisfied. (At this time, the alarm record is in the VALID state.)
Procedure
Step 1 Run:
system-view
----End
Context
Based on the alarmTable in RFC 2819, the RMON prialarm management is enhanced with two
functions: setting the alarm object in the form of expressions and limiting the time to live (TTL)
value of a prialarm entry.
Compared with the alarmTable, the prialarmTable has several additional entries:
l Expression of alarm variables. It can be an arithmetic expression composed of the OIDs of
alarm variables (+, -, *, /, or brackets).
l Description of the prialarm entry in a character string.
l Prialarm state period, in seconds. It must be larger than the sampling interval.
l Two prialarm state types: Forever or Cycle. If Cycle is set, an alarm does not occur and the
entry is deleted after the specified prialarm state period.
If the events that correspond to the alarm upper limit and lower limit (event-entry1, event-
entry2) are not configured in the eventTable, an alarm does not occur even if the alarm conditions
are satisfied. (The alarm record is in the undercreation state rather than in the VALID state.)
If either the alarm upper limit event or the alarm lower limit event is configured, the alarm is
triggered once the conditions for an alarm are satisfied. (The alarm record is in the VALID state.)
Do as follows on the router that is monitored.
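A prialarm expression such as the broadcast+multicast sum used in this chapter's examples (.1.3.6.1.2.1.16.1.1.1.6.1+.1.3.6.1.2.1.16.1.1.1.7.1) is evaluated at each sampling interval; with delta sampling, the change between two successive samples is compared against the thresholds. A rough sketch with hypothetical sample values:

```python
def delta_prialarm(samples, rising, falling):
    """Evaluate a delta-type prialarm over successive samples of an
    expression value (e.g. broadcast + multicast packet counters)."""
    alarms = []
    for prev, cur in zip(samples, samples[1:]):
        delta = cur - prev            # change since the previous sample
        if delta >= rising:
            alarms.append(("rising", delta))
        elif delta <= falling:
            alarms.append(("falling", delta))
    return alarms

# counter sums sampled every 30 s; the jump of 1500 crosses the rising threshold
print(delta_prialarm([100, 200, 1700, 1800], rising=1000, falling=0))
# -> [('rising', 1500)]
```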
Procedure
Step 1 Run:
system-view
Step 2 Run:
rmon prialarm entry-number prialarm-formula description-string sampling-interval
{ absolute | changeratio | delta } rising-threshold threshold-value1 event-entry1
falling-threshold threshold-value2 event-entry2 entrytype { cycle entry-period |
forever } [ owner owner-name ]
----End
Prerequisites
The configurations of the RMON are complete.
Procedure
l Run the display rmon alarm [ entry-number ] command to view the RMON alarm
information.
l Run the display rmon event [ entry-number ] command to view the RMON events.
l Run the display rmon eventlog [ entry-number ] command to view the RMON event logs.
l Run the display rmon history [ ethernet interface-number | gigabitethernet interface-
number ] command to view the RMON history information.
l Run the display rmon prialarm [ entry-number ] command to view the information of the
RMON prialarmTable.
l Run the display rmon statistics [ ethernet interface-number | gigabitethernet interface-
number ] command to view the RMON statistics.
----End
Example
Run the display rmon alarm command. If information about the alarm table is displayed, it
means that the configuration succeeds.
<HUAWEI> display rmon alarm 1
Alarm table 1 owned by Test300 is VALID.
Samples absolute value : 1.3.6.1.2.1.16.1.1.1.6.1 <etherStatsBroadcastPkts.1>
Sampling interval : 30(sec)
Rising threshold : 500(linked with event 1)
Falling threshold : 100(linked with event 1)
When startup enables : risingOrFallingAlarm
Latest value : 1975
Run the display rmon event command. If information about the event table is displayed, it
means that the configuration succeeds.
<HUAWEI> display rmon event
Event table 1 owned by Test300 is VALID.
Description: null
Will cause log when triggered, last triggered at 0days 00h:24m:10s.34th.
Event table 2 owned by Test300 is VALID.
Description: forUseofPrialarm.
Will cause snmp-trap when triggered, last triggered at 0days 00h:26m:10s.73th.
Run the display rmon eventlog command. If information about the event logs is displayed, it
means that the configuration succeeds.
Run the display rmon history command to display the RMON history.
<HUAWEI> display rmon history
History control entry 1 owned by Test300 is VALID,
Samples interface : GigabitEthernet3/0/0<ifEntry.402653698>
Sampling interval : 30(sec) with 10 buckets max
Last Sampling time : 0days 00h:09m:43s
Latest sampled values :
octets :645 , packets :7
broadcast packets :7 , multicast packets :0
undersize packets :6 , oversize packets :0
fragments packets :0 , jabbers packets :0
CRC alignment errors :0 , collisions :0
Dropped packet: :0 , utilization :0
Run the display rmon prialarm command. If information about the extended alarm table is
displayed, it means that the configuration succeeds.
<HUAWEI> display rmon prialarm 1
Prialarm table 1 owned by Test300 is VALID.
Samples delta value : .1.3.6.1.2.1.16.1.1.1.6.1+.1.3.6.1.2.1.16.1.1.1.7.1
Sampling interval : 30(sec)
Rising threshold : 1000(linked with event 2)
Falling threshold : 0(linked with event 2)
When startup enables : risingOrFallingAlarm
This entry will exist : forever
Latest value : 16
Run the display rmon statistics command to display the RMON statistics.
<HUAWEI> display rmon statistics
Statistics entry 1 owned by Test300 is VALID.
Interface : GigabitEthernet3/0/0<ifEntry.402653698>
Received :
octets :142915224 , packets :1749151
broadcast packets :11603 , multicast packets:756252
undersize packets :0 , oversize packets :0
fragments packets :0 , jabbers packets :0
CRC alignment errors:0 , collisions :0
Dropped packet (insufficient resources):1795
Packets received according to length (octets):
64 :150183 , 65-127 :150183 , 128-255 :1383
256-511:3698 , 512-1023:0 , 1024-1518:0
Applicable Environment
By configuring RMON2, you can monitor the traffic on an Ethernet interface that connects to
the network, analyze the hosts the data on the interface comes from and goes to, and collect
statistics of the data passing through the interface from each host on the network.
Pre-configuration Tasks
Before configuring RMON2, complete the following task:
l Configuring parameters for Ethernet interfaces
Data Preparation
To configure RMON2, you need the following data.
No. Data
Context
Do as follows on the router that is monitored.
Procedure
Step 1 Run:
system-view
Step 2 Run:
rmon2 hlhostcontroltable index ctrl-index [ datasource interface { interface-type
interface-number } ] [ maxentry maxentry-value ] [ owner owner-name ] [ status
{ active | inactive } ]
When the hlHostControlStatus value is set to active, you cannot change the
hlHostControlDataSource and hlHostControlNlMaxDesiredEntries values.
When the physical status of the interface that corresponds to the hlHostControlDataSource is
Down and the hlHostControlStatus value is active, the state is switched to notinservice
automatically. The status displayed in the command output is "plug-out" while on the NM
Station, the status displayed is "notinservice". In this case, users can delete the entry but they
cannot change it. When the interface status turns to Up, the status of the hlHostControlTable
becomes active again.
----End
Context
Do as follows on the router that is monitored.
Procedure
Step 1 Run:
system-view
Step 2 Run:
rmon2 protocoldirtable protocoldirid protocol-id parameter parameter-value [ descr
description-string ] [ host { notsupported | supportedon | supportedoff } ]
[ owner owner-name ] [ status { active | inactive } ]
RMON2 supports traffic statistics of IP packets only on Ethernet interfaces. Because a
single protocol corresponds to one entry, this table currently has only one entry.
l When an entry is created or the entry status (protocolDirStatus) is set to active, both
parameter (equivalent to protocolDirDescr) and host (equivalent to
protocolDirHostConfig) must be set at the same time.
l When the protocolDirStatus is set to active, the value in the protocolDirDescr cannot be
changed.
– If the protocolDirHostConfig value is notsupported, it cannot be changed into other
values.
– If the value is not notsupported, it can be switched between supportedon and
supportedoff.
– When the protocolDirHostConfig value changes from supportedon to supportedoff, the
corresponding entry in the hlHostControlTable is deleted.
l When the protocolDirStatus is set to inactive, the corresponding entry in the hlHostTable is
deleted.
----End
Prerequisites
The configurations of the RMON2 are complete.
Procedure
Step 1 Run the display rmon2 protocoldirtable command to view the information about the
protocolDirTable.
Step 2 Run the display rmon2 hlhostcontroltable [ index ctrl-index ] command to view the
information about the hlHostControlTable.
Step 3 Run the display rmon2 nlhosttable [ hostcontrolindex ctrl-index ] [ timemark time-value ]
[ protocoldirlocalindex protocol-local-index ] [ hostaddress ip-address ] command to view
the information about the nlHostTable.
----End
Example
Run the display rmon2 protocoldirtable command. If information about the protocol directory
table is displayed, it means that the configuration succeeds.
<HUAWEI> display rmon2 protocoldirtable
Info: The protocol directory table changed at time : 3days 18h:59m:49s(32758966),
last time
protocolDirId : 8.0.0.0.1.0.0.8.0
protocolDirParameters : 2.0.0
protocolDirLocalIndex : 1
protocolDirDescr : aaa
protocolDirAddressMapConfig: notsupported
protocolDirHostConfig : supportedon
protocolDirMatrixConfig : notsupported
protocolDirOwner :
protocolDirStatus : active
Run the display rmon2 hlhostcontroltable command. If information about the host control
table is displayed, it means that the configuration succeeds.
<HUAWEI> display rmon2 hlhostcontroltable
Abbreviation:
index - hlhostcontrolindex
datasource - hlhostcontroldatasource
droppedfrm - hlhostcontrolnldroppedframes
inserts - hlhostcontrolnlinserts
Deletes - hlHostControlNlDeletes
maxentries - hlhostcontrolnlmaxdesiredentries
owner - hlhostcontrolowner
status - hlhostcontrolstatus
index datasource droppedfrm inserts Deletes maxentries owner status
123 GigabitEthernet2/2/0 0 19 0 100 China active
Run the display rmon2 nlhosttable command. If information about the host table is displayed,
it means that the configuration succeeds.
<HUAWEI> display rmon2 nlhosttable hostcontrolindex 123 timemark 1000 hostaddress
10.110.99.2
Abbreviation:
HIdx - hlHostControlIndex
PIdx - ProtocolDirLocalIndex
Addr - nlHostAddress
InPkts - nlHostInPkts
OutPkts - nlHostOutPkts
InOctes - nlHostInOctets
OutOctes - nlHostOutOctets
OutMac - nlHostOutMacNonUnicastPkts
ChgTm - nlHostTimeMark
CrtTm - nlHostCreateTime
HIdx PIdx Addr InPkts OutPkts InOctes OutOctes OutMac ChgTm CrtTm
123 1 10.110.99.2 0 78 0 10046 78 81489 40859
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
Networking Requirements
As shown in Figure 3-1, it is required to monitor a sub-network connected to GE3/0/0, involving:
l Collecting realtime statistics and history statistics about traffic and various packets.
l Enabling the alarm monitoring function for the traffic (in bytes) passing through the
interface and enabling the log function when the traffic sent in one minute exceeds the set
value.
l Monitoring the broadcast and multicast packets on the sub-network and enabling the alarm
function for these packets. The system then automatically reports the alarm to the NM
Station when the broadcast and multicast streams on the sub-network exceed the set value.
The NM Station (10.1.1.1/24) connects across an IP network to GE1/0/0 (10.2.2.1/24) on
the Router; GE3/0/0 (10.3.3.1/24) connects to the monitored LAN.
Configuration Roadmap
The configuration roadmap is as follows:
1. Execute the SNMP configuration command in advance to enable sending Trap messages
and configure the community name.
2. Enable collecting statistics and configure the ethernetStatsTable.
3. Configure the History Control Table.
4. Configure the EventTable.
5. Configure the AlarmTable.
6. Configure the PrialarmTable.
Data Preparation
To complete the configuration, you need the following data:
l Interval for sampling information
l Threshold for triggering alarm events
Procedure
Step 1 Configure routes between the router and the NM Station. The detailed
configuration procedure is not mentioned here.
Step 2 Enable sending Trap messages to the NM Station.
# Enable the Trap function.
<HUAWEI> system-view
[HUAWEI] sysname Router
[Router] snmp-agent trap enable
[Router] snmp trap enable feature-name rmon non-excessive all
# Verify the configuration. Only the last sampling record is displayed if you use the command
line. To check all the history records, you need to use NM Station software.
<Router> display rmon history gigabitethernet 3/0/0
History control entry 1 owned by Test300 is VALID
Samples interface : GigabitEthernet3/0/0<ifEntry.402653698>
Sampling interval : 30(sec) with 10 buckets max
Last Sampling time : 0days 00h:19m:43s
Latest sampled values :
octets :645 , packets :7
broadcast packets :7 , multicast packets :0
undersize packets :6 , oversize packets :0
fragments packets :0 , jabbers packets :0
CRC alignment errors :0 , collisions :0
Dropped packet: :0 , utilization :0
History record:
Record No.1 (Sample time: 0days 00h:02m:30s)
octets :0 , packets :0
broadcast packets :0 , multicast packets :0
undersize packets :0 , oversize packets :0
fragments packets :0 , jabbers packets :0
CRC alignment errors :0 , collisions :0
Dropped packet: :0 , utilization :0
Description: null.
Will cause log when triggered, last triggered at 0days 00h:24m:10s.
Event table 2 owned by Test300 is VALID.
Description: forUseofPrialarm
Will cause snmp-trap when triggered, last triggered at 0days 00h:26m:10s.
The NM Station receives trap messages when the set prialarm variable exceeds the preset
threshold.
----End
Configuration File
#
sysname Router
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.2.1 255.255.255.0
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.3.1 255.255.255.0
rmon-statistics enable
rmon statistics 1 owner Test300
rmon history 1 buckets 10 interval 30 owner Test300
#
rmon event 1 description null log owner Test300
rmon event 2 description forUseofPrialarm trap public owner Test300
rmon alarm 1 1.3.6.1.2.1.16.1.1.1.6.1 30 absolute rising-threshold 500 1 falling-
threshold 100 1 owner Test300
rmon prialarm 1 .1.3.6.1.2.1.16.1.1.1.6.1+.1.3.6.1.2.1.16.1.1.1.7.1
sumofbroadandmulti 30 delta rising-threshold 1000 2 falling-threshold 0 2 entrytype
forever owner Test300
#
ip route-static 10.1.1.0 255.255.255.0 10.2.2.2
#
snmp-agent
snmp-agent local-engineid 000007DB7FFFFFFF0000017C
Networking Requirements
As shown in Figure 3-2, it is required to use RMON2 to collect statistics about IP packets passing
through GE 3/0/0.
RMON2 can monitor remote hosts through the SNMP NM Station or through command lines.
This example describes only the command-line-based monitoring method.
[Figure 3-2: The NM Station (10.1.1.1/24) reaches the Router across the IP network through GE1/0/0 (10.2.2.1/24); the Router connects to the LAN through GE3/0/0 (10.3.3.1/24).]
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure RMON2.
# Configure the hlHostControlTable. Set the index to 123, and the maximum number of entries
in the nlHostTable to 100.
<HUAWEI> system-view
[HUAWEI] sysname Router
[Router] rmon2 hlhostcontroltable index 123 datasource interface gigabitethernet
3/0/0 maxentry 100 owner china status active
# Set the value of the time filter to display the entries that meet the filtering condition.
<Router> display rmon2 nlhosttable hostcontrolindex 123 timemark 1000 hostaddress
10.110.99.2
Abbreviation:
HIdx - hlHostControlIndex
PIdx - ProtocolDirLocalIndex
Addr - nlHostAddress
InPkts - nlHostInPkts
OutPkts - nlHostOutPkts
InOctes - nlHostInOctets
OutOctes - nlHostOutOctets
OutMac - nlHostOutMacNonUnicastPkts
ChgTm - nlHostTimeMark
CrtTm - nlHostCreateTime
HIdx PIdx Addr InPkts OutPkts InOctes OutOctes OutMac ChgTm CrtTm
123 1 10.110.99.2 0 78 0 10046 78 81489 40859
# Display the hlHostControlTable. You can view the number of added or deleted host entries
on the interface and the maximum number of entries in the nlHostTable.
----End
Configuration File
#
sysname Router
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.2.1 255.255.255.0
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.3.1 255.255.255.0
#
rmon2 protocoldirtable protocoldirid 8.0.0.1.0.0.8.0 parameter 2.0.0 descr ip host
supportedon owner china status active
rmon2 hlhostcontroltable index 123 datasource interface GigabitEthernet3/0/0
maxentry 100 owner china status active
#
return
4 HGMP Configuration
By running the Huawei Group Management Protocol (HGMP), you can appoint a switch as the
administrator switch to create a cluster and add a large number of Ethernet switches to the cluster.
The administrator switch performs unified management and configuration of these switches,
which simplifies maintenance and engineering.
4.1 Overview
This section describes the basic principles and typical networking of HGMP, and HGMP features
supported by the NE80E/40E.
4.2 Configuring Basic HGMP Functions
This section describes how to configure basic HGMP functions to create or manage a cluster.
4.3 Configuring Advanced HGMP Functions
This section describes how to configure advanced HGMP functions to simplify the management
and maintenance of a basic cluster.
4.4 Maintaining HGMP
This section describes how to clear the statistics on NDP, and monitor the operation status of
the HGMP cluster.
4.5 HGMP Configuration Examples
This section exemplifies how to configure HGMP and provides the networking requirements,
configuration roadmap, and configuration notes. You can better understand the configuration
procedures with the help of the configuration flowchart.
4.1 Overview
This section describes the basic principles and typical networking of HGMP, and HGMP features
supported by the NE80E/40E.
[Figure: An IP/MPLS core connects an FTP server, IDC servers, the Internet, and a Router. Below the Router, Cluster1 consists of an administrator switch (Administrator) and member switches Member1 through Member4, which connect to a DSLAM and hosts. Administrator: administrator switch; Member: member switch.]
NDP
Neighbor Discovery Protocol (NDP) is used to collect information about the directly connected
neighbors, including the device model, software version, hardware version, connection interface,
member number, private IP address used for communication within a cluster, and hardware
platform.
NOTE
Any device that supports HGMP does not forward NDP packets.
After receiving an NDP packet from the neighbor, the device compares the contents of the packet
with those of a corresponding entry in the NDP table and updates the entry.
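The NDP settings described in later procedures can be sketched as follows. This is a minimal illustration only: ndp enable and the ndp timer keywords are assumptions based on common VRP syntax and may differ by software version. The values match the defaults shown in this chapter (aging time 180 seconds, sending interval 60 seconds); the aging time must be longer than the sending interval.

```
<HUAWEI> system-view
[HUAWEI] ndp enable
[HUAWEI] ndp timer aging 180
[HUAWEI] ndp timer hello 60
```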
NTDP
In HGMP, Network Topology Discovery Protocol (NTDP) is used to collect information about
topologies. According to the neighbor information in the NDP table, the device sends and
forwards requests for topology collection, and then collects entries in the NDP table of each
device in a certain network segment.
After receiving an NTDP topology request packet, the device sends an NTDP response packet
immediately. At the same time, the device forwards the received NTDP packet to other interfaces
according to NTDP forwarding rules.
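As a sketch, topology collection can be enabled and triggered as follows. The ntdp enable and ntdp hop keywords are assumptions based on common VRP syntax; ntdp explore is the user-view command described later in this chapter, and 3 hops matches the range shown in the display ntdp output.

```
<HUAWEI> system-view
[HUAWEI] ntdp enable
[HUAWEI] ntdp hop 3
[HUAWEI] quit
<HUAWEI> ntdp explore
```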
Roles in a Cluster
HGMP defines four roles in a cluster: administrator switch, member switch, candidate switch,
and standby switch.
NOTE
You can determine the role of a switch in a cluster. Each of the four roles, however, can be
changed according to certain rules.
NAT
In HGMP, member switches in a cluster can communicate with devices on the public network
through Network Address Translation (NAT). Whether to use NAT for the communication
can be controlled through commands.
l The administrator switch is the management device in a cluster. To ensure the
communication between devices in and out of the cluster, you need to assign a public IP
address to the administrator switch.
l To ensure that devices in and out of the cluster can communicate through NAT, you need
to enable NAT of specified protocols on the administrator switch.
l NAT rules used by a cluster are automatically configured by the administrator switch. When
member switches access devices out of the cluster, they can automatically obtain the
interface mapped through NAT; when devices out of the cluster access member switches,
they need to calculate the number of the port of specified services on member switches.
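For example, NAT for FTP and SNMP can be enabled with the cluster-ftp-nat enable and cluster-snmp-nat enable commands described later in this chapter, run in the cluster view on the administrator switch. A minimal sketch (the sysname Administrator is hypothetical):

```
<Administrator> system-view
[Administrator] cluster
[Administrator-cluster] cluster-ftp-nat enable
[Administrator-cluster] cluster-snmp-nat enable
```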
Batch Distribution
HGMP can perform batch distribution over all the member switches under its management.
Objects to be distributed in batches include: the system software, configuration files, patch
files, PAF files, and license files.
l The batch distribution command can be performed only on the administrator switch.
l The administrator switch can be configured with a plug-and-play IP address, user name,
and password. If no IP address, user name, or password is specified in the command, the
plug-and-play IP address, user name, and password are adopted. If they are neither specified
in the command nor configured on the switch, the command cannot be performed.
l Member switches download specified files from the FTP server and then set them as the
default files for the next startup.
l To avoid congestion, you can set the maximum number of member switches that
concurrently download files from the FTP server.
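Based on the cluster-member get command syntax shown later in this chapter, a batch distribution of system software might look as follows. The file name, FTP server address, and credentials below are hypothetical placeholders:

```
<Administrator> system-view
[Administrator] cluster
[Administrator-cluster] cluster-member get system-software system.cc ip 2.0.0.1 user-name ftpuser password ftppwd
```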
Batch Restart
HGMP can perform batch restart over a specified group of member switches.
l During the process of batch restart, member switches do not save the current configuration.
l After receiving the batch restart command, member switches wait one second to ensure
that control packets propagate throughout the cluster.
Incremental Configuration
In a cluster, some member switches may have the same configurations, such as creating a VLAN
and enabling a feature. The incremental configuration function is used to remotely control the
selected member switches in batches. With this mode, you only need to configure a control
command list on the administrator switch. Then, you can deliver the control command list to
member switches at a time and query the control command output on each member switch. The
member selection mode can be all, device type-based, member switch ID-based, or IP address-
based.
Configuration Synchronization
After a cluster is created and configured with basic functions, you can save the configuration
files of the cluster members to a specified FTP server through the configuration synchronization
command.
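A minimal sketch of configuration synchronization, using the increment-config synchronization command shown later in this chapter and run in the cluster view on the administrator switch (the sysname Administrator is hypothetical; without a group-by option, all member switches are selected):

```
<Administrator> system-view
[Administrator] cluster
[Administrator-cluster] increment-config synchronization
```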
Security Features
After a cluster is created and configured with basic functions, you can close off the network edge
of the cluster as required so that the topology of the cluster becomes stable. When plug and
play is enabled and the Product Adaptive File (PAF) is used to make devices configured with
HGMP automatically enable NDP and NTDP on Layer 2 interfaces, a great number of Layer 2
interfaces on member switches are automatically enabled with NDP and NTDP. NDP and NTDP,
however, are not required on interfaces unrelated to the cluster. Therefore, you need to disable
NDP or NTDP on unrelated interfaces. As a result, fewer packets are transmitted and the
topology of the cluster remains stable.
l On the administrator switch, disable NDP or NTDP on unrelated interfaces in the cluster.
l After you disable NDP on unrelated interfaces in the cluster, NDP packets of the interfaces
are not sent to the administrator switch.
l After you disable NTDP on unrelated interfaces in the cluster, NTDP packets of the
interfaces are not sent to the administrator switch.
l When the topology of the cluster becomes stable, the unrelated interfaces in the cluster are
defined as interfaces that have no NDP neighbors.
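After disabling NDP and NTDP on unrelated interfaces, you can verify the result with the display member-interface-state command shown later in this chapter, run in the cluster view on the administrator switch, for example:

```
[HUAWEI_0.HUAWEI-cluster] display member-interface-state ndp
[HUAWEI_0.HUAWEI-cluster] display member-interface-state ntdp
```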
Applicable Environment
When you need to create or manage a cluster, you can configure the cluster with basic HGMP
functions.
Pre-configuration Tasks
Before configuring basic HGMP functions, complete the following tasks:
Data Preparation
To configure basic HGMP functions, you need the following data.
No. Data
2 Cluster name
4 (Optional) Aging time of NDP packets and interval for sending NDP packets
5 (Optional) Range of topology collection, hop delay and interface delay in forwarding
NTDP topology request packets, interval for topology collection
6 (Optional) ID of the management VLAN, aging time of NDP packets, interval for
sending handshake packets, address of the SNMP host, and IP addresses of the FTP
server and the SFTP server
Procedure
l Enabling NDP in the system view
1. Run:
system-view
1. Run:
system-view
1. Run:
system-view
By default, the aging time of NDP packets is set to 180 seconds. The aging time of
NDP packets must be longer than the interval for sending NDP packets.
l (Optional) Setting the interval for sending NDP packets
1. Run:
system-view
Procedure
l Enabling NTDP in the system view
1. Run:
system-view
1. Run:
system-view
By default, the hop delay is 200 ms and the interface delay is 20 ms.
l (Optional) Setting the interval for collecting topology information
1. Run:
system-view
By default, the interval for collecting topology information is set to 0 minutes, that is,
topology information is not collected regularly.
l (Optional) Enabling topology collection
1. Run the following command in the user view:
ntdp explore
You can run this command to collect topology information at any time.
----End
Procedure
l Configuring a management VLAN
1. Run:
system-view
If the administrator switch is rebooted after the HGMP cluster is created, member switches need to
be re-added into the cluster. In such a situation, numbering of these member switches may be changed.
NOTE
Names of the administrator switch and the cluster are configured and the cluster is
created.
This command can be run only on the administrator switch or on a switch that has not
joined any cluster.
Creating a cluster automatically
These steps need to be configured only on the administrator switch or on the switch which
will be the administrator in a created HGMP cluster.
In this mode, the administrator switch prompts you to confirm whether to add all the existing
candidate switches to the cluster.
1. Run:
system-view
3. Run:
version { v2 | v2c | v3 }
NOTE
This command can be run only before the cluster is set up. If the cluster is set up, you
are not allowed to change the range of private IP addresses used in the cluster.
5. Run:
auto-build [ recover ]
The auto-build command can also be used to add member switches automatically.
For configuration details, see Adding a Member Switch.
----End
Context
After a cluster is set up, you can add a member switch to the cluster either manually or
automatically.
Procedure
l Adding a member switch manually
In this mode, you must manually specify the MAC address of the member switch.
1. Run:
system-view
NOTE
If the administrator switch of HGMP cluster A considers that switch N does not belong to
cluster A but switch N considers that it belongs to cluster A, switch N is called the missing
member switch on the administrator switch.
----End
Procedure
l Deleting a cluster
Do as follows on the administrator switch:
1. Run:
system-view
A cluster is deleted.
– After the command is run on an administrator switch, except the mngvlanid and
ip-pool commands, configurations of the administrator switch in the HGMP
cluster view are deleted; all member switches automatically quit the cluster.
l Disabling a cluster
1. Run:
system-view
1. Run:
system-view
NOTE
When you run the undo administrator-address command on member switches, the member
switch temporarily exits from the cluster, whereas the administrator switch does not delete the
member switch. To delete a member switch from the HGMP cluster, run the delete-member
command.
----End
Context
If you do not need a cluster to manage a switch, you can delete the member switch from the
cluster.
Procedure
Step 1 Run:
system-view
Step 2 Run:
cluster
Step 3 Run:
delete-member member-number
----End
Prerequisites
The configurations of basic HGMP functions are complete.
Procedure
l Run the display ndp command to check the NDP configuration in the system view.
l Run the display ndp interface { interface-type interface-number [ to interface-type
interface-number ] }&<1-10> command to check the neighbor information detected through
NDP on a specified interface.
l Run the display ntdp command to check the global NTDP settings.
l Run the display ntdp device-list [ verbose ] command to check the device information
collected through NTDP.
l Run the display cluster command to check the status and statistics of the cluster.
l Run the display cluster candidates [ mac-address mac-address | verbose ] command to
check information about candidate switches.
l Run the display cluster members [ member-number | verbose ] command to check
information about member switches.
----End
Example
If the NDP neighbor can be normally established, you can run the display ndp command to
check information about the MAC addresses of all the neighboring stations and the number of
the interface on the neighboring station that is connected to the local interface.
<HUAWEI> display ndp
Neighbor discovery protocol is enabled.
Neighbor Discovery Protocol Ver: 1, Hello Timer: 60(s), Aging Timer: 180(s)
Interface: GigabitEthernet1/0/1
Status: Disabled, Packets Sent: 0, Packets Received: 0, Packets Error: 0
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 114, Packets Received: 108, Packets Error: 0
If the NDP neighbor is normally established, you can run the display ndp interface command
to check information about the MAC address of the neighboring station and the number of the
interface on the neighboring station that is connected to the local interface.
<HUAWEI> display ndp interface gigabitethernet 1/0/1
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 116, Packets Received: 110, Packets Error: 0
Neighbor 1: Aging Time: 174(s)
MAC Address : 0018-8203-39d8
Port Name : GigabitEthernet1/0/1
Software Version: Version
Device Name : NE40E
Port Duplex : FULL
Product Ver : NE40E V100R006C00
If the NTDP neighbor is normally established, you can run the display ntdp command to check
the NTDP settings.
<HUAWEI> display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection: 462ms
If device information is successfully collected through NTDP, you can run the display ntdp
device-list [ verbose ] command to view information lists of all the devices.
<HUAWEI> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
If the cluster is established successfully, you can run the display cluster command to view
information about the HGMP cluster to which the device belongs, such as the cluster name and
ID of the management VLAN.
<HUAWEI_0.HUAWEI> display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
Management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
If the cluster is established successfully, you can run the display cluster candidates command
to view information about candidate switches, such as the MAC address and device type.
<HUAWEI_0.HUAWEI> display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
If the cluster is established successfully, you can run the display cluster members command
to view information about member switches, such as the MAC address and device type. Member
switches are in the Up state.
<HUAWEI_0.HUAWEI> display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 00e0-fcb8-d6b6 Admin HUAWEI_0.Administrator-1
1 NE40E 0018-8267-7f7d Up HUAWEI_1.Member-1
2 NE40E 00e0-0003-0003 Up HUAWEI_2.Member-2
Applicable Environment
To optimize the performance parameters of the established cluster, you can configure advanced
HGMP functions to facilitate the management and maintenance of the HGMP cluster and better
manage member switches in the cluster.
Pre-configuration Tasks
Before configuring advanced HGMP functions, complete the following tasks:
l Ensuring that the device is correctly powered on and operates normally
l Configuring basic attributes of interfaces on the device
l Configuring Basic HGMP Functions
Data Preparation
To configure advanced HGMP functions, you need the following data.
No. Data
5 IP addresses of the public FTP server, SFTP server, log host, SNMP host used in the
cluster
6 Default information about the FTP server that is configured for the cluster, including
the IP address, user name, and password
Procedure
l Configure the interval for sending handshake packets.
Do as follows on the administrator switch:
1. Run:
system-view
By default, the interval for sending handshake packets is 10 seconds. This interval
must be equal to or less than one third of the holdtime of the device status.
l Configure the holdtime of the status for the member switch.
Do as follows on the administrator switch:
1. Run:
system-view
1. Run:
system-view
Before setting up a cluster, you need to assign a multicast MAC address to the cluster
or use the default multicast MAC address. To enhance the network security or if the
default multicast MAC address is already used by other services on the network, you
can reassign a multicast MAC address to the cluster within the permitted range. Once
the cluster is set up, you cannot change the multicast MAC address of the cluster. In
addition, you need to assign the same multicast MAC address to all the devices in the
cluster.
l Configure the mode for interfaces in the cluster to join a VLAN.
1. Run:
system-view
Communication interfaces in the cluster are added to the management VLAN in trunk
mode.
l Configure public servers and hosts.
1. Run:
system-view
The member switches in a cluster can communicate with the FTP server in either of the
following modes:
l Non-NAT: There must be reachable routes between member switches and FTP server.
l NAT: The cluster-ftp-nat enable command must be run in the cluster view to enable the
FTP NAT function on the administrator switch. The NAT rules are automatically generated
on the administrator switch, and the member switches obtain the NAT mapped ports.
The FTP NAT function on the administrator switch is disabled by default. That is, the member
switches communicate with the FTP server in non-NAT mode.
After the FTP server for the cluster is configured successfully, you can run the cluster-ftp
command so that the member switches can access the FTP server.
4. Run:
sftp-server ip-address
The member switches in a cluster can communicate with the SNMP server in either of the
following modes:
l Non-NAT: There must be reachable routes between member switches and SNMP server.
l NAT: The cluster-snmp-nat enable command must be run in the cluster view to enable
the SNMP NAT function on the administrator switch. The NAT rules are automatically
generated on the administrator switch, and the member switches obtain the NAT mapped
ports.
The SNMP NAT function on the administrator switch is enabled by default. That is, the member
switches communicate with the SNMP server in NAT mode.
6. Run:
logging-host ip-address
Procedure
l Configuring the batch distribution function
Do as follows on the administrator switch:
1. Run:
system-view
The timeout period for member switches to download the configuration file, the
version file or the patch files through FTP is configured.
5. Run:
cluster-member [ group-by { device-type device-type | ip {ip-address [ to
ip-address ] } &<1-10> | member-number { member-number [ to member-
number ] } &<1-10> } ] get { configuration-file | system-software | patch
| paf | license } file-name [ ip ftp-ip-address user-name user-name
password password ] [ path-separator pathseparator ]
– During the process of batch distribution, the group-by command can be used to
specify member switch groups according to different selection modes.
– If Step 3 is not performed, you must enter the IP address, user name, and password
when using this command.
– If Step 3 is performed, the IP address, user name, and password configured in Step
3 are used by default.
– IP addresses used in batch distribution are private IP addresses used in the cluster.
l Configuring the batch restart function
Do as follows on the administrator switch:
1. Run:
system-view
– To configure the management VLAN for the interface of the administrator switch,
you should run the port trunk allow-pass vlan command rather than the port
default vlan command if the cluster-packet-extend enable command needs to
be used. This interface is directly connected to the candidate switch.
l Configuring the incremental configuration function
1. Run:
system-view
A message is displayed indicating whether the commands in the command list are sent to
the specified member switches.
1. Run:
system-view
3. Run:
increment-config synchronization [ group-by { device-type device-type |
ip {ip-address [ to ip-address ] } &<1-10> | member-number { member-
number [ to member-number ] } &<1-10> } ]
A message is displayed indicating whether the configuration files of the specified member
switches are synchronized to the FTP server.
– The member selection mode can be device type-based, member switch ID-based,
IP address-based, or all.
– This command is valid only after the cluster is enabled.
l Configuring security features
1. Run:
system-view
----End
Prerequisites
The configurations of the Advanced HGMP are complete.
Procedure
l Run the display cluster-increment-result command to check the delivery of the
incremental configuration.
l Run the display cluster-license command to check the cluster license.
l Run the display cluster-topology-info command to check the cluster topology.
l Run the display increment-command command to check the incremental configuration
commands.
l Run the display increment-synchronization-result command to check whether
configuration files of member switches are synchronized to the FTP server.
Example
If the incremental configuration command is successfully delivered to member switches, run the
display cluster-increment-result command, and you can view that success is displayed.
<HUAWEI_0.HUAWEI> display cluster-increment-result
The result of member switches executing increment commands:
------------------------------------------------------------------------------
SN Device MacAddress IpAddress Result CommandId
------------------------------------------------------------------------------
2 NE40E 0003-0003-0003 10.0.0.3 success -
3 NE40E 0004-0004-0004 10.0.0.4 success -
Run the display cluster-license command, and you can check the contents of the cluster license,
including the number of member switches that can be managed by the administrator switch and
maximum layers that member switches can concatenate.
<HUAWEI_0.HUAWEI> display cluster-license
The max numbers and hops of manage member switch:
-------------------------------------------------------------
Max numbers of manage member switch: 255
Max hops of manage member switch : 16
Run the display cluster-topology-info command, and you can view the cluster topology,
including the topology of normal links, candidate links, and faulty links.
<HUAWEI> display cluster-topology-info
The topology information about the cluster:
<-->:normal device <++>:candidate device <??>:lost device
-------------------------------------------------------------------------
Total topology node number is 5.
[HUAWEI_0.Administrator: Root-00e0-ad14-c600]
|-(GigabitEthernet1/0/2)<-->(GigabitEthernet1/0/1)[HUAWEI_3.Member-3: 00e0-
da1c-4c00]
| |-(GigabitEthernet1/0/3)<-->(GigabitEthernet1/0/1)[HUAWEI_2.Member-2:
00e0-875b-8f00]
| | |-(GigabitEthernet1/0/0)<-->(GigabitEthernet1/0/0)[HUAWEI_1.Member-1:
00e0-0f68-6f00]
|-(GigabitEthernet1/0/1)<-->(GigabitEthernet1/0/2)[HUAWEI_4.Member-4:
00e0-9f7e-0b00]
Run the display increment-command command, and you can check the incremental
configuration of the cluster, including the number and contents of the incremental configuration.
<HUAWEI> display increment-command
The content of increment commands:
------------------------------------------------------------------------------
SN Content
------------------------------------------------------------------------------
10 vlan batch 10 to 20
20 ip route-static 2.0.0.0 8 10.0.0.1
If the configuration files of member switches are successfully synchronized with the FTP server,
run the display increment-synchronization-result command, and you can view that success
is displayed.
<HUAWEI> display increment-synchronization-result
The result of member switches' synchronization:
------------------------------------------------------------------------------
SN Device MacAddress IpAddress result
------------------------------------------------------------------------------
1 NE40E 0002-0002-0002 10.0.0.2 success
2 NE40E 0003-0003-0003 10.0.0.3 success
3 NE40E 0004-0004-0004 10.0.0.4 success
If member switches successfully obtain configuration files, PAF files, or patch files, run the
display member-getfile-state command, and you can view that success is displayed.
<HUAWEI> display member-getfile-state
The status of member switches getting file:
------------------------------------------------------------------------
SN Device MacAddress IPAddress Result
------------------------------------------------------------------------
2 NE40E 0002-0002-0002 10.0.0.2 success
3 NE40E 0003-0003-0003 10.0.0.3 success
NDP and NTDP are not required on member switch interfaces unrelated to the cluster. If NDP
and NTDP are disabled successfully, run the display member-interface-state command, and
you can view that success is displayed.
<HUAWEI_0.HUAWEI> display member-interface-state ndp
The result of member switches executed disable member interface command:
------------------------------------------------------------------------------
SN Device MacAddress IpAddress result
------------------------------------------------------------------------------
3 NE40E 0004-0004-0004 10.0.0.4 success
2 NE40E 0003-0003-0003 10.0.0.3 success
1 NE40E 0002-0002-0002 10.0.0.2 success
[HUAWEI_0.HUAWEI-cluster] display member-interface-state ntdp
The result of member switches executed disable member interface command:
------------------------------------------------------------------------------
SN Device MacAddress IpAddress result
------------------------------------------------------------------------------
3 NE40E 0004-0004-0004 10.0.0.4 success
2 NE40E 0003-0003-0003 10.0.0.3 success
1 NE40E 0002-0002-0002 10.0.0.2 success
If member switches are successfully restarted, run the display member-reboot-state command,
and you can view that success is displayed.
<HUAWEI> display member-reboot-state
The result of member switches rebooting:
------------------------------------------------------------------------
SN Device MacAddress IPAddress Result
------------------------------------------------------------------------
1 NE40E 0002-0002-0002 10.0.0.2 success
2 NE40E 0003-0003-0003 10.0.0.3 success
------------------------------------------------------------------------
If the current configurations are successfully saved on member switches, run the display
member-save-state command, and you can view that success is displayed.
<HUAWEI> display member-save-state
The result of member switches saving:
------------------------------------------------------------------------
SN Device MacAddress IPAddress Result
------------------------------------------------------------------------
If member switches successfully synchronize configuration files to the FTP server, run the
display synchronization-result command, and you can view that success is displayed.
<HUAWEI> display synchronization-result
The result of member switches' synchronization:
------------------------------------------------------------------------------
SN Device MacAddress IpAddress result
------------------------------------------------------------------------------
1 NE40E 0002-0002-0002 10.0.0.2 success
2 NE40E 0003-0003-0003 10.0.0.3 success
3 NE40E 0004-0004-0004 10.0.0.4 success
Context
CAUTION
Once statistics are cleared, they cannot be restored. Confirm the action before you use the
command.
Procedure
Step 1 Run the reset ndp statistics [ interface { interface-type interface-number [ to interface-type
interface-number ] } &<1-10> ] command in the user view to clear the NDP statistics.
----End
Context
In routine maintenance, you can run the following commands in any view to display the operation
status of HGMP.
Procedure
l Run the display ndp command to check the NDP configuration in the system view.
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
NAT feature cannot be configured on the X1 and X2 models of the NE80E/40E.
Networking Requirements
As shown in Figure 4-3, a carrier sets up a Layer 2 network through Layer 2 devices. A large
number of Layer 2 devices are difficult to maintain and manage on site. In addition, to save
public IP addresses, you cannot assign a public IP address to each device.
To effectively manage the Layer 2 network, you can create a cluster for the Layer 2 network
and manage the cluster through HGMP.
In this example, Administrator-1 is nearest to the network administrator and is therefore
appointed as the administrator switch.
NOTE
For convenience, only four devices in the Layer 2 network are described.
Figure 4-3 Networking diagram of configuring basic HGMP functions for a cluster
[Figure: The FTP server (2.0.0.1/8), SFTP server (2.0.0.2/8), NM station (3.0.0.1/8), and log station (4.0.0.1/8) reach the cluster across the IP/MPLS core. The cluster consists of Administrator-1 (10.0.0.1/8, with uplink GE1/0/1 at 1.0.0.2/8) connected through GE1/0/2 and GE1/0/3 to Member-1 and Member-2, which in turn connect through GE1/0/1 and GE1/0/2 to Member-3 (10.0.0.4/8) and other member switches.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a management VLAN on all devices. Enable NDP and NTDP to ensure that each
device can detect the topology structure of the network through NTDP.
2. Choose the administrator switch, and then create a cluster named HUAWEI on the
administrator switch.
3. Add all the devices that support HGMP in the Layer 2 network to the cluster.
4. Assign an IP address to VLANIF 10 to facilitate the communication between member
switches in the cluster and devices out of the cluster.
5. Configure public servers and hosts for the cluster.
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is 10
l IP address of VLANIF 10, that is 1.0.0.1/8
l Address pool of the cluster, that is 10.0.0.0/8
l IP address of the administrator in the cluster, that is 10.0.0.1/8
l MAC addresses of devices, as shown in Figure 4-3
l IP addresses of servers and hosts, as shown in Figure 4-3
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the
Enable state. The Device Name field shows the host name of the neighboring node, and the Port
Name field shows the interface on the neighboring node that connects to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
# On the devices, enable NTDP in the system view and on the interfaces, and set the interval
and range for NTDP topology collection to 10 minutes and 3 hops, respectively.
After the previous configuration, globally check the NTDP configuration on the administrator
switch. You can find that the interval and range for NTDP topology collection are 10 minutes
and 3 hops, respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After the topology collection function is enabled manually on the administrator switch, check
the device information collected through NTDP. You can find the MAC addresses and types of
the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
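Given the ip-pool configuration above, the administrator switch keeps the first address of the pool (10.0.0.1) and, as the member lists later in this example show, member switches receive the following addresses in order of their serial numbers. The Python sketch below illustrates this numbering only; the device's actual allocator may assign addresses differently.

```python
import ipaddress

POOL_BASE = ipaddress.ip_address("10.0.0.1")  # first address of the cluster pool

def cluster_ip(sn: int) -> str:
    """Cluster-internal IP for serial number `sn`.

    SN 0 is the administrator switch (10.0.0.1); a member with SN n is
    assumed here to receive the next address in sequence (10.0.0.{n+1}).
    """
    return str(POOL_BASE + sn)

# Administrator and the three members from this example:
for sn in range(4):
    print(sn, cluster_ip(sn))
```

The addresses printed match the synchronization-result and cluster member lists shown in this example.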
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the
candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member
switches in the cluster on the administrator switch. You can find that all the member switches
are added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
# To ensure the normal communication between member switches in the cluster and devices out
of the cluster, assign an IP address to VLANIF 10 on the administrator switch.
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP Sending Frames' Format is PKTFMT_ETHNT_
2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
# This operation ensures that a reachable route exists between switches in the cluster and the
servers or hosts. Alternatively, you can use dynamic routes.
[HUAWEI_0.Administrator-1] ip route-static 0.0.0.0 0 1.0.0.2
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
cluster-ftp-nat enable
ftp-server 2.0.0.1
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
l Configuration file of a member switch.
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
Networking Requirements
As shown in Figure 4-4, all the Layer 2 switches belong to the same cluster. Administrator-1 is
the administrator switch of the cluster and other switches are member switches. The member ID
of Member-1 is 1, the member ID of Member-2 is 2 and the member ID of Member-3 is 3.
To upload files to Member-1, Member-2, and Member-3 or download files from them, you can
set up an FTP connection between the devices out of the cluster and member switches in NAT
or non-NAT mode.
NOTE
In this configuration example where the NAT mode is adopted, Member-3 accesses the FTP server
(2.0.0.1/8) out of the cluster and devices out of the cluster access the FTP server (Member-2) in the cluster.
Figure 4-4 Networking diagram of configuring the interconnection of FTP servers and devices
in and out of the HGMP cluster (in NAT Mode)
[Figure: Administrator-1 (cluster IP 10.0.0.1/8) connects to the IP/MPLS core (next hop
1.0.0.2/8), which hosts the FTP server (2.0.0.1/8), SFTP server (2.0.0.2/8), NM station
(3.0.0.1/8), and log station (4.0.0.1/8). Within the cluster, Administrator-1 connects over GE
interfaces to member switches Member-1, Member-2, ..., and Member-3 (cluster IP 10.0.0.4/8).]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster and configure basic HGMP functions for the cluster according to the steps
described in the section "Example for Configuring Basic HGMP Functions for a
Cluster."
2. For the situation that Member-3 accesses the FTP server (2.0.0.1/8) out of the cluster:
l Run the cluster-ftp command on the member switch to set up a connection with the
public FTP server of the cluster.
3. For the situation that a device out of the cluster accesses the FTP server (Member-2):
l Calculate the port number reserved on the administrator switch for the FTP protocol of
a certain member switch in the cluster.
l Run the FTP client program on the PC and create an FTP connection with the member
switch.
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is 10
l IP address of VLANIF 10 that is 1.0.0.1/8 and a reachable route between VLANIF 10 and
the FTP server
l Address pool of the cluster, that is 10.0.0.0/8
l IP address of the administrator switch used in the cluster, that is 10.0.0.1/8
l Member-2 serving as the FTP server in the cluster with the member ID being 2
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the
Enable state. The Device Name field shows the host name of the neighboring node, and the Port
Name field shows the interface on the neighboring node that connects to the local interface.
After the previous configuration, globally check the NTDP configuration on the administrator
switch. You can find that the interval and range for NTDP topology collection are 10 minutes
and 3 hops, respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After the topology collection function is enabled manually on the administrator switch, check
the device information collected through NTDP. You can find the MAC addresses and types of
the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the
candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member
switches in the cluster on the administrator switch. You can find that all the member switches
are added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
# To ensure the normal communication between member switches in the cluster and devices out
of the cluster, assign an IP address to VLANIF 10 on the administrator switch.
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP Sending Frames' Format is PKTFMT_ETHNT_
2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
# This operation ensures that a reachable route exists between switches in the cluster and the
servers or hosts. Alternatively, you can use dynamic routes.
[HUAWEI_0.Administrator-1] ip route-static 0.0.0.0 0 1.0.0.2
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
l A device out of the cluster accesses the FTP server in the cluster in NAT mode.
# Configure an FTP server on Member-2. For configuration details, see Configuration Files;
they are not repeated here.
# Calculate the port number reserved for the FTP service of a member switch in the cluster.
The member ID of Member-2 is 2. Using the formula for computing port numbers reserved
for a cluster (reserved port number = base port number + member ID * 2), you can obtain
that the reserved port number, which is used by Member-2 to enable the FTP server, is
53248 + 2*2 = 53252.
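The calculation above can be written as a one-line helper. This is a sketch of the formula as stated in this example; the base port number 53248 is taken from the text, and other clusters or software versions may use a different base.

```python
BASE_PORT = 53248  # base port number reserved for the cluster (from this example)

def reserved_ftp_port(member_id: int) -> int:
    """Port reserved on the administrator switch for a member's FTP service:
    base port number + member ID * 2 (formula given in this example)."""
    return BASE_PORT + member_id * 2

print(reserved_ftp_port(2))  # Member-2 -> 53252
```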
# Run the FTP client program on the PC and set up an FTP connection with Member-2 in
NAT mode.
NOTE
A device out of the cluster accesses the FTP server in the cluster in NAT mode. The IP address of the FTP
server is that of the management VLANIF interface on the administrator switch. The FTP server uses
a port number reserved in the cluster instead of the commonly used port 21.
ftp> open 1.0.0.1 53252
Connected to 1.0.0.1.
220 FTP service ready.
User (1.0.0.1:(none)): hgmp
331 Password required for hgmp.
Password:
230 User logged in.
ftp>
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
cluster-ftp-nat enable
ftp-server 2.0.0.1
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
Networking Requirements
As shown in Figure 4-5, all the Layer 2 switches belong to the same cluster. Administrator-1 is
the administrator switch of the cluster and other switches are member switches. The member ID
of Member-2 is 2 and the member ID of Member-3 is 3.
To upload files to Member-1, Member-2, and Member-3 or download files from them, you can
set up an FTP connection between devices out of the cluster and member switches in NAT or
non-NAT mode.
NOTE
In this configuration example where the non-NAT mode is adopted, Member-3 accesses the FTP server
(2.0.0.1/8) out of the cluster and devices out of the cluster access the FTP server (Member-2) in the cluster.
Figure 4-5 Networking diagram of configuring the interconnection of FTP servers and devices
in and out of the HGMP cluster (in non-NAT mode)
[Figure: Administrator-1 (cluster IP 10.0.0.1/8) connects to the IP/MPLS core (next hop
1.0.0.2/8), which hosts the FTP server (2.0.0.1/8), SFTP server (2.0.0.2/8), NM station
(3.0.0.1/8), and log station (4.0.0.1/8). Within the cluster, Administrator-1 connects over GE
interfaces to member switches Member-1, Member-2, ..., and Member-3 (cluster IP 10.0.0.4/8).]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster and configure basic HGMP functions for the cluster according to the steps
described in the section Example for Configuring Basic HGMP Functions for a
Cluster.
2. Disable FTP NAT on the administrator switch (the function is disabled by default).
l Run the ftp command on the member switch to set up a connection with the public FTP
server of the cluster.
5. For the situation that a device out of the cluster accesses the FTP server (Member-2):
l Run the FTP client program on the PC and create an FTP connection with the member
switch.
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is 10
l IP address of VLANIF 10 that is 1.0.0.1/8 and a reachable route between VLANIF 10 and
the FTP server
l Address pool of the cluster, that is 10.0.0.0/8
l IP address of the administrator switch used in the cluster, that is 10.0.0.1/8
l Member-2 serving as the FTP server in the cluster with the member ID being 2
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
[Member-1-GigabitEthernet1/0/2] portswitch
[Member-1-GigabitEthernet1/0/2] port default vlan 10
[Member-1-GigabitEthernet1/0/2] quit
[Member-1] interface vlanif 10
[Member-1-Vlanif10] quit
[Member-3-GigabitEthernet1/0/1] quit
After the previous configuration, you can find that NDP on the administrator switch is in the
Enable state. The Device Name field shows the host name of the neighboring node, and the Port
Name field shows the interface on the neighboring node that connects to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 0, Packets Received: 11, Packets Error: 0
Neighbor 1: Aging Time: 2(s)
MAC Address : 0002-0002-0002
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-1
Port Duplex : FULL
Product Ver : NE40E
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 6, Packets Received: 16, Packets Error: 0
Neighbor 1: Aging Time: 5(s)
MAC Address : 0003-0003-0003
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-2
Port Duplex : FULL
Product Ver : NE40E
After the previous configuration, globally check the NTDP configuration on the administrator
switch. You can find that the interval and range for NTDP topology collection are 10 minutes
and 3 hops, respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After the topology collection function is enabled manually on the administrator switch, check
the device information collected through NTDP. You can find the MAC addresses and types of
the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the
candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member
switches in the cluster on the administrator switch. You can find that all the member switches
are added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP Sending Frames' Format is PKTFMT_ETHNT_
2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
Step 9 Configure routes for the member switches and ensure that reachable routes exist between
member switches and the FTP server.
# Configure member switch 1.
[HUAWEI_1.Member-1] ip route-static 2.0.0.0 8 10.0.0.1
NOTE
Multiple member switches can be configured simultaneously through incremental configuration. For
configuration details, see Example for Configuring the Incremental Configuration Function for an
HGMP Cluster.
l Devices out of the cluster access the FTP server in the cluster in non-NAT mode.
# Configure an FTP server on the corresponding member switch (Member-2). For configuration
details, see Configuration Files; they are not repeated here.
# Run the FTP client program on the PC and set up an FTP connection with Member-2 in
non-NAT mode.
NOTE
Devices out of the cluster access the FTP server in the cluster in non-NAT mode. The IP address of
the FTP server is that of the management VLANIF interface on the member switch. Because the connection
is not translated on the administrator switch, the FTP server uses the well-known port 21 rather than a
reserved port number.
ftp> open 10.0.0.2
Connected to 10.0.0.2
220 FTP service ready.
User (10.0.0.2:(none)): hgmp
331 Password required for hgmp.
Password:
230 User logged in.
ftp>
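Comparing this transcript with the NAT-mode example, the two access modes differ only in the address and port that the outside client dials. The Python summary below is illustrative: the addresses and base port number are taken from these examples, and the helper name is hypothetical.

```python
ADMIN_VLANIF_IP = "1.0.0.1"  # management VLANIF address on the administrator switch
BASE_PORT = 53248            # base port number reserved for the cluster

def ftp_target(member_ip: str, member_id: int, nat_mode: bool) -> tuple:
    """(ip, port) that a client outside the cluster dials to reach a member's FTP server."""
    if nat_mode:
        # NAT mode: dial the administrator's VLANIF IP on the reserved port.
        return (ADMIN_VLANIF_IP, BASE_PORT + member_id * 2)
    # Non-NAT mode: dial the member's cluster IP directly; the transcript
    # above uses the default FTP port.
    return (member_ip, 21)

print(ftp_target("10.0.0.2", 2, True))   # NAT-mode example: ('1.0.0.1', 53252)
print(ftp_target("10.0.0.2", 2, False))  # non-NAT example: ('10.0.0.2', 21)
```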
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
l Configuration file of Member-1.
#
sysname Member-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
ip route-static 2.0.0.0 255.0.0.0 10.0.0.1
#
return
l Configuration file of Member-2.
#
sysname Member-2
#
FTP server enable
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
aaa
local-user hgmp password simple hgmp
local-user hgmp service-type ftp
local-user hgmp ftp-directory cfcard:
#
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
ip route-static 2.0.0.0 255.0.0.0 10.0.0.1
#
return
Networking Requirements
As shown in Figure 4-6, all the Layer 2 switches belong to the same cluster. Administrator-1 is
the administrator switch of the cluster and other switches are member switches. The member ID
of Member-2 is 2 and the member ID of Member-3 is 3.
When Member-1, Member-2, and Member-3 are required to send packets to the SNMP host, a
connection can be set up between the SNMP host out of the cluster and member switches in
NAT or non-NAT mode.
NOTE
In this configuration example where the NAT mode is adopted, Member-3 accesses the outside SNMP host
(3.0.0.1/8).
Figure 4-6 Networking diagram of configuring devices in the HGMP cluster to access the
outside SNMP host (in NAT mode)
[Figure: Administrator-1 (cluster IP 10.0.0.1/8) connects to the IP/MPLS core (next hop
1.0.0.2/8), which hosts the FTP server (2.0.0.1/8), SFTP server (2.0.0.2/8), NM station
(3.0.0.1/8), and log station (4.0.0.1/8). Within the cluster, Administrator-1 connects over GE
interfaces to member switches Member-1, Member-2, ..., and Member-3 (cluster IP 10.0.0.4/8).]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster and configure basic HGMP functions for the cluster according to the steps
described in Example for Configuring Basic HGMP Functions for a Cluster.
2. Enable SNMP NAT on the administrator switch (the function is enabled by default).
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is 10
l IP address of the SNMP host, that is 3.0.0.1/8
l IP address of VLANIF 10 that is 1.0.0.1/8 and a reachable route between VLANIF 10 and
the SNMP host
l Address pool of the cluster, that is 10.0.0.0/8
l IP address of the administrator switch used in the cluster, that is 10.0.0.1/8
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the Enable state, that the Device Name field shows the host name of the neighboring node, and that the Port Name field shows the interface on the neighboring node that connects to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 0, Packets Received: 11, Packets Error: 0
Neighbor 1: Aging Time: 2(s)
MAC Address : 0002-0002-0002
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-1
Port Duplex : FULL
Product Ver : NE40E
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 6, Packets Received: 16, Packets Error: 0
Neighbor 1: Aging Time: 5(s)
MAC Address : 0003-0003-0003
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-2
Port Duplex : FULL
Product Ver : NE40E
After the previous configuration, globally check the NTDP configuration on the administrator switch. You can find that the interval and range for NTDP topology collection are 10 minutes and 3 hops respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After the topology collection function is enabled manually on the administrator switch, check the device information collected through NTDP. You can find the MAC addresses and types of the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
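The ip-pool behavior can be sketched as sequential assignment from the pool. This assumes the administrator takes the first address and members take the following ones, an assignment order we infer from the figure (Member-3 holds 10.0.0.4/8); the function is a hypothetical illustration, not device code.

```python
import ipaddress

def assign_cluster_ips(pool_start: str, prefix_len: int, device_count: int):
    """Assign consecutive addresses from pool_start: the first to the
    administrator, the rest to member switches (assumed order, sketch only)."""
    net = ipaddress.ip_network(f"{pool_start}/{prefix_len}", strict=False)
    base = ipaddress.ip_address(pool_start)
    addrs = [base + i for i in range(device_count)]
    assert all(a in net for a in addrs)  # every assigned address stays in the pool
    return [str(a) for a in addrs]

# Administrator-1 plus Member-1..3, matching the example topology.
print(assign_cluster_ips("10.0.0.1", 8, 4))
```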
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member switches in the cluster on the administrator switch. You can find that all the member switches have been added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP Sending Frames' Format is PKTFMT_ETHNT_
2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
cluster-ftp-nat enable
ftp-server 2.0.0.1
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
l Configuration file of Member-1.
#
sysname Member-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
l Configuration file of Member-2.
#
sysname Member-2
#
FTP server enable
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
aaa
local-user hgmp password simple hgmp
local-user hgmp service-type ftp
local-user hgmp ftp-directory cfcard:
#
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
Networking Requirements
As shown in Figure 4-7, all the Layer 2 switches belong to the same cluster. Administrator-1 is
the administrator switch of the cluster and other switches are member switches. The member ID
of Member-2 is 2 and the member ID of Member-3 is 3.
When Member-1, Member-2, and Member-3 are required to send packets to the SNMP host out
of the cluster, a connection can be set up between the SNMP host and member switches in NAT
or non-NAT mode.
NOTE
In this configuration example where the non-NAT mode is adopted, Member-3 accesses the SNMP host
(3.0.0.1/8).
Figure 4-7 Networking diagram of configuring devices in the HGMP cluster to access the
outside SNMP host (in non-NAT mode)
The figure shows Administrator-1 (10.0.0.1/8) connected through GE1/0/3 to the IP/MPLS core (next hop 1.0.0.2/8), which hosts the FTP server (2.0.0.1/8), the SFTP server (2.0.0.2/8), the NM station (3.0.0.1/8), and the log station (4.0.0.1/8). Inside the cluster, Member-1, Member-2, and Member-3 (10.0.0.4/8) connect to Administrator-1 through GE1/0/1 and GE1/0/2.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster and configure basic HGMP functions for the cluster according to the steps
described in Example for Configuring Basic HGMP Functions for a Cluster.
2. Disable SNMP NAT on the administrator switch. (The function is enabled by default.)
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is, 10
l IP address of the SNMP host, that is, 3.0.0.1/8
l IP address of VLANIF 10, that is, 1.0.0.1/8, and a reachable route between VLANIF 10 and
the SNMP host
l Address pool of the cluster, that is, 10.0.0.0/8
l IP address of the administrator switch used in the cluster, that is, 10.0.0.1/8
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the Enable state, that the Device Name field shows the host name of the neighboring node, and that the Port Name field shows the interface on the neighboring node that connects to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 0, Packets Received: 11, Packets Error: 0
Neighbor 1: Aging Time: 2(s)
MAC Address : 0002-0002-0002
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-1
Port Duplex : FULL
Product Ver : NE40E
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 6, Packets Received: 16, Packets Error: 0
Neighbor 1: Aging Time: 5(s)
MAC Address : 0003-0003-0003
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-2
Port Duplex : FULL
Product Ver : NE40E
After the previous configuration, globally check the NTDP configuration on the administrator switch. You can find that the interval and range for NTDP topology collection are 10 minutes and 3 hops respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After the topology collection function is enabled manually on the administrator switch, check the device information collected through NTDP. You can find the MAC addresses and types of the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member switches in the cluster on the administrator switch. You can find that all the member switches have been added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP Sending Frames' Format is PKTFMT_ETHNT_
2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
Step 11 Configure the routes of member switches to ensure that reachable routes exist between member
switches and the SNMP host.
# Configure member switch 1.
[HUAWEI_1.Member-1] ip route-static 3.0.0.0 8 10.0.0.1
NOTE
Multiple member switches can be configured simultaneously through incremental configuration. For
configuration details, see Example for Configuring the Incremental Configuration Function for an
HGMP Cluster.
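The effect of this static route can be sketched as a longest-prefix-match lookup over the member's route table; a minimal Python model (the table format and function name are ours, not device behavior):

```python
import ipaddress

# Member-1's route table after the static route above (sketch; next hop only).
ROUTES = [
    (ipaddress.ip_network("3.0.0.0/8"), "10.0.0.1"),  # to the SNMP host network
]

def next_hop(dst: str):
    """Longest-prefix-match lookup: the most specific matching route wins."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(net, nh) for net, nh in ROUTES if dst_ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

print(next_hop("3.0.0.1"))  # reached via the administrator at 10.0.0.1
print(next_hop("5.0.0.1"))  # no matching route in this sketch
```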
After the previous configuration, run the display current-configuration filter snmp command. You can find that the SNMP agent is enabled and that the trap target host (udp-domain) of the administrator switch is 3.0.0.1, the address of the SNMP host. The displayed information about the administrator switch and Member-3 is taken as an example.
[HUAWEI_0.Administrator-1] display current-configuration filter snmp
snmp-agent target-host trap address udp-domain 3.0.0.1 params securityname cluster
[HUAWEI_3.Member-3] display current-configuration filter snmp
snmp-agent target-host trap address udp-domain 3.0.0.1 params securityname cluster
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
undo cluster-snmp-nat enable
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
l Configuration file of Member-1.
#
sysname Member-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
ip route-static 3.0.0.0 255.0.0.0 10.0.0.1
#
snmp-agent
snmp-agent target-host trap address udp-domain 3.0.0.1 params securityname
cluster
#
return
l Configuration file of Member-2.
#
sysname Member-2
#
FTP server enable
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
aaa
local-user hgmp password simple hgmp
local-user hgmp service-type ftp
local-user hgmp ftp-directory cfcard:
#
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
ip route-static 3.0.0.0 255.0.0.0 10.0.0.1
#
snmp-agent
snmp-agent target-host trap address udp-domain 3.0.0.1 params securityname
cluster
#
return
Networking Requirements
As shown in Figure 4-8, all the Layer 2 switches belong to the same cluster. Administrator-1 is
the administrator switch of the cluster and other switches are member switches. The member ID
of Member-2 is 2 and the member ID of Member-3 is 3.
Member-2 and Member-3 are required to download configuration files in batches from the FTP
server.
Figure 4-8 Networking diagram of configuring the batch distribution function for an HGMP
cluster
The figure shows Administrator-1 (10.0.0.1/8) connected through GE1/0/3 to the IP/MPLS core (next hop 1.0.0.2/8), which hosts the FTP server (2.0.0.1/8), the SFTP server (2.0.0.2/8), the NM station (3.0.0.1/8), and the log station (4.0.0.1/8). Inside the cluster, Member-1, Member-2, and Member-3 (10.0.0.4/8) connect to Administrator-1 through GE1/0/1 and GE1/0/2.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster according to the steps described in 4.5.1 Example for Configuring Basic
HGMP Functions for a Cluster.
2. Configure the interconnection between devices in the cluster and the FTP server outside the cluster.
NOTE
l Configure the interconnection of FTP servers and devices in and out of the HGMP cluster in
NAT or non-NAT mode. The following takes the configuration in NAT mode as an example.
l If the system software, patch files, PAF files, license files, or configuration files can be
distributed in batches without accessing the FTP server outside the cluster, you can skip this
step.
3. Configure batch distribution on the administrator switch.
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is, 10
l IP address of VLANIF 10, that is, 1.0.0.1/8, and a reachable route between VLANIF 10 and
the FTP server
l Address pool of the cluster, that is, 10.0.0.0/8
l IP address of the administrator switch used in the cluster, that is, 10.0.0.1/8
l Member IDs of Member-2 and Member-3, that is, 2 and 3 respectively
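The batch distribution step can be modeled as mapping each target member ID to the file it should fetch from the FTP server and install as its next-startup configuration; the helper below is a hypothetical illustration, not a device command:

```python
def batch_distribution_plan(member_ids, config_file="vrpcfg-hgmp.zip"):
    """Model of batch distribution: every listed member is told to fetch the
    same configuration file and store it on its cfcard (sketch only)."""
    return {mid: f"cfcard:/{config_file}" for mid in member_ids}

# Member-2 and Member-3, as required by this example.
print(batch_distribution_plan([2, 3]))
```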
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the Enable state, that the Device Name field shows the host name of the neighboring node, and that the Port Name field shows the interface on the neighboring node that connects to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
# On the devices, enable NTDP in the system view and on the interfaces, and set the interval
and range for NTDP topology collection to 10 minutes and 3 hops respectively.
After the previous configuration, globally check the NTDP configuration on the administrator switch. You can find that the interval and range for NTDP topology collection are 10 minutes and 3 hops respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After the topology collection function is enabled manually on the administrator switch, check the device information collected through NTDP. You can find the MAC addresses and types of the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member switches in the cluster on the administrator switch. You can find that all the member switches have been added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
# To ensure normal communication between member switches in the cluster and devices outside
the cluster, assign an IP address to VLANIF 10 on the administrator switch.
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP Sending Frames' Format is PKTFMT_ETHNT_
2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
# This operation ensures that a reachable route exists between switches in the cluster and the
servers or hosts. Alternatively, you can use dynamic routes.
[HUAWEI_0.Administrator-1] ip route-static 0.0.0.0 0 1.0.0.2
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
# Run the dir command on the member switches. You can find that the member switches have
successfully downloaded the specified configuration files. Take Member-2 as an example.
<HUAWEI_2.Member-2> dir *.zip
Directory of cfcard:/
Idx Attr Size(Byte) Date Time FileName
0 -rw- 1,491 Sep 03 2008 17:43:52 vrpcfg.zip
1 -rw- 752 Aug 05 2008 15:04:36 vrpcfg-hgmp.zip
506,880 KB total (35,920 KB free)
<HUAWEI_2.Member-2> cd slave#cfcard:
<HUAWEI_2.Member-2> dir *.zip
Directory of slave#cfcard:/
# Run the display startup command on the member switches. You can find that the name of the
configuration file for the next startup of each member switch has changed. Take Member-2 as an
example.
<HUAWEI_2.Member-2> display startup
MainBoard:
Configured startup system software: cfcard:/vrpv500r006c01b100.cc
Startup system software: cfcard:/vrpv500r006c01b100.cc
Next startup system software: cfcard:/vrpv500r006c01b100.cc
Startup saved-configuration file: cfcard:/vrpcfg.zip
Next startup saved-configuration file: cfcard:/vrpcfg-hgmp.zip
Startup paf file: cfcard:/paf.txt
Next startup paf file: cfcard:/paf.txt
Startup license file: cfcard:/license.txt
Next startup license file: cfcard:/license.txt
Startup patch package: NULL
Next startup patch package: NULL
SlaveBoard:
Configured startup system software: cfcard:/vrpv500r006c01b100.cc
Startup system software: cfcard:/vrpv500r006c01b100.cc
Next startup system software: cfcard:/vrpv500r006c01b100.cc
Startup saved-configuration file: cfcard:/vrpcfg.zip
Next startup saved-configuration file: cfcard:/vrpcfg-hgmp.zip
Startup paf file: cfcard:/paf.txt
Next startup paf file: cfcard:/paf.txt
Startup license file: cfcard:/license.txt
Next startup license file: cfcard:/license.txt
Startup patch package: NULL
Next startup patch package: NULL
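To check the distribution result programmatically, the relevant field can be extracted from the display startup text; a small parsing sketch (the function name and parsing approach are ours):

```python
def next_startup_config(display_startup_output: str) -> str:
    """Return the next-startup saved-configuration file from
    'display startup' output (parsing sketch)."""
    key = "Next startup saved-configuration file:"
    for line in display_startup_output.splitlines():
        s = line.strip()
        if s.startswith(key):
            return s[len(key):].strip()
    return ""

sample = """MainBoard:
  Startup saved-configuration file: cfcard:/vrpcfg.zip
  Next startup saved-configuration file: cfcard:/vrpcfg-hgmp.zip
"""
print(next_startup_config(sample))  # cfcard:/vrpcfg-hgmp.zip
```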
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
cluster-ftp-nat enable
ftp-server 2.0.0.1
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
l Configuration file of Member-1.
#
sysname Member-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
l Configuration file of Member-2.
#
sysname Member-2
#
FTP server enable
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 5
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
Networking Requirements
As shown in Figure 4-9, all the Layer 2 switches belong to the same cluster. Administrator-1 is
the administrator switch of the cluster and other switches are member switches. The member ID
of Member-2 is 2 and the member ID of Member-3 is 3.
The member switches Member-2 and Member-3 need to be restarted.
Figure 4-9 Networking diagram of configuring the batch restart function for an HGMP cluster
The figure shows Administrator-1 (10.0.0.1/8) connected through GE1/0/3 to the IP/MPLS core (next hop 1.0.0.2/8), which hosts the FTP server (2.0.0.1/8), the SFTP server (2.0.0.2/8), the NM station (3.0.0.1/8), and the log station (4.0.0.1/8). Inside the cluster, Member-1, Member-2, and Member-3 (10.0.0.4/8) connect to Administrator-1 through GE1/0/1 and GE1/0/2.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster according to the steps described in 4.5.1 Example for Configuring Basic
HGMP Functions for a Cluster.
2. Configure batch restart on the administrator switch.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
<HUAWEI> system-view
[HUAWEI] sysname Member-3
[Member-3] vlan 10
[Member-3-vlan10] quit
[Member-3] interface gigabitethernet 1/0/1
[Member-3-GigabitEthernet1/0/1] undo shutdown
[Member-3-GigabitEthernet1/0/1] portswitch
[Member-3-GigabitEthernet1/0/1] port default vlan 10
[Member-3-GigabitEthernet1/0/1] quit
[Member-3] interface vlanif 10
[Member-3-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the Enable state, that the Device Name field shows the host name of the neighboring node, and that the Port Name field shows the interface on the neighboring node that connects to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 0, Packets Received: 11, Packets Error: 0
Neighbor 1: Aging Time: 2(s)
MAC Address : 0002-0002-0002
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-1
Port Duplex : FULL
Product Ver : NE40E
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 6, Packets Received: 16, Packets Error: 0
Neighbor 1: Aging Time: 5(s)
MAC Address : 0003-0003-0003
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-2
Port Duplex : FULL
Product Ver : NE40E
# On each device, enable NTDP in the system view and in the interface view, and set the interval and range for NTDP to collect topologies to 10 minutes and 3 hops respectively.
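# The commands for this step are not printed in this example. The following sketch is inferred from the configuration file at the end of this example (ntdp enable, ntdp timer 10, and ntdp hop 3 in the system view, and ntdp enable on each interface); verify the exact syntax for your software version.
[Administrator-1] ntdp enable
[Administrator-1] ntdp timer 10
[Administrator-1] ntdp hop 3
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] ntdp enable
[Administrator-1-GigabitEthernet1/0/1] quit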
After the previous configuration, check the global NTDP configuration on the administrator switch. You can find that the interval and range for NTDP to collect topologies are 10 minutes and 3 hops respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After topology collection is triggered manually on the administrator switch, check the device information collected through NTDP. You can find the MAC addresses and types of the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member switches in the cluster on the administrator switch. You can find that all the member switches have been added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP
Sending Frames' Format is PKTFMT_ETHNT_2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
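# The commands that configure the public log host, SNMP host, FTP server, and SFTP server are not printed in this example. The following sketch is inferred from the parallel examples in this section; verify the addresses and syntax for your deployment.
[HUAWEI_0.Administrator-1] cluster
[HUAWEI_0.Administrator-1-cluster] cluster-ftp-nat enable
[HUAWEI_0.Administrator-1-cluster] ftp-server 2.0.0.1
[HUAWEI_0.Administrator-1-cluster] sftp-server 2.0.0.2
[HUAWEI_0.Administrator-1-cluster] logging-host 4.0.0.1
[HUAWEI_0.Administrator-1-cluster] snmp-host 3.0.0.1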
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
Networking Requirements
As shown in Figure 4-10, all the Layer 2 switches belong to the same cluster. Administrator-1
is the administrator switch of the cluster and other switches are member switches. The member
ID of Member-2 is 2 and the member ID of Member-3 is 3.
To configure VLAN 100 to VLAN 200 on Member-2 and Member-3 and a static route with the administrator switch as the next hop, you can use the incremental configuration function of the HGMP cluster.
Figure 4-10 Networking diagram of configuring the incremental configuration function for an
HGMP cluster
[Figure: Administrator-1 (10.0.0.1/8) connects through the IP/MPLS core to the FTP server (2.0.0.1/8), the SFTP server (2.0.0.2/8), the NM station (3.0.0.1/8), and the log station (4.0.0.1/8); its GE interfaces connect to Member-1, Member-2, and Member-3 (10.0.0.4/8), which form the cluster]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster according to the steps described in 4.5.1 Example for Configuring Basic
HGMP Functions for a Cluster.
2. Edit the list of incremental configuration commands on the administrator switch.
3. Deliver the list of incremental configuration commands to the specified member switch.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the Enable state, that the Device Name field displays the host name of the neighbor, and that the Port Name field displays the neighbor interface connected to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 0, Packets Received: 11, Packets Error: 0
Neighbor 1: Aging Time: 2(s)
MAC Address : 0002-0002-0002
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-1
Port Duplex : FULL
Product Ver : NE40E
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 6, Packets Received: 16, Packets Error: 0
Neighbor 1: Aging Time: 5(s)
MAC Address : 0003-0003-0003
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-2
Port Duplex : FULL
Product Ver : NE40E
After the previous configuration, check the global NTDP configuration on the administrator switch. You can find that the interval and range for NTDP to collect topologies are 10 minutes and 3 hops respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After topology collection is triggered manually on the administrator switch, check the device information collected through NTDP. You can find the MAC addresses and types of the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member switches in the cluster on the administrator switch. You can find that all the member switches have been added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
# To ensure normal communication between member switches in the cluster and devices outside the cluster, assign an IP address to VLANIF 10 on the administrator switch.
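# The command for this step is not printed in this example. The following sketch is inferred from the configuration file at the end of this example (ip address 1.0.0.1 255.0.0.0 under interface Vlanif10); verify the address plan for your deployment.
[HUAWEI_0.Administrator-1] interface vlanif 10
[HUAWEI_0.Administrator-1-Vlanif10] ip address 1.0.0.1 255.0.0.0
[HUAWEI_0.Administrator-1-Vlanif10] quit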
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP
Sending Frames' Format is PKTFMT_ETHNT_2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
# This operation ensures that a reachable route exists between switches in the cluster and the servers or hosts. A dynamic route can also be used instead.
[HUAWEI_0.Administrator-1] ip route-static 0.0.0.0 0 1.0.0.2
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
Step 10 Edit the list of incremental configuration commands on the administrator switch.
[HUAWEI_0.Administrator-1] cluster
[HUAWEI_0.Administrator-1-cluster] increment
[HUAWEI_0.Administrator-1-cluster-increment] increment-command command-number 10
command-text vlan batch 100 to 200
[HUAWEI_0.Administrator-1-cluster-increment] increment-command command-number 20
command-text ip route-static 2.0.0.0 8 10.0.0.1
After the previous configuration, run the display increment-command command on the
administrator switch to check the list of incremental configuration commands.
[HUAWEI_0.Administrator-1] display increment-command
The content of increment commands:
------------------------------------------------------------------------------
SN Content
------------------------------------------------------------------------------
10 vlan batch 100 to 200
20 ip route-static 2.0.0.0 8 10.0.0.1
Step 11 Deliver the list of incremental configuration commands to the specified member switch.
[HUAWEI_0.Administrator-1-cluster-increment] increment-run group-by member-number
2 to 3
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
cluster-ftp-nat enable
ftp-server 2.0.0.1
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
l Configuration file of Member-1.
#
sysname Member-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
Networking Requirements
As shown in Figure 4-11, all the Layer 2 switches belong to the same cluster. Administrator-1
is the administrator switch of the cluster and other switches are member switches. The member
ID of Member-2 is 2 and the member ID of Member-3 is 3.
To synchronize the configuration files of all member switches to the FTP server as required, you
can configure the configuration synchronization function for the HGMP cluster.
Figure 4-11 Networking diagram of configuring the configuration synchronization function for
an HGMP cluster
[Figure: Administrator-1 (10.0.0.1/8) connects through the IP/MPLS core to the FTP server (2.0.0.1/8), the SFTP server (2.0.0.2/8), the NM station (3.0.0.1/8), and the log station (4.0.0.1/8); its GE interfaces connect to Member-1, Member-2, and Member-3 (10.0.0.4/8), which form the cluster]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster according to the steps described in 4.5.1 Example for Configuring Basic
HGMP Functions for a Cluster.
2. Configure the interconnection of FTP servers and devices in and out of the HGMP cluster in NAT or non-NAT mode. The following takes the configuration in NAT mode as an example. If you do not need to synchronize the configuration files of the HGMP cluster to an FTP server outside the cluster, you can skip this step.
3. Run the configuration synchronization command on the administrator switch.
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is, 10
l IP address of VLANIF 10, that is, 1.0.0.1/8, and a reachable route between VLANIF 10 and the FTP server
l Address pool of the cluster, that is, 10.0.0.0/8
l IP address of the administrator switch used in the cluster, that is, 10.0.0.1/8
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the Enable state, that the Device Name field displays the host name of the neighbor, and that the Port Name field displays the neighbor interface connected to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 0, Packets Received: 11, Packets Error: 0
Neighbor 1: Aging Time: 2(s)
MAC Address : 0002-0002-0002
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-1
Port Duplex : FULL
Product Ver : NE40E
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 6, Packets Received: 16, Packets Error: 0
Neighbor 1: Aging Time: 5(s)
MAC Address : 0003-0003-0003
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-2
Port Duplex : FULL
Product Ver : NE40E
# On each device, enable NTDP in the system view and in the interface view, and set the interval and range for NTDP to collect topologies to 10 minutes and 3 hops respectively.
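# The commands for this step are not printed in this example. The following sketch is inferred from the configuration file at the end of this example (ntdp enable, ntdp timer 10, and ntdp hop 3 in the system view, and ntdp enable on each interface); verify the exact syntax for your software version.
[Administrator-1] ntdp enable
[Administrator-1] ntdp timer 10
[Administrator-1] ntdp hop 3
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] ntdp enable
[Administrator-1-GigabitEthernet1/0/1] quit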
After the previous configuration, check the global NTDP configuration on the administrator switch. You can find that the interval and range for NTDP to collect topologies are 10 minutes and 3 hops respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After topology collection is triggered manually on the administrator switch, check the device information collected through NTDP. You can find the MAC addresses and types of the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member switches in the cluster on the administrator switch. You can find that all the member switches have been added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP
Sending Frames' Format is PKTFMT_ETHNT_2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
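# The commands that configure the public log host, SNMP host, FTP server, and SFTP server are not printed in this example. The following sketch is inferred from the configuration file at the end of this example; verify the addresses for your deployment.
[HUAWEI_0.Administrator-1] cluster
[HUAWEI_0.Administrator-1-cluster] cluster-ftp-nat enable
[HUAWEI_0.Administrator-1-cluster] ftp-server 2.0.0.1
[HUAWEI_0.Administrator-1-cluster] sftp-server 2.0.0.2
[HUAWEI_0.Administrator-1-cluster] logging-host 4.0.0.1
[HUAWEI_0.Administrator-1-cluster] snmp-host 3.0.0.1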
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
# Run the configuration synchronization command on the administrator switch, and then member
switches synchronize configuration files to the FTP server (2.0.0.1) in NAT mode.
[HUAWEI_0.Administrator-1] cluster
[HUAWEI_0.Administrator-1-cluster] cluster-plug-play ip 2.0.0.1 username hgmp
password hgmp
[HUAWEI_0.Administrator-1-cluster] increment-config synchronization
On the FTP server, you can see that the names of the configuration files are the MAC addresses of the member switches, which indicates that the configuration synchronization is successful.
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
cluster-ftp-nat enable
ftp-server 2.0.0.1
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
Networking Requirements
As shown in Figure 4-12, all the Layer 2 switches belong to the same cluster. Administrator-1
is the administrator switch of the cluster and other switches are member switches. The member
ID of Member-2 is 2 and the member ID of Member-3 is 3.
NDP and NTDP need to be disabled on the member switch interfaces that do not require them. To perform this action and improve the security of the cluster, you can configure security features for the HGMP cluster.
NOTE
After NDP or NTDP is disabled on unrelated interfaces of member switches, new candidate switches connected to these interfaces cannot join the cluster until NDP or NTDP is enabled again.
Figure 4-12 Networking diagram of configuring security features for an HGMP cluster
[Figure: Administrator-1 (10.0.0.1/8) connects through the IP/MPLS core to the FTP server (2.0.0.1/8), the SFTP server (2.0.0.2/8), the NM station (3.0.0.1/8), and the log station (4.0.0.1/8); its GE interfaces connect to Member-1, Member-2, and Member-3 (10.0.0.4/8), which form the cluster]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a cluster according to the steps described in 4.5.1 Example for Configuring Basic
HGMP Functions for a Cluster.
2. On the administrator switch, disable NDP and NTDP on unrelated interfaces of member
switches.
Data Preparation
To complete the configuration, you need the following data:
l Management VLAN ID of the cluster, that is, 10
l IP address of VLANIF 10, that is, 1.0.0.1/8, and a reachable route between VLANIF 10 and the FTP server
l Address pool of the cluster, that is, 10.0.0.0/8
l IP address of the administrator switch used in the cluster, that is, 10.0.0.1/8
Procedure
Step 1 Configure a management VLAN.
# Create VLAN 10 on the device and add interfaces of the administrator switch and member
switches to VLAN 10.
# Configure the administrator switch.
<HUAWEI> system-view
[HUAWEI] sysname Administrator-1
[Administrator-1] vlan 10
[Administrator-1-vlan10] quit
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] undo shutdown
[Administrator-1-GigabitEthernet1/0/1] portswitch
[Administrator-1-GigabitEthernet1/0/1] port default vlan 10
[Administrator-1-GigabitEthernet1/0/1] quit
[Administrator-1] interface gigabitethernet 1/0/2
[Administrator-1-GigabitEthernet1/0/2] undo shutdown
[Administrator-1-GigabitEthernet1/0/2] portswitch
[Administrator-1-GigabitEthernet1/0/2] port default vlan 10
[Administrator-1-GigabitEthernet1/0/2] quit
[Administrator-1] interface gigabitethernet 1/0/3
[Administrator-1-GigabitEthernet1/0/3] undo shutdown
[Administrator-1-GigabitEthernet1/0/3] portswitch
[Administrator-1-GigabitEthernet1/0/3] port default vlan 10
[Administrator-1-GigabitEthernet1/0/3] quit
[Administrator-1] interface vlanif 10
[Administrator-1-Vlanif10] quit
After the previous configuration, you can find that NDP on the administrator switch is in the Enable state, that the Device Name field displays the host name of the neighbor, and that the Port Name field displays the neighbor interface connected to the local interface.
[Administrator-1] display ndp interface gigabitethernet 1/0/1 gigabitethernet 1/0/2
Interface: GigabitEthernet1/0/1
Status: Enabled, Packets Sent: 0, Packets Received: 11, Packets Error: 0
Neighbor 1: Aging Time: 2(s)
MAC Address : 0002-0002-0002
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-1
Port Duplex : FULL
Product Ver : NE40E
Interface: GigabitEthernet1/0/2
Status: Enabled, Packets Sent: 6, Packets Received: 16, Packets Error: 0
Neighbor 1: Aging Time: 5(s)
MAC Address : 0003-0003-0003
Port Name : GigabitEthernet1/0/1
Software Version: NE40E Version V600R003C00
Device Name : Member-2
Port Duplex : FULL
Product Ver : NE40E
# On each device, enable NTDP in the system view and in the interface view, and set the interval and range for NTDP to collect topologies to 10 minutes and 3 hops respectively.
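# The commands for this step are not printed in this example. The following sketch is inferred from the configuration file at the end of this example (ntdp enable, ntdp timer 10, and ntdp hop 3 in the system view, and ntdp enable on each interface); verify the exact syntax for your software version.
[Administrator-1] ntdp enable
[Administrator-1] ntdp timer 10
[Administrator-1] ntdp hop 3
[Administrator-1] interface gigabitethernet 1/0/1
[Administrator-1-GigabitEthernet1/0/1] ntdp enable
[Administrator-1-GigabitEthernet1/0/1] quit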
After the previous configuration, check the global NTDP configuration on the administrator switch. You can find that the interval and range for NTDP to collect topologies are 10 minutes and 3 hops respectively.
[Administrator-1] display ntdp
Network topology discovery protocol is enabled
Hops : 3
Timer : 10 min
Hop Delay : 200 ms
Port Delay: 20 ms
Total time for last collection:0ms
Step 4 Enable the cluster function and set the management VLAN.
# Configure the administrator switch.
[Administrator-1] cluster enable
[Administrator-1] cluster
[Administrator-1-cluster] mngvlanid 10
[Administrator-1-cluster] quit
After topology collection is triggered manually on the administrator switch, check the device information collected through NTDP. You can find the MAC addresses and types of the related devices.
<Administrator-1> ntdp explore
<Administrator-1> display ntdp device-list
The device-list of NTDP:
------------------------------------------------------------------------------
MAC HOP IP PLATFORM
------------------------------------------------------------------------------
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
0001-0001-0001 0 NE40E
# On the administrator switch, set the range of IP addresses that can be assigned to the cluster
to 10.0.0.0/8, in which the IP address assigned to the administrator switch is 10.0.0.1/8.
[Administrator-1] cluster
[Administrator-1-cluster] ip-pool 10.0.0.1 8
After the previous configuration, check information about the cluster to which the device
belongs. You can find that the device name is changed, the cluster name is HUAWEI, and the
management VLAN ID is 10.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
On the administrator switch, check information about candidate switches. You can find all the candidate switches and their types.
[HUAWEI_0.Administrator-1-cluster] display cluster candidates
MAC HOP IP PLATFORM
0004-0004-0004 2 NE40E
0003-0003-0003 1 NE40E
0002-0002-0002 1 NE40E
After the previous configuration, check information about the administrator switch and member switches in the cluster on the administrator switch. You can find that all the member switches have been added to the cluster and are in the Up state.
[HUAWEI_0.Administrator-1-cluster] display cluster members
The list of cluster member:
------------------------------------------------------------------------------
SN Device Type MAC Address Status Device Name
------------------------------------------------------------------------------
0 NE40E 0001-0001-0001 Admin HUAWEI_0.Administrator-1
1 NE40E 0002-0002-0002 Up HUAWEI_1.Member-1
2 NE40E 0003-0003-0003 Up HUAWEI_2.Member-2
3 NE40E 0004-0004-0004 Up HUAWEI_3.Member-3
After the previous configuration, you can find that the interface on the administrator switch is
in the Up state.
[HUAWEI_0.Administrator-1] display interface Vlanif 10
Vlanif10 current state : UP
Line protocol current state : UP
Last line protocol up time : 2010-06-28 21:25:52
Description:HUAWEI, HUAWEI Series, Vlanif10 Interface
Route Port,The Maximum Transmit Unit is 1500
Internet Address is 1.0.0.1/8
Internet Address is 10.0.0.1/8 ClusterIP
Sending Frames' Format is PKTFMT_ETHNT_2, Hardware address is 0001-0001-0001
Physical is VLANIF
Current system time: 2010-07-01 14:37:11-08:00
Last 300 seconds input rate 0 bits/sec, 0 packets/sec
Last 300 seconds output rate 0 bits/sec, 0 packets/sec
Last 0 seconds input rate 0 bits/sec, 0 packets/sec
Last 0 seconds output rate 0 bits/sec, 0 packets/sec
Input: 0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts
Output:0 packets,0 bytes,
0 unicast,0 broadcast,0 multicasts.
[HUAWEI_0.Administrator-1] cluster
[HUAWEI_0.Administrator-1-cluster] cluster-ftp-nat enable
[HUAWEI_0.Administrator-1-cluster] ftp-server 2.0.0.1
After the previous configuration, check information about the cluster to which the administrator
switch belongs. You can find that the public log host, SNMP host, FTP server, and SFTP server
are configured successfully.
[HUAWEI_0.Administrator-1-cluster] display cluster
Cluster name:"HUAWEI"
Role:Administrator switch
management vlan id : 10
Cluster multicast MAC address : 0180-c200-000a(default)
Cluster auto-join : disabled
----End
Configuration Files
l Configuration file of Administrator-1.
#
sysname Administrator-1
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
ip address 1.0.0.1 255.0.0.0
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
ip-pool 10.0.0.1 255.0.0.0
build HUAWEI
cluster-ftp-nat enable
ftp-server 2.0.0.1
sftp-server 2.0.0.2
logging-host 4.0.0.1
snmp-host 3.0.0.1
#
ip route-static 0.0.0.0 0.0.0.0 1.0.0.2
#
return
l Configuration file of Member-1.
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
l Configuration file of Member-2.
#
sysname Member-2
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
l Configuration file of Member-3.
#
sysname Member-3
#
vlan batch 10
#
cluster enable
ntdp enable
ntdp hop 3
ntdp timer 10
ndp enable
#
interface Vlanif10
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port default vlan 10
ntdp enable
ndp enable
#
cluster
mngvlanid 10
administrator-address 0001-0001-0001 name HUAWEI
#
return
5 NTP Configuration
This chapter describes how to configure Network Time Protocol (NTP) to make clocks of the
devices on the network identical.
Network Time Protocol (NTP) synchronizes clocks of all devices in a network. It keeps all the
clocks of these devices consistent, and enables devices to implement various applications based
on the uniform time.
Any local system running NTP can be synchronized by other clock sources and can, in turn,
serve as a clock source to synchronize other devices. In addition, two systems can synchronize
each other by exchanging NTP packets.
NTP packets are encapsulated in UDP packets for transmission and the port used by the NTP
protocol is 123.
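Because an NTP message is a fixed 48-byte payload carried over UDP, a minimal client request can be sketched in a few lines of Python. This is an illustration of the packet layout only, not the router's implementation, and the server name shown in the comment is a placeholder:

```python
import struct

def build_sntp_request(version=3):
    """Build a minimal 48-byte SNTP client request.

    The first byte packs LI (leap indicator), VN (version number),
    and Mode; mode 3 means "client". All remaining fields may stay
    zero in a bare request.
    """
    li, mode = 0, 3
    first_byte = (li << 6) | (version << 3) | mode
    return struct.pack("!B47x", first_byte)  # 1 header byte + 47 zero bytes

packet = build_sntp_request()
# The packet would then be sent over UDP to port 123, for example:
#   sock.sendto(packet, ("ntp.example.com", 123))
```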
NTP Application
NTP is applied to situations where the clocks of all hosts or routers in a network need to be
consistent.
When all the devices on a network need to be synchronized, it is impractical for an
administrator to set each system clock manually on the command line: the workload is heavy
and clock accuracy cannot be ensured. NTP can quickly synchronize the clocks of network
devices and ensure their precision. NTP provides the following features:
l Defines clock accuracy by means of stratum to synchronize the time of network devices in
a short time
l Supports access control and MD5 authentication
l Transmits packets in unicast, manycast, or broadcast mode
Principles of NTP
Figure 5-1 shows the principles of NTP. Router A and Router B are connected through a WAN.
They both have their own system clocks. NTP implements automatic synchronization of their
clocks.
Suppose:
l Before the system clocks of Router A and Router B are synchronized, the clock of Router
A is set to 10:00:00 am and the clock of Router B is set to 11:00:00 am.
l Router B functions as an NTP time server. That is, Router A synchronizes its clock with
that of Router B.
l One-way transmission of data packets between Router A and Router B takes one second.
l Processing of data packets on the Router A or the Router B takes one second.
1. Router A sends an NTP packet to Router B. The packet carries the originating timestamp
when it leaves Router A, which is 10:00:00 am (T1).
2. When the NTP packet reaches Router B, Router B adds its receiving timestamp to the NTP
packet, which is 11:00:01 am (T2).
3. When the NTP packet leaves Router B, Router B adds its transmitting timestamp to the
NTP packet, which is 11:00:02 am (T3).
4. When Router A receives the response packet, it adds a new receiving timestamp to it, which
is 10:00:03 am (T4).
Router A uses the received information to calculate the following two important values:
l Delay for the NTP message cycle: Delay = (T4 - T1) - (T3 - T2).
l Offset of Router A relative to Router B: Offset = ((T2 - T1) + (T3 - T4))/2.
According to the delay and the offset, Router A sets its own clock again to synchronize
with the clock of Router B.
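Plugging the timestamps of this example into the two formulas can be verified with a short Python sketch (timestamps expressed as seconds since midnight):

```python
def ntp_delay_offset(t1, t2, t3, t4):
    """Compute round-trip delay and clock offset from the four
    NTP timestamps, all in seconds."""
    delay = (t4 - t1) - (t3 - t2)
    offset = ((t2 - t1) + (t3 - t4)) / 2
    return delay, offset

t1 = 10 * 3600          # 10:00:00 am, request leaves Router A
t2 = 11 * 3600 + 1      # 11:00:01 am, request reaches Router B
t3 = 11 * 3600 + 2      # 11:00:02 am, response leaves Router B
t4 = 10 * 3600 + 3      # 10:00:03 am, response reaches Router A

delay, offset = ntp_delay_offset(t1, t2, t3, t4)
# delay = 2 seconds of network transit, offset = 3600 seconds:
# Router A is exactly one hour behind Router B.
```

The one-hour offset matches the assumed clock settings, and the two-second delay is the sum of the one-second transit times in each direction.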
The preceding example is only a simple description of the NTP operating principle. As
described in RFC 1305, NTP uses a complex algorithm to ensure the precision of clock
synchronization.
The server and client are relative concepts. The device that provides the standard time is
referred to as the time server, and the device that obtains the time from it is referred to as the
client.
Peer Mode
In this mode, you need to configure NTP only on the symmetric active end. The symmetric active
end and symmetric passive end can be synchronized with each other.
Note that the clock at a lower stratum (a larger stratum number) is synchronized to the clock at a higher stratum (a smaller stratum number).
After the configurations, the following actions occur:
l The symmetric active end sends a synchronization request packet to the symmetric passive
end with the mode field being set to 1. The value 1 indicates the symmetric active mode.
l Upon receiving the request packet, the symmetric passive end automatically works in
symmetric passive mode and sends a response packet with the mode field being set to 2.
The value 2 indicates the symmetric passive mode.
Broadcast Mode
In this mode, you need to configure both the server and the client.
l The server periodically sends clock synchronization packets to the broadcast address
255.255.255.255.
l The client listens for broadcast packets from the server.
l After receiving the first broadcast packet, the client starts a temporary client/server
exchange with the remote server to estimate the network delay.
l The client then works in broadcast client mode and continues to listen for incoming
broadcast packets to synchronize the local clock.
Multicast Mode
In this mode, you need to configure both the server and the client.
l The server periodically sends clock synchronization packets to the configured multicast
address. By default, the multicast address is 224.0.1.1.
l The client listens for multicast packets from the server.
l After receiving the first multicast packet, the client starts a temporary client/server
exchange with the remote server to estimate the network delay.
l The client then works in multicast client mode and continues to listen for incoming
multicast packets to synchronize the local clock.
Manycast Mode
In this mode, you need to configure both the server and the client.
l The manycast client periodically sends clock synchronization packets to the manycast
server at the specified multicast address. By default, the multicast address is 224.0.1.1.
l The manycast server listens for manycast packets from the client and responds to the
client with a unicast packet.
l After the manycast client receives the first unicast packet, it creates an ephemeral
association with the server to exchange unicast packets and estimate the network delay.
l The server works in manycast server mode and continues to listen for incoming manycast
packets.
Applicable Environment
NTP has the following operation modes:
l Client/Server mode
l Peer mode
l Broadcast mode
l Multicast mode
l Manycast mode
In actual applications, a proper operation mode needs to be selected according to the networking
topology to meet various clock synchronization requirements.
For the unicast Client/Server mode and the peer mode, all the NTP packets sent locally can
carry the IP address of the same interface as the source IP address.
Pre-configuration Tasks
Before configuring basic functions of NTP, you need to complete the following tasks:
Data Preparation
To configure basic functions of NTP, you need the following data.
No. Data
3 NTP version
Context
If you want to configure a router to provide a primary NTP clock, do as follows on the router
functioning as the NTP server.
Procedure
Step 1 Run:
system-view
Step 2 Run:
ntp-service refclock-master [ ip-address ] [ stratum ]
ip-address is the IP address of the local reference clock. Its value is 127.127.t.u. Here, "t" ranges
from 0 to 37. Currently, "t" can be only 1, indicating the local reference clock. "u" indicates the
NTP process number, ranging from 0 to 3.
When no IP address is specified, the local clock whose IP address is 127.127.1.0 functions as
the primary NTP clock by default, with the stratum being 8.
----End
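The structure of the 127.127.t.u address described above can be illustrated with a small parser. This helper is purely illustrative and is not part of the device CLI:

```python
def parse_refclock_address(ip):
    """Split a 127.127.t.u local reference clock address into
    t (the clock type; only 1, the local clock, is supported here)
    and u (the NTP process number, 0 to 3)."""
    octets = [int(o) for o in ip.split(".")]
    if octets[:2] != [127, 127]:
        raise ValueError("not a reference clock address")
    t, u = octets[2], octets[3]
    if not 0 <= u <= 3:
        raise ValueError("process number out of range")
    return t, u

# parse_refclock_address("127.127.1.0") -> (1, 0),
# the default primary NTP clock address.
```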
Context
If you want to configure the time interval to update the client clock, do as follows on the
router functioning as a client:
Procedure
Step 1 Run:
system-view
----End
Context
Commonly, you specify the IP address of the NTP server on the client. The client and server
then exchange NTP packets using this IP address.
If a source interface for sending NTP packets is specified on the server, the server IP address
configured on the client must be the IP address of that source interface; otherwise, the client
cannot process the NTP packets sent from the server and clock synchronization fails.
Procedure
l Configuring the NTP Client
Do as follows on the router functioning as a client:
1. Run:
system-view
The local source interface that receives the NTP packet is configured.
3. Run:
– ntp-service unicast-server ip-address [ version number |
[ authentication-keyid key-id | autokey ] | source-interface interface-
type interface-number | preference | vpn-instance vpn-instance-name |
maxpoll max-number | minpoll min-number | burst | iburst | preempt ] *
ip-address or ipv6-address is the address of the NTP server. It can be the IPv4 or
IPv6 address of the host other than a broadcast address, a multicast address, or the IP
address of the reference clock.
NOTE
When the unicast NTP server is specified, the local router functions as the client automatically.
The server needs to be configured with only a primary clock.
l (Optional) Configuring the NTP Server
1. Run:
system-view
----End
Procedure
l Configuring the NTP Symmetric Active End
1. Run:
system-view
Step 2 is optional. If source-interface is specified in both Step 2 and Step 3, the
source interface specified in Step 3 takes precedence.
ip-address or ipv6-address is the address of the NTP peer. It can be the IPv4 or
IPv6 address of a host other than a broadcast address, a multicast address, or the IP
address of the reference clock.
NOTE
After the NTP peer is specified, the local router runs in symmetric active mode. The symmetric
passive end need not be configured.
l (Optional) Configuring the Source Interface of the NTP Symmetric Passive End
1. Run:
system-view
Commonly, you specify the IP address of the NTP peer on the symmetric active end. The
two ends can then exchange NTP packets using this IP address.
If a source interface for sending NTP packets is specified on the symmetric active end,
the peer IP address configured on the symmetric passive end must be the IP address of
that source interface; otherwise, the passive end cannot process the NTP packets sent
from the active end and clock synchronization fails.
----End
Procedure
l Configuring an NTP Broadcast Server
1. Run:
system-view
After the configurations, the local router periodically sends the clock synchronization
packets to the broadcast address 255.255.255.255.
NOTE
Broadcast mode can be used only in the same LAN.
l Configuring an NTP Broadcast Client
Do as follows on the router functioning as an NTP broadcast client:
1. Run:
system-view
Procedure
l Configuring an NTP Multicast Server
Do as follows on the router functioning as an NTP multicast server:
1. Run:
system-view
Context
Do as follows on the router that needs to be disabled from receiving NTP packets.
Procedure
Step 1 Run:
system-view
Step 3 Run:
quit
Step 4 Run:
quit
Step 5 Run:
l ntp-service in-interface disable
The interface on the router is disabled from receiving IPv4 NTP packets.
l ntp-service ipv6 in-interface disable
The interface on the router is disabled from receiving IPv6 NTP packets.
----End
Context
To prevent a device from synchronizing its clock with IPv4 or IPv6 external servers or peers,
you can disable the NTP IPv4 or IPv6 service on the device. You can also disable the service
if the device is not required to provide a reference clock source for IPv4 or IPv6 external
clients.
Procedure
Step 1 Run:
system-view
Step 2 Run:
l ntp-service disable
----End
Prerequisites
The configurations of the Basic NTP Functions are complete.
Procedure
l Run the display ntp-service status command to view the status of the NTP service.
l Run the display ntp-service sessions [ verbose ] command to view the status of NTP
sessions.
l Run the display ntp-service trace command to view brief information about each NTP
server on the path from the local device to the reference clock source.
----End
Example
Run the display ntp-service status command to view the status of the NTP service.
<HUAWEI> display ntp-service status
clock status: synchronized
clock stratum: 2
reference clock ID: LOCAL(0)
nominal frequency: 60.0002 Hz
actual frequency: 60.0002 Hz
clock precision: 2^18
clock offset: 0.0000 ms
root delay: 0.00 ms
root dispersion: 0.00 ms
peer dispersion: 10.00 ms
reference time: 15:51:36.259 UTC Apr 25 2010(C6179088.426490A3)
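The value in parentheses after the reference time is the NTP timestamp in hexadecimal: 32 bits of seconds elapsed since 1900-01-01 and 32 bits of binary fraction. Assuming the output follows this standard format, the conversion can be sketched as:

```python
from datetime import datetime, timedelta

NTP_EPOCH = datetime(1900, 1, 1)

def ntp_hex_to_utc(hex_ts):
    """Convert an NTP timestamp written as 'SSSSSSSS.FFFFFFFF' in
    hexadecimal to a UTC datetime. The integer part counts seconds
    since 1900-01-01; the fractional part is in units of 1/2^32 s."""
    sec_hex, _, frac_hex = hex_ts.partition(".")
    seconds = int(sec_hex, 16)
    fraction = int(frac_hex, 16) / 2**32 if frac_hex else 0.0
    return NTP_EPOCH + timedelta(seconds=seconds + fraction)

# 0x83AA7E80 seconds after 1900 is the Unix epoch:
# ntp_hex_to_utc("83AA7E80.00000000") -> 1970-01-01 00:00:00
```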
Run the display ntp-service sessions [ verbose ] command to view the status of NTP sessions.
<HUAWEI> display ntp-service sessions
source reference stra reach poll now offset delay
disper
********************************************************************************
[12345]127.127.1.0 LOCAL(0) 7 1 64 2 - 0.0
15.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured,6 vpn-
instance
Run the display ntp-service trace command to view brief information about each NTP server
on the path from the local device to the reference clock source.
<HUAWEI> display ntp-service trace
server 127.0.0.1,stratum 5, offset 0.024099, synch distance 0.06337
server 171.1.1.2,stratum 4, offset 0.028786, synch distance 0.04575
server 201.1.1.2,stratum 3, offset 0.035199, synch distance 0.03075
server 200.1.7.1,stratum 2, offset 0.039855, synch distance 0.01096
refid 127.127.1.0
Applicable Environment
NTP supports two security mechanisms: access authority and NTP authentication.
l Access authority
Access authority is a type of simple security method provided by the NE80E/40E to protect
local NTP services.
The NE80E/40E provides four access authority levels. When an NTP access request packet
reaches the local end, it is matched in an order from the minimum access authority to the
maximum access authority. The first matched authority level takes effect. The matching
order is as follows:
– peer: indicates the minimum access authority. The remote end can send the request of
the local time and the control query to the local end. The local clock can also be
synchronized with that of the remote server.
– server: indicates the remote end can perform the time request and control query to the
local end but the local clock cannot be synchronized with that of the remote end.
– synchronization: indicates that the remote end can perform only the time request to the
local end.
– query: indicates the maximum access authority. The remote end can perform only the
control query to the local end.
l NTP authentication
NTP authentication is required in some networks with high security demands.
The configuration of NTP authentication involves configuring NTP authentication on both
the client and the server.
During the configuration of NTP authentication, pay attention to the following rules:
– Configure NTP authentication on both the client and the server; otherwise, the
authentication does not take effect.
– If NTP authentication is enabled, a reliable key needs to be configured at the same time.
– The authentication key configured on the server and that on the client should be
consistent.
– In NTP peer mode, the symmetric active end equals the client, and the symmetric passive
end equals the server.
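Behind these rules is NTP's symmetric-key scheme: the authenticator appended to a packet is the MD5 digest of the shared key concatenated with the packet header, so it verifies only when both ends hold the same key. A sketch (the key values below are illustrative):

```python
import hashlib

def ntp_md5_digest(key, ntp_header):
    """Return the 16-byte MD5 authenticator for an NTP packet:
    the digest of the shared key concatenated with the header."""
    return hashlib.md5(key + ntp_header).digest()

shared_key = b"Hello"     # the same key must be configured on both ends
header = bytes(48)        # placeholder 48-byte NTP header

server_digest = ntp_md5_digest(shared_key, header)
client_digest = ntp_md5_digest(shared_key, header)
assert server_digest == client_digest        # same key: authentication succeeds

# A different key on one side yields a different digest, so
# authentication fails:
assert ntp_md5_digest(b"Wrong", header) != server_digest
```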
Pre-configuration Tasks
Before configuring NTP security mechanisms, complete the following tasks:
Data Preparation
To configure NTP security mechanisms, you need the following data.
No. Data
1 ACL rules
5 NTP version
Context
Do as follows on the router.
Procedure
Step 1 Run:
system-view
Step 2 Run:
ntp-service access { peer | query | server | synchronization |
limited } { acl-number | ipv6 acl6-number } *
Access authority for the NTP service on the local router is configured.
You can configure the ntp-service access command depending on the actual situations.
l NTP multicast mode: synchronizing the client with the server (NTP multicast client)
l NTP broadcast mode: synchronizing the client with the server (NTP broadcast client)
l NTP manycast client mode: synchronizing the client with the server (NTP manycast client)
----End
Context
An NTP client synchronizes with authenticated NTP servers to ensure that the time service
across the network is reliable. Authentication prevents NTP message data from being
tampered with by malicious network attacks.
Procedure
l Configuring NTP MD5 authentication
NOTE
l Configure the same authentication key on the server and the client, and declare the key to be
reliable; otherwise, NTP authentication fails.
l Enable NTP authentication before performing actual authentication.
1. Run:
system-view
l Ensure that the correct keys and certificate files are loaded on both the client and the server;
otherwise, autokey authentication fails.
l If a standby board is present, ensure that all the keys and certificate files present on the
master board are also present on the standby board in the same path (default: cfcard:);
otherwise, the autokey configuration may be lost.
1. Run:
system-view
Context
Do as follows on the router that functions as an NTP unicast client.
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router that functions as the symmetric active end.
Procedure
Step 1 Run:
system-view
Step 2 Run:
l ntp-service unicast-peer ip-address [ version number | [ authentication-keyid
key-id | autokey ] | source-interface interface-type interface-number |
preference | vpn-instance vpn-instance-name | maxpoll max-number | minpoll min-
number | preempt ] *
----End
Context
Do as follows on the router that functions as an NTP broadcast server.
Procedure
Step 1 Run:
system-view
The ID of the authentication key used by the NTP broadcast server is configured.
For configuring the broadcast client, see "Configuring the Broadcast Mode".
----End
Context
Do as follows on the router that functions as an NTP multicast server.
Procedure
Step 1 Run:
system-view
The authentication key ID used by the NTP multicast server is configured in IPv4 network.
l ntp-service multicast-server [ ipv6 [ ipv6-address ] ] [ [ authentication-keyid
key-id | autokey ] | ttl ttl-number ] *
The authentication key ID used by the NTP multicast server is configured in IPv6 network.
For configuring the multicast client, see "Configuring the Multicast Mode".
----End
Context
Do as follows on the router that functions as an NTP manycast client.
Procedure
Step 1 Run:
system-view
----End
Prerequisites
The configurations of the NTP Security Mechanisms are complete.
Procedure
l Run the display ntp-service status command to view the status of the NTP service.
l Run the display ntp-service sessions [ verbose ] command to view the status of NTP
sessions.
----End
Example
Run the display ntp-service status command to view the status of the NTP service.
<HUAWEI> display ntp-service status
clock status: synchronized
clock stratum: 2
Run the display ntp-service sessions [ verbose ] command to view the status of NTP sessions.
<HUAWEI> display ntp-service sessions
source reference stra reach poll now offset delay
disper
********************************************************************************
[12345]127.127.1.0 LOCAL(0) 7 1 64 2 - 0.0
15.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured,6 vpn-
instance
Applicable Environment
NTP has the following operation modes:
l Client/Server mode
l Peer mode
l Manycast mode
NOTE
Pre-configuration Tasks
Before configuring NTP KOD, you need to complete the following tasks:
Procedure
l Configuring a router to send KOD packets.
1. Run:
system-view
ntp-service kod-enable
4. Configure the minimum interval and average interval for the inter-packet spacing check.
ntp-service discard { min-interval min-interval-val | avg-interval avg-interval-val } *
5. Set the local clock to be the NTP master clock that provides the synchronizing time
for other devices.
ntp-service refclock-master [ ip-address ] [ stratum ]
----End
Procedure
Step 1 Run:
system-view
----End
Procedure
Step 1 Run:
system-view
----End
Prerequisites
The configurations of the NTP KOD are complete.
Procedure
l Run the display current-configuration command to check the configuration parameters
currently validated for KOD on the router.
----End
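The inter-packet spacing check that triggers KOD can be modelled as follows. This is a simplified sketch, not the device's actual algorithm:

```python
def kod_check(arrival_times, min_interval):
    """Return the arrival times (in seconds) that violate the
    minimum inter-packet spacing; a real server would answer such
    requests with a Kiss-o'-Death packet telling the client to
    back off."""
    violations = []
    for prev, cur in zip(arrival_times, arrival_times[1:]):
        if cur - prev < min_interval:
            violations.append(cur)
    return violations

# With a 2-second minimum interval, only the request at t = 10.5
# arrives too soon after its predecessor:
# kod_check([0, 5, 10, 10.5, 20], 2) -> [10.5]
```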
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
Networking Requirements
As shown in Figure 5-2,
l RouterA functions as a unicast NTP server. The clock on it functions as a primary NTP
clock with the stratum being 2.
l RouterB functions as a unicast NTP client. Its clock needs to be synchronized with the
clock on RouterA.
l RouterC and RouterD function as NTP clients of RouterB.
l Enable NTP authentication.
(Figure 5-2: GE 1/0/0, 10.0.0.2/24)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterA to be an NTP server and configure a primary clock on it.
2. Configure RouterB to be an NTP client and synchronize its clock with the clock of
RouterA.
3. Configure RouterC and RouterD to synchronize their clocks with the clock of RouterB.
4. Enable NTP authentication on all Routers.
NOTE
l You must enable NTP authentication on the client prior to specifying the IP address of the NTP server
and authentication key to be sent to the server; otherwise, NTP authentication is not performed before
clock synchronization.
l To implement authentication successfully, configure both the server and the client.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure the IP addresses based on Figure 5-2 so that RouterA, RouterB, RouterC and
RouterD are routable. The detailed procedures are not mentioned here.
Step 2 Configure a primary NTP clock on RouterA and enable NTP authentication.
# On RouterA, set its local clock as a primary NTP clock with stratum being 2.
<RouterA> system-view
[RouterA] ntp-service refclock-master 2
# Enable NTP authentication, configure the authentication key, and declare the key to be reliable.
[RouterA] ntp-service authentication enable
[RouterA] ntp-service authentication-keyid 42 authentication-mode md5 Hello
[RouterA] ntp-service reliable authentication-keyid 42
Note that authentication keys configured on the server and the client should be the same.
Step 3 Enable NTP authentication on RouterB and specify RouterA as its NTP server.
# On RouterB, enable NTP authentication. Configure the authentication key and declare the key
to be reliable.
<RouterB> system-view
[RouterB] ntp-service authentication enable
[RouterB] ntp-service authentication-keyid 42 authentication-mode md5 Hello
[RouterB] ntp-service reliable authentication-keyid 42
# Specify RouterA to be the NTP server of RouterB and use the authentication key.
[RouterB] ntp-service unicast-server 2.2.2.2 authentication-keyid 42
After the configurations, the clock on RouterC can be synchronized with the clock on RouterB.
View the NTP status on RouterC and find that the clock is synchronized. The stratum of the
clock is 4, one stratum lower than that on RouterB.
[RouterC] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.0.0.1
nominal frequency: 60.0002 Hz
actual frequency: 60.0002 Hz
clock precision: 2^18
clock offset: 3.8128 ms
root delay: 31.26 ms
root dispersion: 74.20 ms
peer dispersion: 34.30 ms
reference time: 11:55:56.833 UTC Mar 2 2006(C7B15BCC.D5604189)
View the NTP status on RouterD and find that the clock is synchronized. The stratum of the
clock is 4, one stratum lower than that on RouterB.
[RouterD] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.0.0.1
nominal frequency: 60.0002 Hz
actual frequency: 60.0002 Hz
clock precision: 2^18
clock offset: 3.8128 ms
root delay: 31.26 ms
root dispersion: 74.20 ms
peer dispersion: 34.30 ms
reference time: 11:55:56.833 UTC Mar 2 2006(C7B15BCC.D5604189)
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
Networking Requirements
As shown in Figure 5-3, three devices are located in a LAN.
l Configure the clock on RouterC to be a primary NTP clock with the stratum being 2.
l RouterD takes RouterC as its NTP server. That is, RouterD functions as the client.
l RouterE takes RouterD as its symmetric passive end. That is, RouterE is the symmetric
active end.
(Figure 5-3: RouterC GE 1/0/0 10.0.1.1/24; RouterE GE 1/0/0 10.0.1.3/24; RouterD GE 1/0/0 10.0.1.2/24)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure the clock on RouterC to be the NTP primary clock. The clock on RouterD should
be synchronized with the clock on RouterC.
2. Configure RouterE and RouterD to be NTP peers so that RouterE sends clock
synchronization requests to RouterD.
3. Finally, the clocks on RouterC, RouterD, and RouterE are synchronized.
Data Preparation
To complete the configuration, you need the following data:
l IP address of RouterC
l IP address of RouterD
l Stratum of the NTP primary clock
Procedure
Step 1 Configure IP addresses for RouterC, RouterD, and RouterE.
Configure an IP address for each interface based on Figure 5-3. After the configuration, the
three routers can ping each other successfully.
After configurations, the clock on RouterD can be synchronized to the clock on RouterC.
View the NTP status on RouterD and find that the clock is synchronized. The stratum of the
clock on RouterD is 3, one stratum lower than that on RouterC.
[RouterD] display ntp-service status
clock status: synchronized
clock stratum: 3
reference clock ID: 10.0.1.1
nominal frequency: 64.0029 Hz
actual frequency: 64.0029 Hz
clock precision: 2^7
clock offset: 0.0000 ms
root delay: 62.50 ms
root dispersion: 0.20 ms
peer dispersion: 7.81 ms
reference time: 06:52:33.465 UTC Mar 7 2006(C7B7AC31.773E89A8)
Since no primary clock is configured on RouterE, the clock on RouterE should be synchronized
to the clock on RouterD.
Step 4 Verify the configuration.
View the status of RouterE after clock synchronization and you can find that the status is
"synchronized". That is, clock synchronization completes. You can also find that the stratum of
the clock on RouterE is 4, one stratum lower than that on RouterD.
[RouterE] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.0.1.2
nominal frequency: 64.0029 Hz
actual frequency: 64.0029 Hz
clock precision: 2^7
clock offset: 0.0000 ms
root delay: 124.98 ms
root dispersion: 0.15 ms
peer dispersion: 10.96 ms
reference time: 06:55:50.784 UTC Mar 7 2006(C7B7ACF6.C8D002E2)
----End
Configuration Files
l Configuration file of RouterC
#
sysname RouterC
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.1.1 255.255.255.0
#
ntp-service refclock-master 2
#
return
Networking Requirements
As shown in Figure 5-4,
l RouterA, RouterB, and RouterC are in the same network segment;
l RouterA functions as the NTP broadcast server and its local clock is the NTP primary clock
with the stratum being 3. Broadcast packets are sent from GE 1/0/0.
l RouterB and RouterC listen for the broadcast packets on their respective GE 1/0/0 interfaces.
(Figure 5-4: RouterA GE 1/0/0 10.1.1.1/24; RouterC GE 1/0/0 10.1.1.3/24; RouterB on the same network segment)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterA as an NTP broadcast server.
2. Configure RouterB and RouterC as the NTP broadcast clients.
3. Configure NTP authentication on RouterA, RouterB, and RouterC.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of RouterA, RouterB, and RouterC
l Stratum of the NTP primary clock
l Authentication key and its ID
Procedure
Step 1 Configure an IP address for each Router.
Configure IP addresses based on Figure 5-4. The detailed procedures are not mentioned here.
Step 2 Configure an NTP broadcast server and enable NTP authentication on it.
# Set the local clock of RouterA as a primary clock with stratum being 3.
<RouterA> system-view
[RouterA] ntp-service refclock-master 3
# Configure RouterA to be an NTP broadcast server. Broadcast packets are encrypted by using
the authentication key ID 16 and then sent from GE 1/0/0.
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] ntp-service broadcast-server authentication-keyid 16
[RouterA-GigabitEthernet1/0/0] quit
# Configure RouterB to be the NTP broadcast client. RouterB senses the broadcast packets on
GE 1/0/0.
[RouterB] interface gigabitethernet 1/0/0
[RouterB-GigabitEthernet1/0/0] ntp-service broadcast-client
[RouterB-GigabitEthernet1/0/0] quit
# Configure RouterC to be the NTP broadcast client. RouterC senses the NTP broadcast packets
on GE 1/0/0.
[RouterC] interface gigabitethernet 1/0/0
[RouterC-GigabitEthernet1/0/0] ntp-service broadcast-client
[RouterC-GigabitEthernet1/0/0]quit
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
ntp-service broadcast-server authentication-keyid 16
#
ntp-service authentication enable
ntp-service authentication-keyid 16 authentication-mode md5 %@ENC;8HX
\#Q=^Q`MAF4<1!!
ntp-service reliable authentication-keyid 16
ntp-service refclock-master 3
#
return
Networking Requirements
As shown in Figure 5-5,
l RouterA, RouterB, and RouterC are in the same network segment;
l RouterA functions as an NTP multicast server and its local clock is a primary clock with
the stratum 2. Multicast packets are sent out from GE 1/0/0.
l RouterB and RouterC listen for the multicast packets on their respective GE 1/0/0 interfaces.
(Figure 5-5: RouterA GE 1/0/0 10.1.1.1/24; RouterC GE 1/0/0 10.1.1.3/24; RouterB on the same network segment)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterA as an NTP multicast server.
2. Configure RouterB and RouterC as NTP multicast clients.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of RouterA, RouterB, and RouterC
l Stratum of the NTP primary clock
Procedure
Step 1 Configure an IP address for each router
Configure IP addresses based on Figure 5-5. The detailed procedures are not mentioned here.
Step 2 Configure an NTP multicast server.
# Set the local clock on RouterA as an NTP primary clock with stratum 2.
<RouterA> system-view
[RouterA] ntp-service refclock-master 2
# Configure RouterA to be an NTP multicast server. NTP multicast packets are sent from GE
1/0/0.
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] ntp-service multicast-server
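# Configure RouterB and RouterC to be NTP multicast clients. The client-side commands are not shown in this procedure; the following sketch uses the ntp-service multicast-client command from this example's configuration files (GE 1/0/0 assumed from the networking requirements):
[RouterB] interface gigabitethernet 1/0/0
[RouterB-GigabitEthernet1/0/0] ntp-service multicast-client
[RouterB-GigabitEthernet1/0/0] quit
The same two commands are repeated on RouterC.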
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
ntp-service refclock-master 2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
ntp-service multicast-server
#
return
ntp-service multicast-client
#
return
Networking Requirements
As shown in Figure 5-6,
l RouterC and RouterD are in the same network segment; RouterA is in another network
segment; RouterF connects with the two network segments.
l RouterC functions as an NTP manycast server and its local clock is a primary clock with
the stratum 2. Manycast packets are sent out from GE1/0/0.
l RouterD and RouterA are manycast clients and send packets on their respective GE1/0/0.
[Figure 5-6 diagram: RouterA (GE1/0/0, 1.0.1.2/24) connects through RouterF (GE1/0/0, 1.0.1.11/24; GE2/0/0, 3.0.1.2/24) to the segment of RouterC and RouterD (GE1/0/0, 3.0.1.32/24)]
Configuration Notes
Ensure that the manycast client can reach the manycast server before synchronization. This can be checked by running the ping command on the console interface.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterC as an NTP manycast server.
2. Configure RouterA and RouterD as NTP manycast clients.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of RouterA, RouterC, RouterD, and RouterF
l Stratum of the NTP primary clock
Procedure
Step 1 Configure an IP address for each router.
Configure IP addresses based on Figure 5-6. The detailed procedures are not mentioned here.
Step 2 Configure an NTP manycast server.
# Set the local clock on RouterC as an NTP primary clock with stratum 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Configure RouterC to be an NTP manycast server. NTP manycast server sends NTP manycast
packets after receiving manycast client packets.
[RouterC] interface gigabitethernet 1/0/0
[RouterC-GigabitEthernet1/0/0] ntp-service manycast-server
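# Configure RouterA and RouterD to be NTP manycast clients. As a sketch based on the ntp-service manycast-client command in this example's configuration files (GE 1/0/0 assumed):
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] ntp-service manycast-client
[RouterA-GigabitEthernet1/0/0] quit
The same commands are repeated on RouterD.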
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.1.1 255.255.255.0
ntp-service manycast-client
#
return
[Figure 5-7 diagram fragment: GE 1/0/0, 10.0.0.2/24]
Configuration Notes
l Before configuring Autokey on the client and server, ensure that the key and certificate files already exist.
l In the Private scheme, the same key and certificate are installed on all routers.
l The authentication key must be declared reliable on both the client and the server. Also enable authentication on the client.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable Autokey, configure RouterA to be an NTP server, and configure a primary clock on it.
2. Enable Autokey, configure RouterB to be an NTP client, and synchronize its clock with the clock of RouterA.
3. Enable Autokey and configure RouterC and RouterD to synchronize their clocks with the clock of RouterB.
NOTE
You must enable NTP Autokey authentication on all routers before configuring the server.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the reference clock
l Stratum of the primary NTP clock
l Authentication Autokey key files and certificate
l Password to read the files
Procedure
Step 1 Configure the IP addresses based on Figure 5-7 so that RouterA, RouterB, RouterC and
RouterD are routable. The detailed procedures are not mentioned here.
Step 2 Configure a primary NTP clock on RouterA and enable NTP authentication.
# On RouterA, set its local clock as a primary NTP clock with stratum being 2.
<RouterA> system-view
[RouterA] ntp-service refclock-master 2
Note that the authentication keys configured on the server and the client must be the same.
The system reads the ntpkey_host_private, ntpkey_cert_private, and (optionally) ntpkey_sign_private files.
Step 3 Configure a primary NTP clock on RouterB and enable NTP authentication.
# On RouterB, enable NTP authentication. Configure the autokey.
<RouterB> system-view
# Specify RouterA to be the NTP server of RouterB and use the authentication key.
[RouterB] ntp-service unicast-server 2.2.2.2 autokey
After the configurations, the clock on RouterC can be synchronized with the clock on RouterB.
View the NTP status on RouterC and find that the clock is synchronized. The stratum of the
clock is 4, one stratum lower than that on RouterB.
[RouterC] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.0.0.1
nominal frequency: 60.0002 Hz
actual frequency: 60.0002 Hz
clock precision: 2^18
clock offset: 3.8128 ms
root delay: 31.26 ms
root dispersion: 74.20 ms
peer dispersion: 34.30 ms
reference time: 11:55:56.833 UTC Mar 2 2006(C7B15BCC.D5604189)
View the NTP status on RouterD and find that the clock is synchronized. The stratum of the
clock is 4, one stratum lower than that on RouterB.
[RouterD] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.0.0.1
nominal frequency: 60.0002 Hz
actual frequency: 60.0002 Hz
clock precision: 2^18
clock offset: 3.8128 ms
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 2.2.2.2 255.255.255.0
#
ospf 1
area 0.0.0.0
network 2.2.2.0 0.0.0.255
#
ntp-service authentication enable
ntp-service authentication auto-key hostname private password Hello
ntp-service refclock-master 2
#
return
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.0.2 255.255.255.0
#
ntp-service authentication enable
ntp-service authentication auto-key hostname private password Hello
ntp-service unicast-server 10.0.0.1 autokey
#
return
Figure 5-8 Networking diagram of the NTP Autokey with Trusted Scheme in peer mode
[Diagram: RouterC (GE1/0/0, 3.0.1.31/24) and RouterD (GE1/0/0, 3.0.1.32/24) on one segment; RouterA (GE1/0/0, 1.0.1.2/24) reaches them through RouterF (GE1/0/0, 1.0.1.11/24; GE2/0/0, 3.0.1.2/24)]
Configuration Notes
Before configuring the peer mode, ensure that the peer is reachable from the host.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable Autokey and configure the clock on RouterC to be the NTP primary clock. The clock on RouterD should be synchronized with the clock on RouterC.
2. Enable Autokey and configure RouterF and RouterD as NTP peers so that RouterF sends clock synchronization requests to RouterD.
3. Finally, the clocks on RouterC, RouterD, and RouterF can be synchronized.
Data Preparation
To complete the configuration, you need the following data:
l IP address of RouterC
l IP address of RouterD
l Stratum of the NTP primary clock
l Key and certificate files for all routers
l Password to read files
Procedure
Step 1 Configure IP addresses for RouterC, RouterD, and RouterF.
Configure an IP address for each interface based on Figure 5-8. After the configuration, the three routers can ping each other.
The detailed procedures are not mentioned here.
Step 2 Configure the NTP Client/Server mode.
# Configure the clock on RouterC to be its own reference clock with the stratum being 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Enable NTP authentication and configure Autokey. Ensure that the certificate ntpkey_cert_routerc is trusted.
<RouterC> system-view
<RouterC> ntp-service authentication enable
<RouterC> ntp-service authentication auto-key password Hello
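# On RouterD, the corresponding client-side Autokey configuration would be as follows. This is a sketch mirroring RouterD's configuration file in this example (same password assumed):
<RouterD> system-view
[RouterD] ntp-service authentication enable
[RouterD] ntp-service authentication auto-key password Hello
[RouterD] ntp-service unicast-server 10.1.1.2 autokey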
----End
Configuration Files
l Configuration file of RouterC
#
sysname RouterC
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
ntp-service authentication enable
ntp-service authentication auto-key password Hello
ntp-service refclock-master 2
#
return
sysname RouterD
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.3 255.255.255.0
#
ntp-service authentication enable
ntp-service authentication auto-key password Hello
ntp-service unicast-server 10.1.1.2 autokey
#
return
Networking Requirements
As shown in Figure 5-9,
l RouterC and RouterD are in the same network segment; RouterA is in another network
segment; RouterF connects with the two network segments.
l RouterC functions as the NTP broadcast server and its local clock is the NTP primary clock
with the stratum being 3. Broadcast packets are sent from GE 1/0/0.
l RouterD and RouterA sense the broadcast packets on their respective GE 1/0/0 interfaces.
l Enable NTP authentication.
[Figure 5-9 diagram: RouterC (GE1/0/0, 3.0.1.31/24) and RouterD (GE1/0/0, 3.0.1.32/24) on one segment; RouterA (GE1/0/0, 1.0.1.2/24) connects through RouterF (GE1/0/0, 1.0.1.11/24; GE2/0/0, 3.0.1.2/24)]
Configuration Notes
Before configuring keys on the client and server, ensure that Autokey is configured.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterC as an NTP broadcast server.
2. Configure RouterA and RouterD as the NTP broadcast clients.
3. Configure NTP autokey on RouterA, RouterC, and RouterD.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address for each Router.
Configure IP addresses based on Figure 5-9. The detailed procedures are not mentioned here.
Step 2 Configure an NTP broadcast server and enable NTP authentication on it.
# Set the local clock of RouterC as a primary clock with stratum being 3.
<RouterC> system-view
[RouterC] ntp-service refclock-master 3
# Enable NTP authentication and configure Autokey. Ensure that the certificate ntpkey_cert_routerc is trusted.
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication auto-key password Hello groupname RouterC
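# Configure RouterA and RouterD as broadcast clients in the same Autokey group. This is a sketch consistent with RouterA's configuration file in this example (interface numbering assumed):
[RouterA] ntp-service authentication enable
[RouterA] ntp-service authentication auto-key password Hello groupname RouterC
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] ntp-service broadcast-client
The same commands are repeated on RouterD.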
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
ospf 1
area 0.0.0.0
network 10.0.1.0 0.0.0.255
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.1.1 255.255.255.0
ntp-service broadcast-client
#
ntp-service authentication enable
ntp-service authentication auto-key password Hello groupname RouterC
#
return
Networking Requirements
As shown in Figure 5-10,
l RouterC and RouterD are in the same network segment; RouterA is in another network
segment; RouterF connects with the two network segments.
l RouterC functions as an NTP multicast server and its local clock is a primary clock with
the stratum 2. Multicast packets are sent out from GE 1/0/0.
l RouterD and RouterA sense the multicast packets on their respective GE 1/0/0 interfaces.
Figure 5-10 Networking diagram of the NTP Autokey with GQ scheme in multicast mode
[Diagram: RouterC (GE1/0/0, 3.0.1.31/24) and RouterD (GE1/0/0, 3.0.1.32/24) on one segment; RouterA (GE1/0/0, 1.0.1.2/24) connects through RouterF (GE1/0/0, 1.0.1.11/24; GE2/0/0, 3.0.1.2/24)]
Configuration Notes
Ensure that the client and server group addresses are the same.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterC as an NTP multicast server.
2. Configure RouterA and RouterD as NTP multicast clients.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address for each router.
Configure IP addresses based on Figure 5-10. The detailed procedures are not mentioned here.
Step 2 Configure an NTP multicast server.
# Set the local clock on RouterC as an NTP primary clock with stratum 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Enable NTP authentication and configure Autokey. Ensure that the certificate ntpkey_cert_routerc is trusted.
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication auto-key password Hello groupname RouterC
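# The interface-level multicast commands complete the setup. This is a sketch using the ntp-service multicast-server and ntp-service multicast-client commands shown elsewhere in this chapter (interfaces assumed from the networking requirements):
[RouterC] interface gigabitethernet 1/0/0
[RouterC-GigabitEthernet1/0/0] ntp-service multicast-server
On each client, for example RouterA:
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] ntp-service multicast-client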
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.1.1 255.255.255.0
ntp-service multicast-client
#
ntp-service authentication enable
ntp-service authentication auto-key password Hello groupname RouterC
#
return
#
ntp-service authentication enable
ntp-service authentication auto-key password Hello groupname RouterC
#
return
Networking Requirements
As shown in Figure 5-11,
l RouterC and RouterD are in the same network segment; RouterA is in another network
segment; RouterF connects with the two network segments.
l RouterC functions as an NTP manycast server and its local clock is a primary clock with
the stratum 2. Manycast packets are sent out from GE 1/0/0.
l RouterD and RouterA are manycast clients and send packets on their respective GE 1/0/0.
Figure 5-11 Networking diagram of the NTP Autokey with MV scheme in manycast mode
[Diagram: RouterC (GE1/0/0, 3.0.1.31/24) and RouterD (GE1/0/0, 3.0.1.32/24) on one segment; RouterA (GE1/0/0, 1.0.1.2/24) connects through RouterF (GE1/0/0, 1.0.1.11/24; GE2/0/0, 3.0.1.2/24)]
Configuration Notes
Ensure that the manycast client can reach the manycast server before synchronization. This can be checked by running the ping command on the console interface.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterC as an NTP manycast server.
2. Configure RouterA and RouterD as NTP manycast clients.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address for each router.
Configure IP addresses based on Figure 5-11. The detailed procedures are not mentioned here.
Step 2 Configure an NTP manycast server.
# Set the local clock on RouterC as an NTP primary clock with stratum 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Enable NTP authentication and configure Autokey. Ensure that the certificate ntpkey_cert_routerc is trusted.
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication auto-key password Hello groupname RouterC
# Configure RouterA to be an NTP manycast client. RouterA sends NTP manycast packets to
manycast server on GE 1/0/0.
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] ntp-service manycast-client autokey
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.1.1 255.255.255.0
ntp-service manycast-client autokey
#
ntp-service authentication enable
ntp-service authentication auto-key password Hello groupname RouterC
#
return
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.3 255.255.255.0
ntp-service manycast-client autokey
#
ntp-service authentication enable
ntp-service authentication auto-key password Hello groupname RouterC
#
return
[Figure 5-12 diagram fragment: GE 1/0/0, 10.0.0.2/24]
Configuration Notes
l Before configuring a key on the client and server, ensure that the key already exists.
l The authentication key must be declared reliable on both the client and the server. Also enable authentication on the client.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterA to be an NTP server and configure a primary clock on it.
2. Configure RouterB to be an NTP client and synchronize its clock with the clock of
RouterA.
3. Configure RouterC and RouterD to synchronize their clocks with the clock of RouterB.
4. Enable NTP authentication on all Routers.
NOTE
l You must enable NTP authentication on the client prior to specifying the IP address of the NTP server
and authentication key to be sent to the server; otherwise, NTP authentication is not performed before
clock synchronization.
l To implement authentication successfully, configure both the server and the client.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure the IP addresses based on Figure 5-12 so that RouterA, RouterB, RouterC and
RouterD are routable. The detailed procedures are not mentioned here.
Step 2 Configure a primary NTP clock on RouterA and enable NTP authentication.
# On RouterA, set its local clock as a primary NTP clock with stratum being 2.
<RouterA> system-view
[RouterA] ntp-service refclock-master 2
# Enable NTP authentication, configure the authentication key, and declare the key to be reliable.
[RouterA] ntp-service authentication enable
[RouterA] ntp-service authentication-keyid 42 authentication-mode md5 Hello
[RouterA] ntp-service reliable authentication-keyid 42
Note that authentication keys configured on the server and the client should be the same.
# Configure an ACL rule.
[RouterA] acl 2000
[RouterA] rule 2000 permit source 10.0.0.1 0
# Enable KOD.
[RouterA] ntp-service kod-enable
Step 3 Configure a primary NTP clock on RouterB and enable NTP authentication.
# On RouterB, enable NTP authentication. Configure the authentication key and declare the key
to be reliable.
<RouterB> system-view
[RouterB] ntp-service authentication enable
[RouterB] ntp-service authentication-keyid 42 authentication-mode md5 Hello
[RouterB] ntp-service reliable authentication-keyid 42
# Specify RouterA to be the NTP server of RouterB and use the authentication key.
[RouterB] ntp-service unicast-server 2.2.2.2 authentication-keyid 42
After the configurations, the clock on RouterC can be synchronized with the clock on RouterB.
View the NTP status on RouterC and find that the clock is synchronized. The stratum of the
clock is 4, one stratum lower than that on RouterB.
[RouterC] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.0.0.1
nominal frequency: 60.0002 Hz
actual frequency: 60.0002 Hz
clock precision: 2^18
clock offset: 3.8128 ms
root delay: 31.26 ms
root dispersion: 74.20 ms
peer dispersion: 34.30 ms
reference time: 11:55:56.833 UTC Mar 2 2006(C7B15BCC.D5604189)
View the NTP status on RouterD and find that the clock is synchronized. The stratum of the
clock is 4, one stratum lower than that on RouterB.
[RouterD] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.0.0.1
nominal frequency: 60.0002 Hz
actual frequency: 60.0002 Hz
clock precision: 2^18
clock offset: 3.8128 ms
root delay: 31.26 ms
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 2.2.2.2 255.255.255.0
#
ospf 1
area 0.0.0.0
network 2.2.2.0 0.0.0.255
#
ntp-service authentication enable
ntp-service authentication-keyid 42 authentication-mode md5 %@ENC;8HX
\#Q=^Q`MAF4<1!!
ntp-service reliable authentication-keyid 42
ntp-service refclock-master 2
acl 2000
rule 2000 permit source 10.0.0.1 0
ntp-service access limited 2000
ntp-service discard min-interval 4 avg-interval 4
ntp-service kod-enable
#
return
[Figure 5-13 diagram: RouterC and RouterD (GE1/0/0, 3.0.1.32/24) on one segment; RouterA (GE1/0/0, 1.0.1.2/24) connects through RouterF (GE1/0/0, 1.0.1.11/24; GE2/0/0, 3.0.1.2/24)]
Configuration Notes
Before configuring the peer mode, ensure that the peer is reachable from the host.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure the clock on RouterC to be the NTP primary clock. The clock on RouterD should be synchronized with the clock on RouterC.
2. Configure RouterF and RouterD as NTP peers so that RouterF sends clock synchronization requests to RouterD.
3. Finally, the clocks on RouterC, RouterD, and RouterF can be synchronized.
Data Preparation
To complete the configuration, you need the following data:
l IP address of RouterC
l IP address of RouterD
l Stratum of the NTP primary clock
Procedure
Step 1 Configure IP addresses for RouterC, RouterD, and RouterF.
Configure an IP address for each interface based on Figure 5-13. After the configuration, the three routers can ping each other.
The detailed procedures are not mentioned here.
Step 2 Configure the NTP Client/Server mode.
# Configure the clock on RouterC to be its own reference clock with the stratum being 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Enable KOD
[RouterD] ntp-service kod-enable
After configurations, the clock on RouterD can be synchronized to the clock on RouterC.
View the NTP status on RouterD and find that the clock is synchronized. The stratum of the
clock on RouterD is 3, one stratum lower than that on RouterC.
[RouterD] display ntp-service status
clock status: synchronized
clock stratum: 3
reference clock ID: 10.1.1.2
nominal frequency: 64.0029 Hz
actual frequency: 64.0029 Hz
clock precision: 2^7
clock offset: 0.0000 ms
root delay: 62.50 ms
root dispersion: 0.20 ms
peer dispersion: 7.81 ms
reference time: 06:52:33.465 UTC Mar 7 2006(C7B7AC31.773E89A8)
autokey crypto flags: 0x80021
Since no primary clock is configured on RouterF, the clock on RouterF should be synchronized
to the clock on RouterD.
Step 4 Verify the configuration.
View the status of RouterF after clock synchronization and you can find that the status is
"synchronized". That is, clock synchronization completes. You can also find that the stratum of
the clock on RouterF is 4, one stratum lower than that on RouterD.
[RouterF] display ntp-service status
clock status: synchronized
clock stratum: 4
reference clock ID: 10.1.1.3
nominal frequency: 64.0029 Hz
actual frequency: 64.0029 Hz
clock precision: 2^7
clock offset: 0.0000 ms
root delay: 124.98 ms
root dispersion: 0.15 ms
peer dispersion: 10.96 ms
reference time: 06:55:50.784 UTC Mar 7 2006(C7B7ACF6.C8D002E2)
autokey crypto flags: 0x80021
----End
Configuration Files
l Configuration file of RouterC
#
sysname RouterC
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
ntp-service refclock-master 2
#
return
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.3 255.255.255.0
#
acl 2000
rule 2000 permit source 128.1.1.4 0
ntp-service access limited 2000
ntp-service discard min-interval 4 avg-interval 4
ntp-service kod-enable
ntp-service unicast-server 10.1.1.2
#
return
Networking Requirements
As shown in Figure 5-14,
l RouterC and RouterD are in the same network segment; RouterA is in another network
segment; RouterF connects with the two network segments.
l RouterC functions as an NTP manycast server and its local clock is a primary clock with
the stratum 2. Manycast packets are sent out from GE 1/0/0.
l RouterD and RouterA are manycast clients and send packets on their respective GE 1/0/0.
[Figure 5-14 diagram: RouterC (GE1/0/0, 3.0.1.31/24) and RouterD (GE1/0/0, 3.0.1.32/24) on one segment; RouterA (GE1/0/0, 1.0.1.2/24) connects through RouterF (GE1/0/0, 1.0.1.11/24; GE2/0/0, 3.0.1.2/24)]
Configuration Notes
Ensure that the manycast client can reach the manycast server before synchronization. This can be checked by running the ping command on the console interface.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure RouterC as an NTP manycast server.
2. Configure RouterA and RouterD as NTP manycast clients.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of RouterA, RouterC, RouterD, and RouterF
l Stratum of the NTP primary clock
Procedure
Step 1 Configure an IP address for each router.
Configure IP addresses based on Figure 5-14. The detailed procedures are not mentioned here.
Step 2 Configure an NTP manycast server.
# Set the local clock on RouterC as an NTP primary clock with stratum 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Enable KOD.
[RouterC] ntp-service kod-enable
# Configure RouterC to be an NTP manycast server. NTP manycast server sends NTP manycast
packets after receiving manycast client packets.
[RouterC] interface gigabitethernet 1/0/0
[RouterC-GigabitEthernet1/0/0] ntp-service manycast-server
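# Configure RouterA and RouterD to be NTP manycast clients, as in the earlier manycast example. A sketch consistent with RouterA's configuration file:
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] ntp-service manycast-client
The same commands are repeated on RouterD.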
----End
Configuration Files
l Configuration file of RouterA
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.1.1 255.255.255.0
ntp-service manycast-client
#
return
6 1588v2 Configuration
By configuring IEEE 1588v2, you can enable devices in the IP RAN scenario to implement time
synchronization and clock synchronization.
6.1 Overview of 1588v2
IEEE 1588, defined by the Institute of Electrical and Electronics Engineers (IEEE), is the standard for the Precision Clock Synchronization Protocol for Networked Measurement and Control Systems (PTP). As a time synchronization protocol, 1588v2 implements high-precision time synchronization between devices. In addition, 1588v2 can be used to implement clock synchronization between devices.
6.2 Configuring 1588v2 on OC
An ordinary clock (OC) has only one 1588v2 clock interface (a clock interface enabled with
1588v2) through which the OC synchronizes with an upstream node or distributes time signals
to downstream nodes.
6.3 Configuring 1588v2 on BC
A boundary clock (BC) has multiple 1588v2 clock interfaces, one of which is used to synchronize
with an upstream node. The other interfaces are used to distribute time signals to downstream
nodes.
6.4 Configuring 1588v2 on TC
Unlike the BC and OC, a Transparent Clock (TC) does not need to be synchronized with other
clocks. A TC has multiple 1588v2 interfaces, among which 1588v2 messages are forwarded to
correct the message forwarding delay on each interface. The TC is not synchronized with other
clocks through any of these interfaces.
6.5 Configuring 1588v2 on TCandBC
A TCandBC can function as both a TC and a BC. It has several physical interfaces to
communicate with the 1588v2 network. Some interfaces are of the TC type and other interfaces
are of the BC type. The domain value of a BC interface must be the one configured in the system
view; the domain value of a TC interface must be configured in the interface view.
6.6 Configuring the 1588v2 Time Source
This section describes how to configure a 1588v2 clock source, including how to obtain a
standard synchronous time through a clock interface from a BITS device without using 1588v2
and how to use 1588v2 to advertise the standard synchronous time to downstream nodes through
the other two interfaces.
Definition of synchronization
On a modern communications network, the proper functioning of most telecommunications
services requires that the frequency offset or time difference between devices be kept in a
reasonable range. This is the network's requirement for clock synchronization. Network clock
synchronization consists of time synchronization and frequency synchronization.
l Frequency synchronization
Frequency synchronization, also called clock synchronization, refers to a strict relationship between signals based on a constant frequency offset or phase offset, so that signals are sent and received at the same average rate. In this manner, all devices in the communications network operate at the same rate, and the difference in phase between signals is a constant value.
l Time synchronization
Time synchronization, namely, phase synchronization, refers to consistency of both
frequencies and phases between signals. The phase offset between signals is always 0.
[Figure 6-1: phase (time) synchronization versus frequency synchronization, illustrated by Watch A and Watch B]
Figure 6-1 shows the difference between time synchronization and frequency synchronization. In time synchronization, Watch A and Watch B always keep the same time. In frequency synchronization, Watch A and Watch B keep different times, but the time difference between the two watches is a constant value, for example, six hours.
Phase synchronization is also called time synchronization; frequency synchronization is also
known as clock synchronization.
Background
With the evolution towards IP networks, devices on the wireless bearer network require highly accurate clock synchronization. To achieve clock synchronization between base stations in an IP RAN, clock frequencies between base stations must be kept within a certain precision; otherwise, calls may be dropped during handoffs. In certain wireless communications systems, phase synchronization is required in addition to frequency synchronization.
Table 6-1 shows different requirements for network clock synchronization.
Clock synchronization on base stations of different standards is implemented by various methods, such as physical clocks (for example, the building integrated timing supply (BITS) clock, WAN clock, or synchronous Ethernet clock) and clocks recovered by exchanging packets (for example, Circuit Emulation Service Adaptive Clock Recovery (CES ACR)/Data Clock Recovery (DCR), and the 1588v2 clock). Base stations usually access the global positioning system (GPS) directly to meet the requirement for time synchronization. Conventional packet-based time synchronization cannot meet the requirements of base stations: time synchronization reaches only sub-second precision with the Network Time Protocol (NTP) and sub-millisecond precision with 1588v1. With the assistance of hardware, 1588v2 provides the sub-microsecond time synchronization precision required by wireless networks.
Operation and maintenance costs of 1588v2 are lower than those of GPS (which must be deployed at each base station). In addition, 1588v2 works independently of GPS, which is of strategic significance.
Concepts of 1588v2
The Precision Time Protocol (PTP), also called 1588, is a standard defined by the Institute of
Electrical and Electronics Engineers (IEEE) for Precision Clock Synchronization Protocol For
Networked Measurement and Control Systems. IEEE 1588v2 is a time synchronization protocol.
IEEE 1588v2 ensures high-precision time synchronization between devices, and is also used in
clock synchronization between devices.
A physical network can be logically divided into multiple clock domains. In each clock domain,
there is synchronized time, with which all devices in the domain are synchronized. The
synchronized time of one clock domain is independent of that of another clock domain.
Each node on a time synchronization network is called a clock. 1588v2 defines the following
types of clocks:
l Ordinary clock
An ordinary clock (OC) has only one 1588v2 clock interface (a clock interface enabled
with 1588v2) through which the local clock is synchronized with an upstream 1588-aware
node or distributes time signals to downstream 1588-aware nodes.
l Boundary clock
A boundary clock (BC) has multiple 1588v2 clock interfaces. One port is synchronized
with an upstream 1588-aware node and the others distribute time signals to downstream
1588-aware nodes.
For example, a router may obtain the standard time from a BITS device through an external non-1588v2 port and distribute the time to downstream nodes through two 1588v2 ports. Because such a router has more than one 1588v2 port, it is a BC.
l Transparent clock
Distinct from BC and OC that need to be synchronized with other clocks, TC does not need
to be synchronized with other clocks. A TC has multiple 1588v2 ports, through which
1588v2 packets are forwarded. In addition, the TC corrects forwarding delays for these
1588v2 packets (for details, see the following sections) and is not synchronized with other
clocks through any port.
TCs are classified into end-to-end (E2E) TCs and peer-to-peer (P2P) TCs.
– End-to-End Transparent Clock (E2ETC): transparently forwards Sync and Announce packets and terminates the other 1588v2 packets. It calculates the delay of the entire end-to-end link.
– Peer-to-Peer Transparent Clock (P2PTC): transparently forwards Sync and Announce packets and terminates the other 1588v2 packets. It calculates the delay of every peer-to-peer segment along the entire link.
In addition to the three basic types of clocks, the NE80E/40E supports the following two
compound types of clocks:
l TCOC: carries the characteristics of both a TC and an OC. A TCOC provides multiple ports
connected to a 1588v2 network. Among those ports, one is OC and the others are TCs. A
TCOC implements 1588v2 frequency synchronization, not time synchronization.
l TCandBC: carries the characteristics of both the TC and BC. A TCandBC provides multiple
ports connected to a 1588v2 network. Among those ports, some are TCs and the others are
BCs. TCs and BCs belong to different clock domains. A TCandBC implements both 1588v2
frequency synchronization and time synchronization. The domain value of BC ports is the
same as the 1588v2 domain value configured in the global view. But the domain value of
each TC port should be configured in its interface view.
In a 1588v2 system, all clocks are organized into a master/slave synchronization hierarchy,
with the grandmaster clock at the top. Clock synchronization is implemented by exchanging
1588v2 packets. The slave clock calculates its offset and delay relative to the master clock
based on the timestamp information carried in the 1588v2 packets and then synchronizes its
local clock with the master clock.
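For the delay request-response case, the standard 1588v2 calculation uses four timestamps: t1 (master sends Sync), t2 (slave receives it), t3 (slave sends Delay_Req), and t4 (master receives it). A minimal Python sketch, assuming a symmetric link (the function name is illustrative):

```python
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Compute the slave's offset from the master and the mean path delay
    from the four 1588v2 timestamps, assuming symmetric link delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay
```

The slave then steps or slews its local clock by the computed offset.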
A 1588v2 packet carries clock and time information. On the network shown
in Figure 6-2, a 1588v2 device reads and writes the timestamp carried in a 1588v2 packet at
the data link layer to calculate the delay of every link segment. Compared with the Network
Time Protocol (NTP), 1588v2 ensures a higher precision.
Figure 6-2 Timestamping of PTP packets between the MAC and PHY layers
1588 ACR
Adaptive Clock Recovery (ACR)/Adaptive Time Recovery (ATR) carries out clock/time
synchronization by exchanging 1588v2 packets. Unlike 1588v2, which achieves frequency
synchronization only when all devices on a network support 1588v2, 1588 ACR is capable of
implementing frequency synchronization on a network with both 1588v2-aware and
1588v2-unaware devices.
Applications of 1588v2
On the network shown in Figure 6-3, an OC encapsulates clock information with high accuracy
provided by the Global Positioning System (GPS) into a 1588v2 packet, and provides clock
information for a bearer network by using the 1588v2 packet. A TC, as a core device,
transparently transmits clock information provided by the OC over the entire bearer network.
After that, edge devices on the bearer network function as BCs and provide the highly accurate
clock information obtained through the 1588v2 packets to wireless access devices, such as a
NodeB or an RNC.
Figure 6-3 Networking of 1588v2 applications (a GPS-synchronized OC, TCs and BCs on the
bearer network, and NodeBs and an RNC at the access side)
1588v2 packets support the following encapsulation modes:
l MAC encapsulation: 802.1p priorities are carried in 1588v2 packets. MAC encapsulation
is classified into two types:
– Unicast encapsulation
– Multicast encapsulation
l UDP encapsulation: Differentiated Services Code Point (DSCP) values are carried in 1588v2
packets. UDP encapsulation is classified into two types:
– Unicast encapsulation
– Multicast encapsulation
The encapsulation mode depends on the link type:
l On a Layer 2 link, the MAC encapsulation mode is used.
l On a Layer 3 link, the UDP encapsulation mode is used.
BMC Algorithm and Static Clock Source Selection Supported by the NE80E/40E
The NE80E/40E supports the best master clock (BMC) algorithm and static clock source
selection.
l BMC
1588v2 devices using the BMC algorithm dynamically select the best master clock on a
network, ensuring the clock accuracy of devices.
l Static clock source selection
A specified clock source is selected as the master clock source by using a configuration
command.
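In essence, the BMC algorithm compares candidate clocks field by field (priority1, then clock class, accuracy, variance, priority2, and finally the clock ID as a tie-breaker), with lower values winning. The following simplified Python sketch ignores topology-based tie-breaking and is not the device's implementation; the field order shown is the standard dataset-comparison order:

```python
def better_master(a: tuple, b: tuple) -> tuple:
    """Each candidate is a tuple
    (priority1, clockClass, clockAccuracy, offsetScaledLogVariance,
     priority2, clockIdentity).
    Python's tuple ordering compares the fields in exactly this order,
    and the lexicographically smaller tuple is the better master."""
    return a if a < b else b
```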
Applicable Environment
As shown in Figure 6-4, when two devices transmit wireless data on the IP bearer network,
low-delay transmission of real-time radio services must be guaranteed. The two devices serve
as OCs to transmit time information through 1588v2 packets, which ensures clock
synchronization between the devices. The OCs can provide a highly accurate time source for
wireless devices through the Building Integrated Timing Supply (BITS) system.
Figure 6-4 OC1 and OC2 connected across the IP bearer network
Pre-configuration Tasks
Before configuring 1588v2 on OC, complete the following tasks:
l Configuring physical parameters for the interfaces so that the physical layer of the interfaces
is Up
l (Optional) Configuring static routes or IGP protocols to make IP routes reachable among
nodes
l Ensuring that the OC has correctly imported the clock and time signals from the BITS
Data Preparation
To configure 1588v2 on OC, you need the following data.
No. Data
4 (Optional) Interval for sending Announce packets and the timeout period for
receiving Announce packets
Context
Do as follows on the OC:
Procedure
Step 1 Run:
system-view
Step 2 Run:
ptp enable
1588v2 is enabled.
Step 3 Run:
ptp device-type oc
NOTE
Clocks that need to be synchronized through 1588v2 packets must belong to the same 1588v2 clock domain.
The clock ID of the clock source that is permitted to participate in local BMC calculation is set.
----End
Context
Do as follows on the OC:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
ptp delay-mechanism { delay | pdelay }
One of the following delay measurement mechanisms is configured for the device:
l Delay mode:
A delay request-response mechanism, in which information about the clock and time is
calculated according to the delay of the entire link between the master clock and slave clock.
l PDelay mode:
A peer delay mechanism, in which information about the clock and time is calculated
according to the delay of each segment of the link between the master clock and slave clock.
NOTE
Different delay measurement mechanisms cannot replace each other. Therefore, delay measurement
mechanisms configured on 1588v2 interfaces on the same link segment must be identical.
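In PDelay mode, each port measures the delay of its directly attached link segment with a Pdelay_Req/Pdelay_Resp exchange. A sketch of the standard per-link calculation (illustrative only; the function name is an assumption):

```python
def peer_link_delay(t1: float, t2: float, t3: float, t4: float) -> float:
    """t1: requester sends Pdelay_Req; t2: peer receives it;
    t3: peer sends Pdelay_Resp; t4: requester receives it.
    The peer's turnaround time (t3 - t2) is removed before halving,
    assuming a symmetric link."""
    return ((t4 - t1) - (t3 - t2)) / 2
```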
Step 4 Run:
ptp enable
The asymmetric correction time for sending 1588v2 packets on the interface is set.
The timestamping mode of the synchronization packets sent by the 1588v2 port is set.
----End
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring time attributes for Announce packets
1. Run:
system-view
The interval for sending Announce packets on an interface is set to the announce-
intervalth power of 2, in 1/1024 seconds.
The default value of announce-interval is 7, which means that the interval for sending
Announce packets on the interface is 128/1024s.
4. (Optional) Run:
ptp announce-receipt-timeout timeout-time
The timeout period for receiving Announce packets on an interface is set to the
timeout-timeth power of 2, in 1/1024 seconds.
The default timeout-time is 9, which means that the timeout period for receiving
Announce packets on the interface is 512/1024s.
l Configuring time attributes for Sync packets
1. Run:
system-view
The interval for sending Sync packets on an interface is set to the sync-intervalth power
of 2, in 1/1024 seconds.
The default sync-interval is 0, which means that the interval for sending Sync packets
on the interface is 1/1024s.
l Configuring time attributes for Delay packets
1. Run:
system-view
The interval for sending Delay_Req packets on an interface is set to the min-delayreq-
intervalth power of 2, in 1/1024 seconds.
The default min-delayreq-interval is 7, which means that the interval for sending
Delay_Req packets on the interface is 128/1024s.
4. Run:
ptp min-pdelayreq-interval min-pdelayreq-interval
The default min-pdelayreq-interval is 7, which means that the interval for sending
PDelay_Req packets on the interface is 128/1024s.
----End
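All of the intervals above follow the same rule: the configured value is an exponent n, and the resulting interval is 2^n in units of 1/1024 seconds. A quick Python check of the defaults quoted above (the function name is illustrative):

```python
def packet_interval_seconds(exponent: int) -> float:
    """Convert a configured 1588v2 interval exponent into seconds:
    the interval is 2**exponent, in units of 1/1024 s."""
    return (2 ** exponent) / 1024

# announce-interval 7          -> 128/1024 s
# sync-interval 0              -> 1/1024 s
# announce-receipt-timeout 9   -> 512/1024 s
```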
Prerequisites
Before configuring encapsulation modes for 1588v2 packets, check the link type for 1588v2
packet transmission:
l The Layer 2 link adopts the MAC encapsulation mode for 1588v2 packets.
l The Layer 3 link adopts the UDP encapsulation mode for 1588v2 packets.
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring the MAC encapsulation mode
1. Run:
system-view
NOTE
If the unicast destination MAC address is not configured, a multicast destination MAC address
is adopted by default.
4. Run:
ptp mac-egress vlan vlan-id [ priority priority ]
The VLAN ID for transmitting MAC-encapsulated 1588v2 packets and the 802.1p
priority of the 1588v2 packet are configured.
l Configuring the UDP encapsulation mode
1. Run:
system-view
The 1588v2 packets to be sent from the interface are encapsulated in UDP
encapsulation mode, and the source and destination IP addresses are configured.
NOTE
5. Run:
ptp udp-egress source-ip source-ip [ dscp dscp ]
The source IP address and the DSCP priority of the UDP-encapsulated 1588v2 packets
sent by the interface are configured.
----End
Prerequisites
All configurations of OC1 and OC2 are complete.
Procedure
l Run the display ptp all [ state | config ] command to display the operation status and
configuration of 1588v2.
l Run the display ptp interface interface-type interface-number command to display
1588v2 information of the interface on the 1588v2 device.
----End
Example
Run the display ptp all command, and you can view the configuration and operation status of
1588v2.
l The 1588v2 configuration includes the following:
– 1588v2 is enabled.
– The 1588v2 domain value is 1.
– The device type is OC.
– The device works in slave-only mode.
l The 1588v2 operation information includes the following:
– The clock ID of the local clock is 001882fffe1b1bf4.
– The clock ID of the time source is 001882fffe77c2cf.
– The clock ID of the parent clock is 001882fffe77c2cf.
– The interface enabled with 1588v2 is GE 1/0/0.
– The delay measurement mechanism on the interface is Delay.
– The timeout period for receiving Announce packets on the interface is 1s.
<HUAWEI> display ptp all
Device config info
------------------------------------------------------------------
PTP state :enabled Domain value :1
Slave only :yes Device type :OC
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 slave delay 10 OC 1
Time Performance Statistics(ns): Slot 1 Card 0 Port 0
------------------------------------------------------------------------
Realtime(T2-T1) :534 Pathdelay :0
Max(T2-T1) :887704804
Min(T2-T1) :512
Applicable Environment
As shown in Figure 6-5, NodeBs need to synchronize with the BITS time source. All routers
on the bearer network support 1588v2, but NodeBs do not. BC2 is connected to the BITS to
synchronize with the BITS clock and advertise clock information to other clocks on the bearer
network. The other backbone nodes on the bearer network are deployed as BCs, which
synchronize with the BITS clock source and advertise clock information to downstream
clocks. In addition, two OCs are deployed at the user side of the bearer network to synchronize
with the upstream BITS clock and advertise clock information to NodeBs in the traditional
mode. This deployment scheme, which combines 1588v2 with the traditional synchronization
mode, synchronizes clocks on both the bearer network and the wireless network.
Figure 6-5 Networking diagram (BITS time source on the bearer network)
Pre-configuration Tasks
Before configuring 1588v2 on BC, complete the following tasks:
l Configuring physical parameters for the interfaces so that the physical layer of the interfaces
is Up
l (Optional) Configuring the static route or enabling IGP to ensure that IP routes between
the nodes are reachable
l Ensuring that BC2 has correctly imported clock and time signals from the BITS
Data Preparation
To configure 1588v2 on BC, you need the following data.
No. Data
4 (Optional) Interval for sending Announce packets and the timeout period for
receiving Announce packets
Context
Do as follows on the BC:
Procedure
Step 1 Run:
system-view
On each interface, configure the delay measurement mechanism, the asymmetric
correction time, the mode in which packets are timestamped, and statically configure the
status of the 1588v2 interface.
Context
Do as follows on the BC:
Procedure
Step 1 Run:
system-view
A delay measurement mechanism is configured for the device, which can be either of the
following:
l Delay mode:
A delay request-response mechanism, in which information about the clock and time is
calculated according to the delay of the entire link between the master clock and slave clock.
l PDelay mode:
A peer delay mechanism, in which information about the time and clock is calculated
according to the delay of each segment of the link between the master clock and slave clock.
NOTE
Different delay measurement mechanisms cannot replace each other. Therefore, delay measurement
mechanisms configured on 1588v2 interfaces on the same link segment must be identical.
Step 4 Run:
ptp enable
The interface of the 1588v2 device is configured to discard the received Announce packets.
NOTE
Announce packets can ensure the 1588v2 clock synchronization between devices. If an interface discards
Announce packets, the device where the interface resides cannot receive clock synchronization information
from other 1588v2 devices. Usually, this command is configured on the interface at the user side.
The asymmetric correction time for sending 1588v2 packets on the interface is set.
Step 7 (Optional) Run:
ptp clock-step { one-step | two-step }
The timestamping mode of the synchronization packets sent by the 1588v2 port is set.
Step 8 (Optional) Run:
ptp port-state { slave | uncalibrated | passive | master | premaster | listening |
faulty | disabled | initializing }
----End
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring time attributes for Announce packets
1. Run:
system-view
The interval for sending Announce packets on an interface is set to the announce-
intervalth power of 2, in 1/1024 seconds.
The default value of announce-interval is 7, which means that the interval for sending
Announce packets on the interface is 128/1024s.
4. (Optional) Run:
ptp announce-receipt-timeout timeout-time
The timeout period for receiving Announce packets on an interface is set to the
timeout-timeth power of 2, in 1/1024 seconds.
The default timeout-time is 9, which means that the timeout period for receiving
Announce packets on the interface is 512/1024s.
l Configuring time attributes for Sync packets
1. Run:
system-view
The interval for sending Sync packets on an interface is set to the sync-intervalth power
of 2, in 1/1024 seconds.
The default sync-interval is 0, which means that the interval for sending Sync packets
on the interface is 1/1024s.
l Configuring time attributes for Delay packets
1. Run:
system-view
The interval for sending Delay_Req packets on an interface is set to the min-delayreq-
intervalth power of 2, in 1/1024 seconds.
The default min-delayreq-interval is 7, which means that the interval for sending
Delay_Req packets on the interface is 128/1024s.
4. Run:
ptp min-pdelayreq-interval min-pdelayreq-interval
The default min-pdelayreq-interval is 7, which means that the interval for sending
PDelay_Req packets on the interface is 128/1024s.
----End
Prerequisites
Before configuring encapsulation modes for 1588v2 packets, check the link type for 1588v2
packet transmission:
l The Layer 2 link adopts the MAC encapsulation mode for 1588v2 packets.
l The Layer 3 link adopts the UDP encapsulation mode for 1588v2 packets.
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring the MAC encapsulation mode
1. Run:
system-view
NOTE
If the unicast destination MAC address is not configured, a multicast destination MAC address
is adopted by default.
4. Run:
ptp mac-egress vlan vlan-id [ priority priority ]
The VLAN ID for transmitting MAC-encapsulated 1588v2 packets and the 802.1p
priority of the 1588v2 packet are configured.
l Configuring the UDP encapsulation mode
1. Run:
system-view
The 1588v2 packets to be sent from the interface are encapsulated in UDP
encapsulation mode, and the source and destination IP addresses are configured.
NOTE
The VLAN ID for sending and receiving 1588v2 packets and the priority of the UDP-
encapsulated 1588v2 packet are configured on the interface.
----End
Prerequisites
All configurations of the BC are complete.
Procedure
l Run the display ptp all command to display the operation status and configuration of
1588v2 on the BC.
l Run the display ptp interface interface-type interface-number command to display
1588v2 information of the interface on the BC.
----End
Example
As shown in Figure 6-5, BC2 is the grandmaster clock on the 1588v2 network. Run the display
ptp all command on BC2, and you can view the operation status and configuration of 1588v2.
l The 1588v2 configuration includes the following:
– 1588v2 is enabled.
– The 1588v2 domain value is 1.
– The device type is BC.
– The device works in non-slave-only mode.
l The 1588v2 operation information includes the following:
– Clock ID of the local clock is 001882fffe77c2cf.
– Interface enabled with 1588v2 are GE 1/0/0 and GE 2/0/0.
– GE 1/0/0 and GE 2/0/0 are in the Master state.
– The delay measurement mechanism on GE 1/0/0 and GE 2/0/0 is Delay.
– The timeout periods for receiving Announce packets on GE 1/0/0 and GE 2/0/0 are both
512/1024s.
<HUAWEI> display ptp all
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 master delay 9 BC 1
GigabitEthernet2/0/0 master delay 9 BC 1
BC1 and BC3 are slave clocks of BC2; meanwhile, they are master clocks of OC1 and OC2
respectively. After configurations are complete, run the display ptp all command. You can view
the configuration and operation status of 1588v2. Take the command output on BC1 as an
example.
l The 1588v2 configuration includes the following:
– 1588v2 is enabled.
– The 1588v2 domain value is 1.
– The device type is BC.
– The device works in non-slave-only mode.
l The 1588v2 operation information includes the following:
– The clock ID of the local clock is 001882fffe1b1bf4.
– The clock ID of the time source is 001882fffe77c2cf.
– The clock ID of the parent clock is 001882fffe77c2cf.
– Interfaces enabled with 1588v2 are GE 1/0/0 and GE 2/0/0.
– The delay measurement mechanism on GE 1/0/0 and GE 2/0/0 is Delay.
– The timeout period for receiving Announce packets on GE 1/0/0 is 1s.
<HUAWEI> display ptp all
Device config info
------------------------------------------------------------------
PTP state :enabled Domain value :1
Slave only :no Device type :BC
Set port state :no Local clock ID :001882fffe1b1bf4
Acl :no Virtual clock ID :no
Acr :no Time lock success :no
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 slave delay 10 bc 1
GigabitEthernet2/0/0 master delay 10 bc 1
Time Performance Statistics(ns): Slot 1 Card 0 Port 0
------------------------------------------------------------------------
Realtime(T2-T1) :534 Pathdelay :0
Max(T2-T1) :887704804
Min(T2-T1) :512
OC1 and OC2 serve as the leaf nodes of the 1588v2 network to synchronize with the clock
signals of BC1 and BC3, and terminate 1588v2 packets. After the configurations, run the display
ptp all command. You can view the configuration and operation status of 1588v2. Take the
command output on OC1 as an example.
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 slave delay 10 OC 1
Time Performance Statistics(ns): Slot 1 Card 0 Port 0
------------------------------------------------------------------------
Realtime(T2-T1) :534 Pathdelay :0
Max(T2-T1) :887704804
Min(T2-T1) :512
Applicable Environment
As shown in Figure 6-6, NodeBs support 1588v2 and function as OCs. 1588v2 is configured
to ensure clock synchronization between devices on the bearer network. Core devices on the
bearer network function as TCs to forward 1588v2 packets and synchronize the clock or time
between the BC and the OCs.
Figure 6-6 Networking diagram (the Master connected to the BITS, with TC1, BC, and TC2
on the bearer network)
Pre-configuration Tasks
Before configuring 1588v2 on TC, complete the following tasks:
l Configuring physical parameters for the interfaces so that the physical layer of the interfaces
is Up
l (Optional) Configuring the static route or enabling IGP to ensure that IP routes between
the nodes are reachable
l Ensuring that Master has correctly imported clock and time signals from the BITS
Data Preparation
To configure 1588v2 on TC, you need the following data.
No. Data
4 (Optional) Interval for sending Announce packets and the timeout period for
receiving Announce packets
Context
Do as follows on the TC:
Procedure
Step 1 Run:
system-view
Step 4 Run:
NOTE
Clocks that need to be synchronized through 1588v2 packets must belong to the same 1588v2 clock domain.
----End
Context
Do as follows on the TC:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
ptp enable
Step 4 Run:
ptp tcoc-clock-id clock-source-id port-num port-num
NOTE
This command takes effect only on the TCOC.
The asymmetric correction time for sending 1588v2 packets on the interface is set.
The timestamping mode of the synchronization packets sent by the 1588v2 port is set.
----End
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring time attributes for Announce packets
1. Run:
system-view
The interval for sending Announce packets on an interface is set to the announce-
intervalth power of 2, in 1/1024 seconds.
The default value of announce-interval is 7, which means that the interval for sending
Announce packets on the interface is 128/1024s.
4. (Optional) Run:
ptp announce-receipt-timeout timeout-time
The timeout period for receiving Announce packets on an interface is set to the
timeout-timeth power of 2, in 1/1024 seconds.
The default timeout-time is 9, which means that the timeout period for receiving
Announce packets on the interface is 512/1024s.
l Configuring time attributes for Sync packets
1. Run:
system-view
The interval for sending Sync packets on an interface is set to the sync-intervalth power
of 2, in 1/1024 seconds.
The default sync-interval is 0, which means that the interval for sending Sync packets
on the interface is 1/1024s.
l Configuring time attributes for Delay packets
1. Run:
system-view
The interval for sending Delay_Req packets on an interface is set to the min-delayreq-
intervalth power of 2, in 1/1024 seconds.
The default min-delayreq-interval is 7, which means that the interval for sending
Delay_Req packets on the interface is 128/1024s.
4. Run:
ptp min-pdelayreq-interval min-pdelayreq-interval
Prerequisites
Before configuring encapsulation modes for 1588v2 packets, check the link type for 1588v2
packet transmission:
l The Layer 2 link adopts the MAC encapsulation mode for 1588v2 packets.
l The Layer 3 link adopts the UDP encapsulation mode for 1588v2 packets.
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring the MAC encapsulation mode
1. Run:
system-view
NOTE
If the unicast destination MAC address is not configured, a multicast destination MAC address
is adopted by default.
4. Run:
ptp mac-egress vlan vlan-id [ priority priority ]
The VLAN ID for transmitting MAC-encapsulated 1588v2 packets and the 802.1p
priority of the 1588v2 packet are configured.
l Configuring the UDP encapsulation mode
1. Run:
system-view
The 1588v2 packets to be sent from the interface are encapsulated in UDP
encapsulation mode, and the source and destination IP addresses are configured.
NOTE
The VLAN ID for sending and receiving 1588v2 packets and the priority of the UDP-
encapsulated 1588v2 packet are configured on the interface.
----End
Prerequisites
All configurations of the TC are complete.
Procedure
l Run the display ptp all [ state | config ] command to display the operation status and
configuration of 1588v2 on the TC.
----End
Example
Run the display ptp all command, and you can view the configuration and operation status of
1588v2 on the TC.
<HUAWEI> display ptp all
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 premaster pdelay 9 TC 1
GigabitEthernet1/0/1 premaster pdelay 9 TC 1
Applicable Environment
As shown in Figure 6-7, all routers and NodeBs support 1588v2. Operator A has NodeBs, OC2,
OC3, and a BITS standard clock source BITS2, but does not have bearer network devices.
Operator B leases its bearer network to Operator A. Devices on the bearer network synchronize
with the BITS standard clock source BITS1 of Operator B. The following network deployment
scheme is adopted to ensure that clock synchronization is implemented independently on the
devices of Operator A and Operator B:
l OC1 and OC2 are respectively connected to BITS1 and BITS2, and advertise clock
synchronization information to downstream clocks through 1588v2 packets.
l The interface on TCandBC1 that is directly connected to OC1 is a BC interface, which
synchronizes the clock in Domain1; the interface of TCandBC1 at the user side is a TC
interface, which exchanges 1588v2 packets with TCandBC2 through an L2VPN, MPLS,
or L3VPN tunnel.
l The interface on TCandBC2 that is directly connected to OC1 is a BC interface, which
synchronizes the clock in Domain1; the interface of TCandBC2 at the user side is a TC
interface, which exchanges 1588v2 packets with TCandBC1 through an L2VPN, MPLS, or
L3VPN tunnel.
l OC3 receives the 1588v2 packets sent from TCandBC1 and synchronizes with the clock
signals from TCandBC1. Then, OC3 advertises clock signals to NodeB in the traditional
mode, such as the Ethernet-based clock synchronization.
l The P node functions as a BC to implement 1588v2 synchronization and transmit messages
between TCandBC1 and TCandBC2.
The entire bearer network functions as one large TC, which transparently transmits BITS2 clock
information to the NodeB.
Figure 6-7 Networking diagram (OC2, OC3, the P node, TCandBC1, and TCandBC2 connected
over a PW, with BITS2 and a NodeB at Operator A's side)
Pre-configuration Tasks
Before configuring 1588v2 on a TCandBC, complete the following tasks:
l Configuring physical parameters for the interfaces so that the physical layer of the interfaces
is Up
l (Optional) Configuring the static route or enabling IGP to ensure that IP routes between
the nodes are reachable
l Ensuring that OC1 and OC2 have correctly imported clock and time signals from the BITS
Data Preparation
To configure 1588v2 on a TCandBC, you need the following data.
No. Data
4 (Optional) Interval for sending Announce packets and the timeout period for
receiving Announce packets
Context
Do as follows on the TCandBC:
Procedure
Step 1 Run:
system-view
Step 2 Run:
ptp enable
Step 3 Run:
ptp device-type tcandbc
The value of the 1588v2 domain to which the BC ports of TCandBC belong is configured.
Step 5 (Optional) Run:
ptp virtual-clock-id clock-id-value
The clock ID of the clock source that is permitted to participate in local BMC calculation is set.
Step 8 (Optional) Run:
ptp set-port-state enable
----End
Context
Do as follows on the device:
Procedure
Step 1 Run:
system-view
NOTE
The 1588v2 clock domain configured in the system view is the domain to which the BC interface belongs,
and you do not need to configure a domain for the BC interface. The domain to which the TC interface
belongs needs to be configured in the interface view.
Step 5 Run:
ptp delay-mechanism { delay | pdelay }
A delay measurement mechanism is configured for the device, which can be either of the
following:
l Delay mode:
A delay request-response mechanism, in which information about the clock and time is
calculated according to the delay of the entire link between the master clock and slave clock.
l PDelay mode:
A peer delay mechanism, in which information about the clock and time is calculated
according to the delay of each segment of the link between the master clock and slave clock.
NOTE
Different delay measurement mechanisms cannot replace each other. Therefore, delay measurement
mechanisms configured on 1588v2 interfaces on the same link segment must be identical.
Step 6 Run:
ptp enable
The interface of the 1588v2 device is configured to discard the received Announce packets.
NOTE
Announce packets can ensure the 1588v2 clock synchronization between devices. If an interface discards
Announce packets, the device where the interface resides cannot receive clock synchronization information
from other 1588v2 clocks. Usually, this command is configured on the interface at the user side.
The asymmetric correction time for sending 1588v2 packets on the interface is set.
The timestamping mode of the synchronization packets sent by the 1588v2 port is set.
----End
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring time attributes for Announce packets
1. Run:
system-view
The interval for sending Announce packets on an interface is set to the announce-
intervalth power of 2, in 1/1024 seconds.
The default value of announce-interval is 7, which means that the interval for sending
Announce packets on the interface is 128/1024s.
4. (Optional) Run:
ptp announce-receipt-timeout timeout-time
The timeout period for receiving Announce packets on an interface is set to the
timeout-timeth power of 2, in 1/1024 seconds.
The default timeout-time is 9, which means that the timeout period for receiving
Announce packets on the interface is 512/1024s.
l Configuring time attributes for Sync packets
1. Run:
system-view
The interval for sending Sync packets on an interface is set to the sync-intervalth power
of 2, in 1/1024 seconds.
The default sync-interval is 0, which means that the interval for sending Sync packets
on the interface is 1/1024s.
l Configuring time attributes for Delay packets
1. Run:
system-view
The interval for sending Delay_Req packets on an interface is set to the min-delayreq-
intervalth power of 2, in 1/1024 seconds.
The default min-delayreq-interval is 7, which means that the interval for sending
Delay_Req packets on the interface is 128/1024s.
4. Run:
ptp min-pdelayreq-interval min-pdelayreq-interval
Prerequisites
Before configuring encapsulation modes for 1588v2 packets, check the link type for 1588v2
packet transmission:
l The Layer 2 link adopts the MAC encapsulation mode for 1588v2 packets.
l The Layer 3 link adopts the UDP encapsulation mode for 1588v2 packets.
Context
Do as follows on the 1588v2 device:
Procedure
l Configuring the MAC encapsulation mode
1. Run:
system-view
NOTE
If the unicast destination MAC address is not configured, a multicast destination MAC address
is adopted by default.
4. Run:
ptp mac-egress vlan vlan-id [ priority priority ]
The VLAN ID for transmitting MAC-encapsulated 1588v2 packets and the 802.1p
priority of the 1588v2 packet are configured.
l Configuring the UDP encapsulation mode
1. Run:
system-view
The 1588v2 packets to be sent from the interface are encapsulated in UDP
encapsulation mode, and the source and destination IP addresses are configured.
– For unicast UDP encapsulation
NOTE
The VLAN ID for sending and receiving 1588v2 packets and the priority of the UDP-
encapsulated 1588v2 packet are configured on the interface.
----End
Prerequisites
All configurations of the TCandBC are complete.
Procedure
l Run the display ptp all [ state | config ] command to display the operation status and
configuration of 1588v2 on the TCandBC.
l Run the display ptp interface interface-type interface-number command to display
1588v2 information of the interface.
----End
Example
Run the display ptp all state command on TCandBC1. You can view the configuration and
operation status of 1588v2. Take the command output on TCandBC1 as an example.
l The 1588v2 configuration includes the following:
– 1588v2 is enabled.
– The device type is TCandBC.
– The 1588v2 domain value is 1.
– The device works in non-slave-only mode.
l The 1588v2 operation information includes the following:
– The clock ID of the local clock is 001882fffe1b1bf4.
– The clock ID of the time source is 001882fffe77c2cf.
– The clock ID of the parent clock is 001882fffe77c2cf.
– Interfaces enabled with 1588v2 are GE 1/0/0 and GE 2/0/0.
– The value of the 1588v2 domain to which the BC interface belongs is 1; the value of
the 1588v2 domain to which the TC interface belongs is 2.
– The BC interface is in the Slave state.
– The delay measurement mechanism on the interface is Delay.
– The timeout periods for receiving Announce packets on the BC and TC interfaces are
both 512/1024s.
<HUAWEI> display ptp all
Device config info
------------------------------------------------------------------
PTP state :enabled Domain value :1
Slave only :no Device type :TCandBC
Set port state :no Local clock ID :001882fffe1b1bf4
Acl :no Virtual clock ID :no
Acr :no Time lock success :no
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 slave delay 9 bc 1
GigabitEthernet2/0/0 premaster delay 9 tc 2
Time Performance Statistics(ns): Slot 1 Card 0 Port 0
------------------------------------------------------------------------
Realtime(T2-T1) :534 Pathdelay :0
Max(T2-T1) :887704804
Min(T2-T1) :512
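The Ann-timeout column in the output above shows 9, while the text gives the timeout as 512/1024s. The implied conversion (an assumption inferred from these two figures, not a documented formula) is 2^value/1024 seconds:

```python
# Assumed mapping between the Ann-timeout exponent shown by
# 'display ptp all' and the timeout in seconds: 2**value / 1024.
# Inferred from the example values above (9 -> 512/1024 s); verify
# against the Command Reference for your software version.

def announce_timeout_seconds(displayed_value: int) -> float:
    return (2 ** displayed_value) / 1024.0

print(announce_timeout_seconds(9))   # 0.5, i.e. 512/1024 s
```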
Applicable Environment
On a 1588v2 network, the grandmaster clock usually imports clock or time signals from an
external BITS time source, such as a GPS, and then advertises these clock or time signals to
downstream clocks through 1588v2 packets to implement clock synchronization of the entire
network. In this case, to ensure clock synchronization between 1588v2 devices, a BITS time
source must be correctly imported.
Pre-configuration Tasks
None.
Data Preparation
To configure a 1588v2 time source, you need the following data.
No. Data
1 Number of the interface from which the clock and time signals of the BITS time
source are imported
Context
Do as follows on the device:
Procedure
Step 1 Run:
system-view
The new MPU that supports 1588v2 is deployed with four ports, that is, CLK/TOD0, CLK/
TOD1, CLK/1PPS, and CLK/Serial. MPUs used on NE40E-X1, NE40E-X2 and NE40E-X3
only contain two RJ45 ports. The usage of these RJ45 ports is the same as the usage of BITS0
and BITS1 which is described as follows. For the figures of interfaces on MPUs of different
models, refer to the section "Panel Instruction" in the chapter "Cabinet" of the HUAWEI
NetEngine80E/40E - Hardware Description.
CLK/TOD0 is called BITS0 and CLK/TOD1 is called BITS1; the SMB-type CLK/1PPS and
CLK/Serial ports are bound together as BITS2. A BITS port can transmit only one type of
signal at a time.
Both the RJ45 port and SMB port must be installed with dedicated clock cables to input and
output clock signals and time signals. For descriptions of clock cables, refer to the chapter "Clock
Cables" in the HUAWEI NetEngine80E/40E - Hardware Description.
The following table shows types of signals that can be transmitted through ports.
On each device, the output of time signals of various types is restricted as follows:
l If only one channel of time signals is output, the signals can be output effectively.
l If two channels of 1PPS+ASCII signals are output simultaneously, both channels of signals
can be output effectively.
l If 1PPS+ASCII signals and DCLS signals are output simultaneously, the one that is
configured later takes effect.
Step 3 Run:
ptp clock-source { bits0 | bits1 | bits2 } { on | off }
BITS signals can be configured to participate or not to participate in the BMC calculation.
NOTE
The BITS signal input port must be the CLK port on the active system control board. If the system control boards
undergo an active/standby switchover, switch the BITS signal input port to the CLK port on the new active
system control board.
Step 5 Run:
clock source { bits0 | bits2 | ptp } priority priority-value
----End
Context
Do as follows on the device:
Procedure
Step 1 Run:
system-view
Step 2 Run:
ptp clock-source { local | bits0 | bits1 | bits2 } time-source time-source-value
NOTE
The attributes of the time source can be configured only on the grandmaster clock. The external time source
to which the router connects should be configured with corresponding parameters. The mapping between
the time-source-value and the external time source is described in the Command Reference.
Step 3 Run:
ptp clock-source { local | bits0 | bits1 | bits2 } clock-accuracy clock-accuracy-value
Step 4 Run:
ptp clock-source { local | bits0 | bits1 | bits2 } clock-class clock-class-value
NOTE
When clock-class-value is smaller than 128, the device cannot be a slave clock.
Step 5 Run:
ptp clock-source { local | bits0 | bits1 | bits2 } priority1 priority1-value
Step 6 Run:
ptp clock-source { local | bits0 | bits1 | bits2 } priority2 priority2-value
----End
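The attributes configured in Steps 2 through 6 (priority1, clock-class, clock-accuracy, priority2) are exactly the fields the BMC algorithm compares, in that order, with lower values winning and the clock ID breaking ties. A rough sketch with illustrative field names; the clock-class rule from the NOTE is modeled as a slave-capable flag:

```python
from dataclasses import dataclass

# Illustrative model of the IEEE 1588 best-master comparison that the
# configured BITS attributes feed into; not the device implementation.

@dataclass(frozen=True)
class ClockDataset:
    priority1: int
    clock_class: int
    clock_accuracy: int
    priority2: int
    clock_id: int

    def slave_capable(self) -> bool:
        # Per the NOTE above: clock-class below 128 cannot become a slave.
        return self.clock_class >= 128

def best_master(candidates):
    # BMC compares the attributes in this fixed order; lower wins.
    return min(candidates, key=lambda c: (c.priority1, c.clock_class,
                                          c.clock_accuracy, c.priority2,
                                          c.clock_id))
```

For example, a BITS-derived dataset with priority1 0 and clock-class 6 defeats a free-running local clock with default values (128, 248), and its low clock-class also marks it as never slave-capable.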
Prerequisites
All configurations of the 1588v2 time source are complete.
Procedure
l Run the display clock source command to check time information about the BITS clock
source that the device traces.
----End
Example
When the NE40E traces a BITS clock source successfully, run the display clock source
command on the device to view the time information obtained from the clock source.
System trace source State: lock mode
into pull-in range
Current system trace source: bits0
Current 2M-1 trace source: Ethernet1/0/0
Current 2M-2 trace source: Ethernet1/0/0
Master board
source Pri(sys/2m-1/2m-2) In-SSM Out-SSM State
--------------------------------------------------------------------------
bits0 3 /---/--- prc ssua normal
bits1 3 /---/--- prc ssua abnormal
Ethernet2/0/0 2 /1 /1 ssub -- normal
Ethernet1/0/0 1 /1 /1 ssua -- normal
Run the display ptp all command, and you can view the 1588v2 configuration and BMC
operation status on the device.
l Accuracy
l Class
l Type of the time source
l Input signals of the clock
<OC1> display ptp all
Context
NOTE
1588 ACR Server cannot be configured on the X1 and X2 models of the NE80E/40E.
Applicable Environment
On the IP RAN shown in Figure 6-8, two PEs are connected by a Layer 3 network deployed
with 1588v2-unaware devices. PE1 is a clock server and PE2 is a client. PE1, attached to an RNC,
is connected to a BITS. 1588 ACR-capable PE2 initiates a request for negotiation and exchanges
Layer 3 unicast packets with PE1 to set up a connection. If the connection is set up, PE2
exchanges 1588v2 packets with PE1 over the connection to implement clock synchronization.
IP/MPLS
Backbone
IP CLK
1588v2 ACR
Pre-configuration Tasks
Before configuring 1588 ACR in single-server mode, complete the following tasks:
l (Optional) Configuring static routes or configuring an IGP to ensure that IP routes between
nodes are reachable
l Ensuring that the clock server has correctly imported clock and time signals from a BITS
Data Preparation
To configure 1588 ACR clock synchronization in single-server mode, you need the following
data.
No. Data
1 IP address of a client
4 (Optional) Name of a VPN instance bound to the interface to which the local IP
address is assigned
Context
ACR, which is an adaptive clock recovery technology, allows a 1588 ACR client to exchange
1588v2 packets with a clock server on a link where a 1588v2-incapable device resides. After
receiving 1588v2 packets, the client uses clock information carried in the packets to restore clock
information.
1588 ACR and 1588v2 (which implements hop-by-hop clock synchronization) are mutually
exclusive. If 1588 ACR is enabled on a 1588v2-capable device, the 1588v2 configurations on
the device no longer take effect. Before enabling 1588 ACR, first disable IEEE 1588v2. After
1588 ACR is enabled, configurations related to IEEE 1588v2 are deleted automatically.
Procedure
Step 1 Run:
system-view
Step 2 Run:
ptp-adaptive enable
Step 3 Run:
ptp-adaptive device-type client
NOTE
The client and clock server, which exchange 1588v2 packets for clock or time synchronization, must be
in one 1588v2 clock domain.
Step 5 Run:
ptp-adaptive local-ip ip-address
An IP address is assigned to the client, which is used to initiate a request for negotiation and
send Layer 3 unicast packets.
The clock server's and client's IP addresses uniquely identify a 1588 ACR connection, which is
set up by exchanging Layer 3 unicast packets between a client and a clock server during
negotiation. Configuring a loopback address as the client's IP address is recommended, helping
the clock server direct packets to the client.
Step 6 Run:
ptp-adaptive { remote-server1-ip | remote-server2-ip } ip-address
Running this command twice specifies master and slave clock servers.
If two clock servers are configured, the client initiates a request for a connection to one clock
server. If the connection fails to be established or the established connection is closed, the client
initiates a request for a connection to the other clock server. If the connection also fails, the client
re-initiates a request for a connection to the first clock server. The procedure repeats until a
connection is created.
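The alternating retry behavior described above can be sketched as follows; try_connect is a hypothetical stand-in for the real unicast negotiation:

```python
# Sketch of the client's server-selection loop: try server 1, fall back
# to server 2, and keep alternating until a connection succeeds.
# try_connect is a placeholder for the 1588 ACR negotiation exchange.

def select_server(servers, try_connect, max_rounds=10):
    """Alternate between the configured servers until one connects."""
    for round_no in range(max_rounds):
        server = servers[round_no % len(servers)]
        if try_connect(server):
            return server
    return None   # no connection within the retry budget
```

With hypothetical server addresses, a server that only answers on the fourth attempt is still found because the client keeps cycling through the list.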
Step 7 Run:
ptp-adaptive acr [ one-way | two-way ] unicast-negotiate enable
1588 ACR unicast negotiation is enabled on the HUAWEI NetEngine80E/40E and the frequency
recovery mode is configured.
----End
Context
ACR, which is an adaptive clock recovery technology, allows a 1588 ACR client to exchange
1588v2 packets with a clock server on a link where a 1588v2-incapable device resides. After
receiving 1588v2 packets, the client uses clock information carried in the packets to restore clock
information.
1588 ACR and 1588v2 (which implements hop-by-hop clock synchronization) are mutually
exclusive. If 1588 ACR is enabled on a 1588v2-capable device, the 1588v2 configurations on
the device no longer take effect.
Procedure
Step 1 Run:
system-view
Step 2 Run:
ptp-adaptive enable
Step 3 Run:
ptp-adaptive device-type server
NOTE
The client and clock server, which exchange 1588v2 packets for clock synchronization, must be in one
1588v2 clock domain.
Step 5 Run:
ptp-adaptive local-ip ip-address
The clock server's and client's IP addresses uniquely identify a 1588 ACR connection, which is
set up by exchanging Layer 3 unicast packets between a client and a clock server during
negotiation. Configuring a loopback address as the server's IP address is recommended, helping
the clock server direct packets to the client.
The VPN instance name carried in 1588v2 packets is specified, which identifies the VPN
instance bound to the server's loopback interface.
Step 7 Run:
ptp-adaptive acr unicast-negotiate enable
----End
Context
Adjustable parameters on a client are as follows:
l Timeout period for receiving Announce packets
l Duration field values in Sync and Announce packets
l DSCP value for 1588 ACR packets
l Interval at which Sync and Announce packets are sent
Procedure
Step 1 Run:
ptp-adaptive dscp priority-value
Set a large DSCP value to ensure that 1588v2 packets reach the destination even if
congestion occurs on the network. This value is adjustable on both the client and the clock server.
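For reference, this is how a DSCP value maps onto the IP header in a generic UDP sender (an illustration using the standard socket API, not the router's internals): DSCP occupies the upper six bits of the IP TOS byte, so DSCP 56 becomes TOS 0xE0.

```python
import socket

# DSCP sits in the top 6 bits of the IP TOS/traffic-class byte.
def tos_for_dscp(dscp: int) -> int:
    return dscp << 2

# Mark outgoing UDP datagrams with DSCP 56 (TOS 0xE0).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_for_dscp(56))
sock.close()
```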
Step 2 Run:
ptp-adaptive { announce-duration | sync-duration } duration-value
The duration field value is set for each type of 1588 ACR packet.
If a set duration time expires, the client re-initiates a request for a connection to a clock server.
The default value is recommended. By default, the duration value in all 1588v2 packets is 300,
in seconds.
Step 3 Run:
ptp-adaptive request sync-interval sync-interval
The interval at which an ACR clock server sends Sync packets is set.
By default, the interval is 8/1024 seconds.
Step 4 Run:
ptp-adaptive request announce-interval announce-interval
The interval at which an ACR clock server sends Announce packets is set.
By default, the interval at which Announce packets are sent is 2 seconds.
Step 5 Run:
ptp-adaptive announce-receipt-timeout announce-receipt-timeout
The timeout period for receiving Announce packets on the router is set.
By default, the timeout period for receiving Announce packets is 8 seconds.
NOTE
When unicast negotiation parameters are being configured on the client, the timeout period within which
the client receives an Announce packet cannot be shorter than the interval at which the server sends an
Announce packet. Otherwise, the status of the client becomes master and the client cannot synchronize
with the server. It is recommended to set the timeout period within which the client receives an Announce
packet to be four times the interval at which the server sends an Announce packet.
----End
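The NOTE's constraint and its recommendation can be expressed as a small check (illustrative only):

```python
# Encode the NOTE above: the client's Announce receipt timeout must not
# be shorter than the server's Announce send interval; four times the
# interval is the recommended setting.

def validate_announce_timing(timeout_s: float, interval_s: float) -> bool:
    if timeout_s < interval_s:
        raise ValueError("client would flap to master: timeout < interval")
    # True when the 4x recommendation is met, False when merely legal.
    return timeout_s >= 4 * interval_s

# The defaults quoted above (timeout 8 s, interval 2 s) satisfy it:
print(validate_announce_timing(8, 2))   # True
```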
Procedure
Step 1 Run the display ptp-adaptive all command to check the status of a connection that is set up by
exchanging Layer 3 unicast packets during 1588 ACR negotiation.
----End
Example
After the configurations are successful, run the display ptp-adaptive all command, and you can
view the status of a connection that is set up by exchanging Layer 3 unicast packets during 1588
ACR negotiation.
The command output depends on the role that a device plays:
If the HUAWEI NetEngine80E/40E is configured as a client, the command output shows server
information:
l Server IP address
l Negotiation status
If the HUAWEI NetEngine80E/40E is configured as a server, the command output shows client
information:
l Synchronous client ID
l Client IP address
# Display 1588 ACR configurations on the current client.
<HUAWEI> display ptp-adaptive all
Device config info
---------------------------------------------------------------------------
Ptp adaptive state : Enable Device type : client
Sync mode : Frequency Current state : slave
Packet dscp : 56 Domain value : 0
Announce interval : 11 Announce duration : 300s
Sync interval : 10 Sync duration : 300s
Announce receipt timeout: 8s Acr mode : Two-way
Local ip : 2.2.2.2
Ptp port name : GigabitEthernet1/0/0
Client info
Client ID Client Ip
---------------------------------------------------------------------------
1 0 2.2.2.2
Context
CAUTION
Statistics cannot be restored after being cleared. So, confirm the action before you run the
command.
After confirming that 1588v2 statistics need to be cleared, run the following command in the
user view.
Procedure
Step 1 Run:
reset ptp statistics { all | interface interface-type interface-number }
The counter counting the number of sent and received 1588v2 packets on the interface is reset,
clearing the statistics on 1588v2 packets.
----End
Context
In routine maintenance, you can run the following command in any view to view the operation
status of 1588v2.
Procedure
l Run:
display ptp { all [ config | state ] | interface interface-type interface-
number }
Example
# Display the status and statistics of all the modules related to 1588v2 on the current device.
l The slave clock
<HUAWEI> display ptp all
Device config info
------------------------------------------------------------------
PTP state :enabled Domain value :1
Slave only :no Device type :BC
Set port state :no Local clock ID :000a0bfffe0c0d42
Acl :no Virtual clock ID :no
Acr :no Time lock success :no
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 slave delay 10 BC 1
Time Performance Statistics(ns): Slot 1 Card 0 Port 0
------------------------------------------------------------------------
Realtime(T2-T1) :534 Pathdelay :0
Max(T2-T1) :887704804
Min(T2-T1) :512
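The Realtime(T2-T1) and Pathdelay counters above come from the Delay request-response exchange. With t1/t2 the Sync send and receive times and t3/t4 the Delay_Req send and receive times, and assuming a symmetric path, the standard arithmetic is offset = ((t2-t1)-(t4-t3))/2 and delay = ((t2-t1)+(t4-t3))/2. A minimal sketch:

```python
# Standard IEEE 1588 delay request-response arithmetic, assuming the
# forward and reverse path delays are equal.
#   t1: master sends Sync      t2: slave receives Sync
#   t3: slave sends Delay_Req  t4: master receives Delay_Req

def delay_req_resp(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# A slave 534 ns fast over a 500 ns path sees T2-T1 = 1034 ns:
print(delay_req_resp(0, 1034, 2000, 1966))   # (534.0, 500.0)
```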
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
6.9.1 Example for Configuring the BITS as the 1588v2 Clock Source
1588v2 is used to transmit clock signals within a network. If the clock signals within a network
need to be synchronized with those of an external clock source, the external standard clock source
is required.
Configuration Roadmap
As shown in Figure 6-9, the BITS is connected to an external GPS to advertise the input clock
or time signals to the device named Master, which serves as the master clock of the bearer
network and advertises the received clock or time signals to devices on the bearer network.
Figure 6-9 Networking diagram of configuring the BITS as the 1588v2 clock source
BITS
Master
GE1/0/0
GE1/0/0 GE1/0/0
GE1/0/1 GE1/0/1
NodeB CE1 CE2 NodeB
The new MPU that supports 1588v2 is deployed with four ports, that is, CLK/TOD0, CLK/
TOD1, CLK/1PPS, and CLK/Serial. MPUs used on NE40E-X1, NE40E-X2 and NE40E-
X3 only contain two RJ45 ports. The usage of these RJ45 ports is the same as the usage of
BITS0 and BITS1 which is described as follows. For the figures of interfaces on MPUs of
different models, refer to the section "Panel Instruction" in the chapter "Cabinet" of the
HUAWEI NetEngine80E/40E - Hardware Description.
CLK/TOD0 is called as BITS0 and CLK/TOD1 is called as BITS1; CLK/1PPS and CLK/
Serial of SMB type are bound together to be bits2. A BITS port can transmit one type of
signal at a time.
Both the RJ45 port and SMB port must be installed with dedicated clock cables to input
and output clock signals and time signals. For descriptions of clock cables, refer to the
chapter "Clock Cables" in the HUAWEI NetEngine80E/40E - Hardware Description.
The following table shows types of signals that can be transmitted through ports.
Data Preparation
To complete the configuration, you need the following data:
l BITS signal types (in this example, 2 MHz clock signals are input through BITS0, and
1PPS and ASCII time signals of the RS422 level are input through BITS1)
l Attributes of the BITS time source, including time source value, clock accuracy, clock
stratum, priority 1, and priority 2
l Priority of the static clock source
Procedure
Step 1 Use clock cables to connect BITS0 to the clock signal source and BITS1 to the time
signal source.
Step 2 Configure attributes for the input signals of the BITS clock.
<Master> system-view
NOTE
BITS is connected to an external time source, namely, GPS, and its time-source is 2.
NOTE
If clock-class is set to a value smaller than 128, the clock cannot be a slave clock.
Step 4 Enable basic 1588v2 functions on Master and configure the device type as OC.
<Master> system-view
[Master] ptp enable
[Master] ptp domain 1
[Master] ptp device-type oc
[Master] interface gigabitethernet 1/0/0
[Master-GigabitEthernet1/0/0] ptp delay-mechanism pdelay
[Master-GigabitEthernet1/0/0] ptp enable
[Master-GigabitEthernet1/0/0] quit
Run the display clock source command in any view on Master. You can see that BITS0 is in
the normal state, which means that Master has successfully received frequency signals from the
BITS0 port.
<Master> display clock source
System trace source State: lock mode
into pull-in range
Current system trace source: GigabitEthernet1/0/0
Current 2M-1 trace source: system PLL
Current 2M-2 trace source: system PLL
Master board
source Pri(sys/2m-1/2m-2) In-SSM Out-SSM State
--------------------------------------------------------------------------
bits0 5 /---/--- unk ssua normal
bits1 ---/---/--- prc ssua initial
bits2 ---/---/--- prc ssua initial
GigabitEthernet1/0/0 3 /---/--- ssua dnu normal
GigabitEthernet3/1/0 3 /---/--- unk ssua normal
GigabitEthernet3/1/1 8 /---/--- unk ssua normal
Run the display clock config command in any view on Master. You can see that Master has
stepped into lock mode, which means that Master has locked its frequency onto the signal from
the BITS0 port.
<Master> display clock config
Current source: 11
Workmode: manual
SSM control: off
Primary source: 11
Output SSM Level: unknown
Current source step into pull-in range
After the configurations, run the display ptp all state command on Master. You can view the
current operation status of 1588v2.
<Master> display ptp all
Device config info
------------------------------------------------------------------
PTP state :enabled Domain value :1
Slave only :no Device type :OC
Static BMC :no Local clock ID :101122fffe225555
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 master pdelay 9 OC 1
----End
Configuration Files
l Configuration file of Master
#
sysname Master
#
ptp enable
ptp device-type oc
clock bits-type bits0 2mhz
clock manual source bits0
clock bits-type bits1 1pps input
ptp clock-source bits1 on
Networking Requirements
As shown in Figure 6-10, a BITS server can generate 1588v2 packets carrying frequency
information and send them to NodeBs over a QoS-guaranteed bearer network. The devices of
the bearer network do not need to support 1588v2, which reduces operators' investment.
In this application scenario, the bearer network devices only need to provide end-to-end Layer
3 channels with the jitter being within 20 ms to transparently transmit 1588v2 packets between
the IP clock server and NodeBs.
1588v2 1588v2
packets packets
GE2/0/0 GE1/0/0 POS6/0/0 E1/0/0 POS6/0/0
Configuration Roadmap
No configuration is needed because the bearer network devices do not need to support 1588v2.
Networking Requirements
A mobile operator runs a mobile bearer network as shown in Figure 6-11. The network is
configured with both POS interfaces and Ethernet interfaces. To meet the frequency
synchronization requirements of wireless bearer services, each device and NodeB on the bearer
network must be connected to a BITS server. The installation and maintenance are thus costly.
The operator then purchases 1588v2-aware devices and then upgrades the clock synchronization
network. After these, only one BITS server needs to be deployed on the bearer network, which
also meets the frequency synchronization requirements of the wireless bearer services.
The clock synchronization network can be deployed as follows based on different types of
interfaces. BITS clock signals are injected to Router B and then transmitted to NodeB 2 on the
right through the WAN clock, 1588v2 clock, and WAN clock in sequence and to NodeB 1 on
the left through the synchronous Ethernet clock and 1588v2 clock. 1588v2 packets are
encapsulated through UDP and then transmitted to destination nodes.
Figure 6-11 Networking diagram of synchronizing frequencies through the integration of the
1588v2 clock, synchronous Ethernet clock, and WAN clock
BITS
Ethernet synchronization
WAN
1588v2
NodeB 1 11.0.0.2/24
NodeB 2 14.0.0.2/24
NodeB1 - 0000-1111-b1b1
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable a link layer protocol and configure an IP address on each interface. The configuration
details are not mentioned here.
Step 2 Enable OSPF to ensure the interworking between devices. The configuration details are not
mentioned here.
Step 3 Import signals of the external BITS clock source to Router B.
[RouterB] clock bits-type bits0 2mhz
[RouterB] clock source bits0 ssm prc
[RouterB] clock source bits0 priority 1
Step 4 Synchronize clock signals of Router A with those of Router B through the synchronous Ethernet
clock.
# Enable Router B with Ethernet clock synchronization.
[RouterB] clock ethernet-synchronization enable
[RouterB] interface GigabitEthernet 1/0/0
[RouterB-GigabitEthernet1/0/0] clock synchronization enable
[RouterB-GigabitEthernet1/0/0] clock priority 2
# Enable Ethernet clock synchronization on Router A.
[RouterA] clock ethernet-synchronization enable
[RouterA] interface GigabitEthernet 1/0/0
[RouterA-GigabitEthernet1/0/0] clock synchronization enable
[RouterA-GigabitEthernet1/0/0] clock priority 2
Step 5 Synchronize clock signals of Router C with those of Router B through the WAN clock.
# Configure POS 6/0/0 of Router B as the master interface.
[RouterB] interface POS 6/0/0
[RouterB-POS 6/0/0] clock master
# Configure POS 6/0/0 of Router C as the slave interface.
[RouterC] interface POS 6/0/0
[RouterC-POS 6/0/0] clock slave
[RouterC-POS 6/0/0] quit
Step 6 Configure Router A as the BC that encapsulates 1588v2 packets through UDP and sends clock
signals to NodeB 1.
[RouterA] ptp enable
[RouterA] ptp device-type bc
[RouterA] ptp clock-source local priority1 0
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] ptp delay-mechanism delay
[RouterA-GigabitEthernet2/0/0] ptp enable
[RouterA-GigabitEthernet2/0/0] ptp udp-egress source-ip 11.0.0.1 destination-ip
11.0.0.2
[RouterA-GigabitEthernet2/0/0] ptp udp-egress destination-mac 0000-1111-b1b1
[RouterA-GigabitEthernet2/0/0] quit
# Enable NodeB 1 to receive 1588v2 packets from Router A. The configuration details are not
mentioned here.
Step 7 Configure Router C as the BC that encapsulates 1588v2 packets through UDP and sends clock
signals to Router D.
<RouterC> system-view
[RouterC] ptp enable
[RouterC] ptp device-type bc
[RouterC] ptp clock-source local priority1 0
[RouterC] interface ethernet 1/0/0
[RouterC-Ethernet1/0/0] ptp delay-mechanism delay
[RouterC-Ethernet1/0/0] ptp enable
[RouterC-Ethernet1/0/0] ptp udp-egress source-ip 13.0.0.1 destination-ip 13.0.0.2
[RouterC-Ethernet1/0/0] ptp udp-egress destination-mac 0000-1111-dddd
[RouterC-Ethernet1/0/0] quit
Step 8 Configure Router D as the OC and synchronize clock signals of Router D with those of
Router C through 1588v2 packets.
Step 9 Configure Router D and send clock signals to NodeB 2 through the WAN clock.
# Configure POS 6/0/0 of Router D as the master interface.
[RouterD] interface POS 6/0/0
[RouterD-POS 6/0/0] clock master
# Configure NodeB 2 as the slave interface. The configuration details are not mentioned here.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
clock ethernet-synchronization enable
interface GigabitEthernet 1/0/0
clock synchronization enable
clock priority 2
ptp clock-source local priority1 0
ptp enable
ptp device-type bc
interface gigabitethernet 2/0/0
ptp delay-mechanism delay
ptp enable
ptp udp-egress source-ip 11.0.0.1 destination-ip 11.0.0.2
ptp udp-egress destination-mac 0000-1111-b1b1
#
ptp enable
ptp udp-egress source-ip 13.0.0.1 destination-ip 13.0.0.2
ptp udp-egress destination-mac 0000-1111-dddd
#
Networking Requirements
As shown in Figure 6-12, a bearer network transmits wireless services between NodeBs and all
its nodes support 1588v2. The core nodes, namely PE1 and PE2, are connected through a POS
link and obtain clock signals from BITS servers. NodeB 2 does not support 1588v2 but supports
frequency synchronization through the synchronous Ethernet clock. NodeB 1 and NodeB 3
support 1588v2. Frequency synchronization can be achieved between wireless NodeBs and the
bearer network devices, and time synchronization can be achieved between 1588v2-aware
NodeBs and the bearer network devices.
All devices of the bearer network support 1588v2 so that they can be configured as BCs to
transmit clock information. In addition, CE2 can send E1 signals carrying frequency information
to non-1588v2-aware NodeB 2 for restoring frequency synchronization.
Figure 6-12 Networking diagram of synchronizing all clocks of an entire network through
unicast UDP-encapsulated 1588v2 packets
E1
GE1/0/1 GE1/0/1 POS6/0/0 GE1/0/1
NodeB1 - 2222-3333-1111
NodeB3 - 2222-3333-2222
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable a routing protocol, that is, OSPF, to ensure the interworking between devices.
Data Preparation
To complete the configuration, you need the following data:
l 1588 link delay measurement mechanism: pdelay
l ID of the 1588v2 domain to which devices belong
l Interval for sending Announce messages and timeout period of receiving Announce
messages
l Interval for sending Sync messages
l Interval for sending PDelay messages
Procedure
Step 1 Configure the IP address of each interface and enable OSPF to ensure the interworking between
devices. The configuration details are not mentioned here.
Step 2 Configure PE1 and PE2 to import BITS clock signals through their clock interfaces.
For the detailed configurations, see the section Example for Configuring the BITS as the PTP
Clock Source.
Step 3 Configure PE1, PE2, CE1, and CE2 as BCs.
# Configure PE1.
[PE1] ptp enable
[PE1] ptp device-type bc
[PE1] ptp domain 1
[PE1] ptp clock-source local priority1 128
[PE1] interface gigabitethernet 1/0/1
[PE1-GigabitEthernet1/0/1] ptp delay-mechanism pdelay
[PE1-GigabitEthernet1/0/1] ptp udp-egress source-ip 11.0.0.1 destination-ip
11.0.0.2
[PE1-GigabitEthernet1/0/1] ptp udp-egress destination-mac 1111-2222-1111
[PE1-GigabitEthernet1/0/1] ptp enable
[PE1-GigabitEthernet1/0/1] quit
# Configure PE2.
[PE2] ptp enable
[PE2] ptp device-type bc
[PE2] ptp domain 1
[PE2] ptp clock-source local priority1 128
[PE2] interface gigabitethernet 1/0/1
[PE2-GigabitEthernet1/0/1] ptp delay-mechanism pdelay
[PE2-GigabitEthernet1/0/1] ptp udp-egress source-ip 12.0.0.1 destination-ip
12.0.0.2
[PE2-GigabitEthernet1/0/1] ptp udp-egress destination-mac 1111-2222-5555
[PE2-GigabitEthernet1/0/1] ptp enable
[PE2-GigabitEthernet1/0/1] quit
# Configure CE1.
[CE1] ptp enable
[CE1] ptp device-type bc
[CE1] ptp domain 1
[CE1] ptp clock-source local priority1 128
[CE1] clock manual source ptp
# Configure the NodeBs to receive 1588v2 packets from CE1. The configuration details are
not mentioned here.
# Configure CE2.
[CE2] ptp enable
[CE2] ptp device-type oc
[CE2] ptp slaveonly
[CE2] ptp domain 1
[CE2] ptp clock-source local priority1 128
[CE2] clock manual source ptp
[CE2] interface gigabitethernet 1/0/0
[CE2-GigabitEthernet1/0/0] ptp delay-mechanism pdelay
[CE2-GigabitEthernet1/0/0] ptp udp-egress source-ip 12.0.0.2 destination-ip
12.0.0.1
[CE2-GigabitEthernet1/0/0] ptp udp-egress destination-mac 0000-1111-2222
[CE2-GigabitEthernet1/0/0] ptp enable
[CE2-GigabitEthernet1/0/0] quit
[CE2] interface gigabitethernet 1/0/1
[CE2-GigabitEthernet1/0/1] ptp delay-mechanism pdelay
[CE2-GigabitEthernet1/0/1] ptp udp-egress source-ip 15.0.0.1 destination-ip
15.0.0.2
[CE2-GigabitEthernet1/0/1] ptp udp-egress destination-mac 2222-3333-2222
[CE2-GigabitEthernet1/0/1] ptp enable
[CE2-GigabitEthernet1/0/1] quit
# Configure CE1.
[CE1] interface gigabitethernet 1/0/0
[CE1-GigabitEthernet1/0/0] ptp announce-receipt-timeout 10
[CE1-GigabitEthernet1/0/0] ptp min-pdelayreq-interval 10
[CE1-GigabitEthernet1/0/0] quit
[CE1] interface gigabitethernet 1/0/1
[CE1-GigabitEthernet1/0/1] ptp announce-drop enable
# Configure CE2.
[CE2] interface gigabitethernet 1/0/0
[CE2-GigabitEthernet1/0/0] ptp announce-receipt-timeout 10
[CE2-GigabitEthernet1/0/0] ptp min-pdelayreq-interval 10
[CE2-GigabitEthernet1/0/0] quit
[CE2] interface gigabitethernet 1/0/1
[CE2-GigabitEthernet1/0/1] ptp announce-drop enable
# Configure PE1.
[PE1] interface gigabitethernet 1/0/0
[PE1-GigabitEthernet1/0/0] ptp announce-receipt-timeout 10
[PE1-GigabitEthernet1/0/0] quit
[PE1] interface gigabitethernet 1/0/1
# Configure PE2.
[PE2] interface gigabitethernet 1/0/0
[PE2-GigabitEthernet1/0/0] ptp announce-receipt-timeout 10
[PE2-GigabitEthernet1/0/0] quit
[PE2] interface gigabitethernet 1/0/1
[PE2-GigabitEthernet1/0/1] ptp announce-interval 8
[PE2-GigabitEthernet1/0/1] ptp min-pdelayreq-interval 10
[PE2-GigabitEthernet1/0/1] quit
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 slave pdelay 10 BC 1
Time Performance Statistics(ns): Slot 1 Card 0 Port 0
------------------------------------------------------------------------
Realtime(T2-T1) :534 Pathdelay :0
Max(T2-T1) :887704804
Min(T2-T1) :512
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
ptp enable
ptp device-type bc
ptp domain 1
ptp clock-source local priority1 128
clock manual source ptp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.0.0.2 255.255.255.0
ptp delay-mechanism pdelay
ptp announce-receipt-timeout 10
ptp min-pdelayreq-interval 10
ptp udp-egress source-ip 11.0.0.2 destination-ip 11.0.0.1
ptp udp-egress destination-mac 0000-1111-1111
ptp enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 11.0.0.2 255.255.255.0
ptp delay-mechanism pdelay
ptp announce-receipt-timeout 10
ptp min-pdelayreq-interval 10
ptp announce-drop enable
ptp udp-egress source-ip 13.0.0.1 destination-ip 13.0.0.2
ptp udp-egress destination-mac 2222-3333-1111
ptp enable
#
ospf 1
area 0.0.0.0
network 11.0.0.0 0.0.0.255
network 13.0.0.0 0.0.0.255
#
Networking Requirements
As shown in Figure 6-13, PE1 and PE2 are core devices on a bearer network and CE1 and CE2
are edge devices on a wireless access network. PE1 and PE2, functioning as BCs that use external BITS clock sources, advertise clock and time information to CE1 and CE2. CE1 and CE2, also functioning as BCs, synchronize clock signals with the BITS through 1588v2 and send 1588v2 packets carrying the frequency and time information to their attached NodeBs. In addition, CE2 can send E1 signals carrying frequency information to the non-1588v2-aware NodeB 2 to restore frequency synchronization.
Configuration Roadmap
The configuration roadmap is as follows:
NOTE
1588v2 packets are encapsulated in the default multicast MAC mode.
Data Preparation
To complete the configuration, you need the following data:
l ID of the 1588v2 domain to which devices belong
l Interval for sending Announce messages and timeout period of receiving Announce
messages
l Interval for sending Sync messages
l Interval for sending Delay messages
l MAC address of each NodeB
Procedure
Step 1 Configure PE1 and PE2 so that they can import BITS clock signals through their clock interfaces.
For the detailed configurations, see the section Example for Configuring the BITS as the PTP
Clock Source.
Step 2 Configure PE1 and PE2 as BCs.
# Configure PE1.
[PE1] ptp enable
[PE1] ptp device-type bc
[PE1] ptp domain 1
[PE1] ptp clock-source local priority1 128
[PE1] interface gigabitethernet 1/0/0
[PE1-GigabitEthernet1/0/0] ptp enable
[PE1-GigabitEthernet1/0/0] ptp delay-mechanism delay
[PE1-GigabitEthernet1/0/0] quit
# Configure PE2.
<PE2> system-view
[PE2] ptp enable
[PE2] ptp device-type bc
[PE2] ptp domain 1
[PE2] ptp clock-source local priority1 128
[PE2] interface gigabitethernet 1/0/0
[PE2-GigabitEthernet1/0/0] ptp enable
[PE2-GigabitEthernet1/0/0] ptp delay-mechanism delay
[PE2-GigabitEthernet1/0/0] quit
Step 3 Configure CE1 and CE2 as BCs so that they can synchronize the clock and time information
with that of PE1 and PE2 and advertise the information to NodeB 1 and NodeB 3.
# Configure CE1.
[CE1] ptp enable
[CE1] ptp device-type bc
[CE1] ptp domain 1
[CE1] ptp clock-source local priority1 128
[CE1] clock manual source ptp
[CE1] interface gigabitethernet 1/0/0
[CE1-GigabitEthernet1/0/0] ptp delay-mechanism delay
[CE1-GigabitEthernet1/0/0] ptp enable
[CE1-GigabitEthernet1/0/0] quit
[CE1] interface gigabitethernet 1/0/1
[CE1-GigabitEthernet1/0/1] ptp delay-mechanism delay
[CE1-GigabitEthernet1/0/1] ptp enable
[CE1-GigabitEthernet1/0/1] ptp announce-drop enable
[CE1-GigabitEthernet1/0/1] quit
# Configure CE2.
Port info
Name State Delay-mech Ann-timeout Type Domain
------------------------------------------------------------------------
GigabitEthernet1/0/0 slave delay 10 BC 1
Time Performance Statistics(ns): Slot 1 Card 0 Port 0
------------------------------------------------------------------------
Realtime(T2-T1) :534 Pathdelay :0
Max(T2-T1) :887704804
Min(T2-T1) :512
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
Networking Requirements
On the IP RAN shown in Figure 6-14, Router A functions as a clock server and is connected to
an IP CLK. Router C functions as a client, and sends a 1588 ACR Layer 3 unicast negotiation
request to the server to achieve clock synchronization.
Figure 6-14 Networking diagram of configuring 1588 ACR clock synchronization in a single-
server scenario
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as a server.
2. Configure Router C as a client.
3. Adjust Layer 3 unicast negotiation parameters on the server and the client.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure Router A as a server.
<RouterA> system-view
[RouterA] interface loopback 0
[RouterA-Loopback0] ip address 1.1.1.1 32
[RouterA-Loopback0] quit
[RouterA] ptp-adaptive enable
[RouterA] ptp-adaptive device-type server
[RouterA] ptp-adaptive local-ip 1.1.1.1
Step 3 Adjust Layer 3 unicast negotiation parameters on the client and the server.
# Configure the client.
[RouterC] ptp-adaptive request sync-interval 10
[RouterC] ptp-adaptive request announce-interval 12
Client info
Client ID Client Ip
---------------------------------------------------------------------------
1 0 2.2.2.2
----End
Configuration Files
Configuration file of Router A
#
sysname RouterA
#
ptp-adaptive enable
ptp-adaptive device-type server
ptp-adaptive local-ip 1.1.1.1
ptp-adaptive acr unicast-negotiate enable
#
interface Loopback0
ip address 1.1.1.1 255.255.255.255
#
return
Networking Requirements
On the IP RAN shown in Figure 6-15, Router A and Router B function as clock servers that
work in the master/slave mode, and are connected to an IP CLK. As a client, Router C first sends
a 1588 ACR Layer 3 unicast negotiation request to Router A that functions as the master clock
server to obtain clock synchronization information. If the link between Router C and Router A
goes down, Router C sends a Layer 3 unicast negotiation request to Router B to ensure that its
clock remains synchronized with that of the IP CLK.
Figure 6-15 Networking diagram of configuring 1588 ACR clock synchronization in a dual-
server scenario
(Figure 6-15 shows RouterA as the primary server and RouterB as the standby server, each connected to an IP CLK; RouterC, the 1588 client serving a NodeB, reaches the servers across the IP/MPLS backbone through PE1, PE2, and PE3 toward the RNC.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure Router A as server 1.
<RouterA> system-view
[RouterA] interface loopback 0
[RouterA-Loopback0] ip address 1.1.1.1 32
[RouterA-Loopback0] quit
[RouterA] ptp-adaptive enable
[RouterA] ptp-adaptive device-type server
[RouterA] ptp-adaptive local-ip 1.1.1.1
Step 4 Adjust Layer 3 unicast negotiation parameters on the client and the servers.
# Configure the client.
[RouterC] ptp-adaptive request sync-interval 10
[RouterC] ptp-adaptive request announce-interval 12
# Check the 1588 ACR configuration on the server. Take the display on Router A as an example.
<RouterA> display ptp-adaptive all
Device config info
---------------------------------------------------------------------------
Ptp adaptive state : Enable Device type : server
Sync mode : Frequency Current state : master
Packet dscp : 56 Domain value : 0
Local ip : 1.1.1.1 Server board : 1
Acr mode : One-way
VPN : None
Client info
Client ID Client Ip
---------------------------------------------------------------------------
1 0 3.3.3.3
----End
Configuration Files
Configuration file of Router A
#
sysname RouterA
#
ptp-adaptive enable
ptp-adaptive device-type server
ptp-adaptive local-ip 1.1.1.1
ptp-adaptive acr unicast-negotiate enable
#
interface Loopback0
ip address 1.1.1.1 255.255.255.255
#
return
7 NQA Configuration
This chapter describes how to configure the Network Quality Analysis (NQA) to monitor the
network operating status and collect network operation indexes in real time.
7.24 Configuring the NQA MTrace Test to Check the Multicast Path from the Multicast Source
to the Querier
This section describes how to configure an MTrace test to check the multicast path from the
multicast source to the querier.
7.25 Configuring the NQA MTrace Test to Check the RPF Path from the Multicast Source to
the Destination Host
This section describes how to configure an MTrace test to check the RPF path from the multicast
source to the destination host.
7.26 Configuring the NQA MTrace Test to Check the Multicast Path from the Multicast Source
to the Destination Host
This section describes how to configure an MTrace test to check the multicast path from the
multicast source to the destination host.
7.27 Configuring the PWE3 Ping Test to Check the One-Hop PW
This section describes how to configure a PWE3 ping test to check the connectivity of a single-
hop pseudo-wire (PW).
7.28 Configuring the PWE3 Ping Test to Check the Multi-Hop PW
This section describes how to configure a PWE3 ping test to check the connectivity of a multi-
hop PW.
7.29 Configuring the PWE3 Trace Test to Check the One-Hop PW
This section describes how to configure a PWE3 trace test to check the communications between
devices along a PW.
7.30 Configuring the PWE3 Trace Test to Check the Multi-Hop PW
This section describes how to configure a PWE3 trace test to check the communications between
devices on a PW.
7.31 Configuring the VC Trace Test to Check the Inter-AS Multi-Hop Kompella VLL
This section describes how to configure a Virtual Circuit (VC) trace test for the inter-AS multi-
hop Kompella Virtual Leased Line (VLL) to check the connectivity of the PW.
7.32 Configuring Universal NQA Test Parameters
This section describes how to set and use universal parameters for NQA test instances.
7.33 Configuring Round-Trip Delay Thresholds
This section describes how to set a round-trip delay transmission threshold in an NQA test
instance.
7.34 Configuring Uni-directional Transmission Delay Thresholds
This section describes how to set a one-way transmission delay threshold in an NQA test
instance. After a one-way transmission delay threshold is set in an NQA test instance, the test
result will contain the statistics on the test packets that exceed the set threshold. This provides
the basis for the network manager to analyze the operating status of the specified service on the
network.
7.35 Configuring the Trap Function
This section describes how to configure the trap function in an NQA test instance. After the trap
function is configured, a trap message is sent to the NMS in case of transmission success or
transmission failure.
7.36 Configuring Test Results to Be Sent to the FTP Server
This section describes how to configure the system to send test results to the FTP server to avoid
loss of test results in the event that the NMS does not poll the test result in time.
7.37 Configuring a Threshold for the NQA Alarm
This section describes how to set an alarm threshold for test results. When the number of test
results exceeds the threshold, a trap message is sent to the NMS for notification.
7.38 Configuring a VPLS MFIB Ping to Check the VPLS Network
This section describes how to configure a VPLS MFIB ping test to check the connectivity of the
VPLS network.
7.39 Configuring a MAC Ping and Trace Test
MAC ping and trace tests can detect the connectivity of a VLAN network and a VPLS network.
7.40 Configuring GMAC Ping and GMAC Trace to Detect the Connectivity of a VLAN Network
This section describes how to configure Global MAC (GMAC) ping and GMAC trace to detect
the connectivity of a VLAN network. In addition to connectivity detection and fault location,
GMAC ping and GMAC trace can provide the delay on the network.
7.41 Configuring GMAC Ping and GMAC Trace to Detect the Connectivity of a VPLS Network
This section describes how to configure GMAC ping and GMAC trace to detect the connectivity
of a VPLS network. In addition to the connectivity detection and fault location, GMAC ping
and GMAC trace can detect the delay on the network.
7.42 Configuring VPLS PW Ping and VPLS PW Trace Test Instances
7.43 Configuring a VPLS MFIB Trace to Check the VPLS Network
7.44 Configuring a VPLS MAC Ping Test
This section describes how to configure an NQA VPLS MAC ping test.
7.45 Configuring a VPLS MAC Trace Test
This section describes how to configure an NQA VPLS MAC trace test.
7.46 Maintaining NQA
This section describes how to maintain an NQA test instance. You can restart the test instance
and clear the statistics on the test result to maintain a test instance.
7.47 NQA Configuration Examples
This section provides examples for configuring NQA and illustrates the networking
requirements, configuration roadmap, and configuration notes. You can better understand the
configuration procedures with the help of the configuration flowchart.
With the development of value-added services, users and carriers demand higher Quality of
Service (QoS). As voice over IP and video over IP services are deployed, carriers and users
increasingly sign Service Level Agreements (SLAs) to guarantee QoS.
To provide users with the committed bandwidth, network operators need to collect statistics
on the delay, jitter, and packet loss of devices. This helps them analyze network performance
in time.
The NE80E/40E provides Network Quality Analysis (NQA) to meet the preceding requirements.
NQA measures the performance of each protocol running on the network and helps network
operators collect network running indexes, such as the total HTTP delay, TCP connection delay,
file transfer rate, FTP connection delay, Domain Name System (DNS) resolution delay, and
DNS resolution error ratio. Based on these indexes, network operators can provide users with
services of different grades and charge them accordingly.
By sending an Internet Control Message Protocol (ICMP) Echo-Request packet from the local
end and expecting an ICMP Echo-Reply packet from the specified destination, the Ping program
tests the round-trip time (RTT) of an ICMP packet. In addition to testing the RTT of an ICMP
packet between the local end and the destination, NQA can detect whether network services, such
as TCP, UDP, DHCP, FTP, HTTP, and the Simple Network Management Protocol (SNMP), are
enabled, and can test the response time of each service.
Unlike the Ping program, NQA does not display the RTT or timeout of each packet on the
terminal in real time. Test results are displayed only when you run the display nqa results
command after a test is complete.
You can also use the Network Management System (NM Station) to set NQA test parameters
and start NQA tests.
You need to create NQA test instances on NQA clients. Each test instance has an administrator
name and an operation tag as unique identification.
In the test instance view, configure the related test parameters. Note that some parameters apply
only to certain test types, whereas others apply to all test types.
NQA Server
In most types of tests, you need to configure only the NQA clients. In TCP, UDP, and Jitter tests,
however, you must configure the NQA server.
An NQA server processes the test packets received from the clients. As shown in Figure 7-2,
the NQA server responds to the test request packet received from the client through the
monitoring function.
Figure 7-2 Relationship between the NQA client and the NQA server
You can create multiple TCP or UDP monitoring services on an NQA server. Each monitoring
service corresponds to a specific destination address and a port number. The destination address
and port number can be repeatedly specified.
After creating a test instance and configuring the related parameters, start the NQA test by using
the start command, and then view the test results by using the display nqa results command.
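The workflow described above can be condensed into the following sketch. The administrator name admin, the tag test, and the destination address 10.1.1.2 are illustrative values only; the commands themselves appear in the procedures later in this chapter, and lines beginning with # are annotations, not CLI input.

```
# Create and identify the test instance by administrator name and tag.
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin test
# Choose a test type and a destination.
[HUAWEI-nqa-admin-test] test-type icmp
[HUAWEI-nqa-admin-test] destination-address ipv4 10.1.1.2
# Start the test immediately.
[HUAWEI-nqa-admin-test] start now
[HUAWEI-nqa-admin-test] quit
# Results are shown only on request, after the test completes.
[HUAWEI] display nqa results test-instance admin test
```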
Applicable Environment
An ICMP test provides a function similar to that of the ping command, but its output is more detailed.
Pre-configuration Tasks
Before configuring the ICMP test, configure reachable routes between the NQA client and the
tested device.
Data Preparation
To configure the ICMP test, you need the following data.
No. Data
2 Destination IP address
3 (Optional) Virtual Private Network (VPN) instance name, source interface that sends
test packets, source IP address, size of the Echo-Request packets, TTL value, ToS,
padding character, interval for sending test packets, and percentage of the failed NQA
tests
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type icmp
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
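The steps above can be summarized in one sketch. The destination address 10.112.58.3 is taken from the sample output in this section; the instance names are illustrative, and lines beginning with # are annotations, not CLI input.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin test
[HUAWEI-nqa-admin-test] test-type icmp
[HUAWEI-nqa-admin-test] destination-address ipv4 10.112.58.3
# Start after a 10-second delay, as described in the start delay step above.
[HUAWEI-nqa-admin-test] start delay seconds 10
```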
Prerequisites
The configurations of the ICMP Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records of only the last
five test results.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the following information is displayed, the test is
successful.
l testflag is inactive
l The test is finished
l Completion:success
For the ICMP test, you can also view the minimum time, maximum time, and round-trip time
(RTT).
<HUAWEI> display nqa results
NQA entry(admin, test) :testflag is inactive ,testtype is icmp
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:10.112.58.3
Min/Max/Average Completion Time: 2/5/3
Sum/Square-Sum Completion Time: 9/33
Last Good Probe Time: 2010-06-21 15:33:09.2
Lost packet ratio: 0 %
Applicable Environment
To obtain the following information, you can create an NQA DHCP test:
Pre-configuration Tasks
Before configuring the DHCP test, complete the following tasks:
Data Preparation
To configure the DHCP test, you need the following data.
No. Data
3 (Optional) Timeout period of the test packets and percentage of the failed NQA tests
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type dhcp
The source interface that sends DHCP Request packets is configured.
The specified source interface can be an Ethernet interface connected to the DHCP server, an
Eth-Trunk interface, a Virtual-Ethernet interface, or a VLANIF interface.
Step 5 (Optional) Run the following commands to configure other parameters for the DHCP test. For
detailed parameter configurations, see the chapter Configuring Universal NQA Test
Parameters
l To set the timeout period of the NQA test, run the timeout time command.
NOTE
For the DHCP test, the time taken to wait for a response to a probe packet may reach 10 seconds. By
default, the timeout period is 15 seconds. You are advised to set the timeout period to longer than 10
seconds.
l To set the percentage of the failed NQA test, run the fail-percent percent command.
Step 6 Run:
start
----End
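A minimal DHCP test sketch, assembled from the steps above. The instance names are illustrative; the source-interface command name is an assumption for the elided source-interface step, and lines beginning with # are annotations, not CLI input.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin dhcp
[HUAWEI-nqa-admin-dhcp] test-type dhcp
# Interface facing the DHCP server (command name assumed).
[HUAWEI-nqa-admin-dhcp] source-interface gigabitethernet 1/0/0
# The DHCP response can take up to 10 seconds; keep the timeout above that.
[HUAWEI-nqa-admin-dhcp] timeout 15
[HUAWEI-nqa-admin-dhcp] start now
```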
Prerequisites
The configurations of the DHCP Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records of only the last
five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
l testflag is inactive
l The test is finished
l Completion:success
For the DHCP test, you can also view the following statistics in the extended result:
Applicable Environment
In an FTP download test, the local device functions as an NQA FTP client, intending to download
the specified file from an FTP server.
The test result contains statistics about each FTP phase, including the time to set up an FTP
control connection and the time to transport the data.
Pre-configuration Tasks
Before configuring the FTP download test, complete the following tasks:
l Configuring the FTP user name and password and the login directory
l Configuring routes between the NQA FTP client and the FTP server
Data Preparation
To configure the FTP download test, you need the following data.
No. Data
3 (Optional) Source IP address of the FTP operation and VPN instance name and source
and destination port numbers of the FTP operation
Context
Do as follows on the NQA client (FTP client):
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type ftp
NOTE
During the FTP test, select a file with a relatively small size for the test. If the file is large, the test may fail
because of timeout.
Step 10 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
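A hedged sketch of an FTP download test covering the elided steps. The addresses, file name, and credentials are illustrative; the ftp-operation, ftp-username, ftp-password, and ftp-filename command names are assumptions (only ftp-operation put appears verbatim in this guide), and lines beginning with # are annotations, not CLI input.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin ftpget
[HUAWEI-nqa-admin-ftpget] test-type ftp
[HUAWEI-nqa-admin-ftpget] destination-address ipv4 10.2.2.2
# Download operation and credentials (command names assumed).
[HUAWEI-nqa-admin-ftpget] ftp-operation get
[HUAWEI-nqa-admin-ftpget] ftp-username user1
[HUAWEI-nqa-admin-ftpget] ftp-password pass1
# Keep the file small so the test does not time out.
[HUAWEI-nqa-admin-ftpget] ftp-filename test.txt
[HUAWEI-nqa-admin-ftpget] start now
```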
Prerequisites
The configurations of the FTP Download Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records of only the last
five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
Applicable Environment
In an FTP upload test, the local device functions as an FTP client, intending to upload the
specified file to an FTP server.
The test result contains the statistics about each FTP phase, including the time to set up an FTP
control connection and the time to transport the data.
In an FTP upload test, you can specify the file to be uploaded or the bytes to be uploaded. If
certain bytes are specified, the FTP client then automatically generates the test files for
uploading.
Pre-configuration Tasks
Before configuring the FTP upload test, complete the following tasks:
l Configuring the FTP user name and password and the login directory
l Configuring routes between the NQA client and the FTP server
Data Preparation
To configure the FTP upload test, you need the following data.
No. Data
No. Data
4 (Optional) Source IP address of the FTP operation and VPN instance name and source
and destination port numbers of the FTP operation
Context
Do as follows on the NQA client (FTP client):
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type ftp
Step 6 Run:
ftp-operation put
l If no file path is specified, the system searches for the file in the current path. If the specified file
name does not exist, a file is created with the specified name, and the size of the file is set to
1 MB.
l The file name cannot contain characters such as ~, *, /, \, ', ", but the file path can contain these
characters.
l The file name can contain an extension but cannot consist of only an extension, such as .txt.
l To upload a file of a specified size, run the ftp-filesize size command. The client then
automatically creates a file named "nqa-ftp-test.txt" for uploading.
NOTE
During the FTP test, select a file with a relatively small size. If the file is large, the test may fail because
of timeout.
Step 10 Run:
start
----End
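A sketch of the upload variant, using the documented ftp-operation put and ftp-filesize commands. The addresses and the 100 (KB assumed) file size are illustrative, the destination-address step is taken from the common NQA procedure, and lines beginning with # are annotations, not CLI input.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin ftpput
[HUAWEI-nqa-admin-ftpput] test-type ftp
[HUAWEI-nqa-admin-ftpput] destination-address ipv4 10.2.2.2
[HUAWEI-nqa-admin-ftpput] ftp-operation put
# Upload an auto-generated file instead of naming one; the client
# creates "nqa-ftp-test.txt" itself (size unit assumed to be KB).
[HUAWEI-nqa-admin-ftpput] ftp-filesize 100
[HUAWEI-nqa-admin-ftpput] start now
```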
Prerequisites
The configurations of the FTP Upload Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records of only the last
five tests.
Procedure
Step 1 Run the display nqa results command to view the test results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
l CtrlConnTime
l DataConnTime
l SumTime
<HUAWEI> display nqa results
NQA entry(admin, ftp) :testflag is inactive ,testtype is ftp
1 . Test 1 result The test is finished
SendProbe:1 ResponseProbe:1
Completion :success RTD OverThresholds number: 0
MessageBodyOctetsSum: 448 Stats errors number: 0
Operation timeout number: 0 System busy operation number:0
Drop operation number:0 Disconnect operation number: 0
CtrlConnTime Min/Max/Average: 438/438/438
DataConnTime Min/Max/Average: 218/218/218
SumTime Min/Max/Average: 656/656/656
Average RTT:380
Lost packet ratio: 0 %
Applicable Environment
Through the NQA HTTP test, you can obtain the response speed in three phases:
l Time of DNS resolution: the period from the time the client sends a DNS request to the
resolver to resolve the name of the HTTP server to an IP address to the time the DNS
response packet containing the IP address is returned.
l Time to set up a TCP connection: the time taken by the client to set up a TCP connection
with the HTTP server through a three-way handshake.
l Transaction time: the period from the time the client sends a Get or Post packet to the
HTTP server to the time the response from the HTTP server reaches the client.
Pre-configuration Tasks
Before configuring the HTTP test, complete the following tasks:
Data Preparation
To configure the HTTP test, you need the following data.
No. Data
Context
Do as follows on the NQA client (HTTP client):
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
test-type http
Step 4 Run:
destination-address ipv4 ip-address
Step 5 (Optional) Perform the following as required to configure other parameters for the HTTP test
( For detailed parameter configurations, see the chapter Configuring Universal NQA Test
Parameters ):
l To configure the VPN instance to be tested, run the vpn-instance vpn-instance-name
command.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the source port, run the source-port port-number command.
l To configure the destination port, run the destination-port port-number command.
l To configure the percentage of the failed NQA HTTP tests, run the fail-percent percent
command.
l To configure the NQA test packet to be sent without searching the routing table, run the
sendpacket passroute command.
Step 6 Run:
Step 7 Run:
http-url deststring [ verstring ]
The web page to be visited and the HTTP version are configured.
NOTE
If the HTTP version is not specified, HTTP 1.0 is used by default. HTTP 1.1 can be specified
through configuration.
Step 8 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
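The HTTP procedure above can be sketched as follows. The destination address 100.2.1.200 is taken from the sample output in this section; the URL /index.html is illustrative, the http-operation get command is an assumption for the elided Step 6, and lines beginning with # are annotations, not CLI input.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin http
[HUAWEI-nqa-admin-http] test-type http
[HUAWEI-nqa-admin-http] destination-address ipv4 100.2.1.200
# Operation type (Get) is assumed; this guide elides the step.
[HUAWEI-nqa-admin-http] http-operation get
# Web page to visit; HTTP 1.0 is used when no version is given.
[HUAWEI-nqa-admin-http] http-url /index.html
[HUAWEI-nqa-admin-http] start now
```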
Prerequisites
The configurations of the HTTP Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records of only the last
five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
l DNSRTT: indicates the time taken by the DNS query.
l TCPConnectRTT: indicates the time taken to establish the TCP connection.
l TransactionRTT and RTT: indicate the durations of data transmission and the entire HTTP
test, respectively.
<HUAWEI> display nqa results
NQA entry(admin, http) :testflag is inactive ,testtype is http
1 . Test 1 result The test is finished
SendProbe:3 ResponseProbe:3
Completion:success RTD OverThresholdsnumber: 0
MessageBodyOctetsSum: 411 TargetAddress: 100.2.1.200
DNSQueryError number: 0 HTTPError number: 0
TcpConnError number : 0 System busy operation number:0
DNSRTT Sum/Min/Max:0/0/0 TCPConnectRTT Sum/Min/Max: 6/1/4
TransactionRTT Sum/Min/Max: 3/1/1
RTT Sum/Min/Max/Avg: 7/1/5/2
DNSServerTimeout:0 TCPConnectTimeout:0 TransactionTimeout: 0
Lost packet ratio:0%
Applicable Environment
The DNS test is performed to obtain the speed at which the specified domain name is resolved
to an IP address.
Pre-configuration Tasks
Before configuring the DNS test, complete the following tasks:
l Configuring the DNS server
l Configuring routes between the NQA client and the DNS server
Data Preparation
To configure the DNS test, you need the following data.
No. Data
Context
Do as follows on the NQA client (DNS client):
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 4 Run:
test-type dns
Step 6 Run:
destination-address url urlstring
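A hedged DNS test sketch built from the fragments above. The server address 10.82.55.191 is taken from the sample output in this section; the domain name is illustrative, the dns-server command name is an assumption (the guide shows only test-type dns and destination-address url), and lines beginning with # are annotations, not CLI input.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin dns
[HUAWEI-nqa-admin-dns] test-type dns
# Domain name to resolve; "www.example.com" is illustrative.
[HUAWEI-nqa-admin-dns] destination-address url www.example.com
# DNS server address (command name assumed).
[HUAWEI-nqa-admin-dns] dns-server ipv4 10.82.55.191
[HUAWEI-nqa-admin-dns] start now
```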
Prerequisites
The configurations of the DNS Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records of only the last
five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
<HUAWEI> display nqa results
NQA entry(t, t) :testflag is inactive ,testtype is dns
1 . Test 1 result The test is finished
Send operation times: 1 Receive response times: 1
Completion:success RTD OverThresholds number: 0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:10.82.55.191
Min/Max/Average Completion Time: 4/4/4
Sum/Square-Sum Completion Time: 4/16
Last Good Probe Time: 2010-06-21 15:40:12.6
Lost packet ratio: 0 %
Applicable Environment
An NQA Traceroute test can provide functions similar to those provided by the tracert
command, but outputs more detailed information.
Pre-configuration Tasks
Before configuring a traceroute test, configure reachable routes between the NQA client and the
device to be tested.
Data Preparation
To configure a traceroute test, you need the following data.
No. Data
2 Destination IP address
3 (Optional) VPN instance name, maximum hops, initial TTL and maximum TTL value
of the packet, and source IP address and destination port of the packet
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type trace
l To configure the initial TTL and maximum TTL values of a packet, run:
tracert-livetime first-ttl first-ttl max-ttl max-ttl
l To configure the destination port number, run:
destination-port port-number
l To configure NQA test packets to be sent without searching the routing table, run:
sendpacket passroute
Step 6 Run:
start
----End
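A traceroute test sketch using the commands above. The destination address and TTL range are illustrative, and lines beginning with # are annotations, not CLI input.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin trace
[HUAWEI-nqa-admin-trace] test-type trace
[HUAWEI-nqa-admin-trace] destination-address ipv4 10.3.3.3
# Probe TTL range: start at 1, stop after 30 hops.
[HUAWEI-nqa-admin-trace] tracert-livetime first-ttl 1 max-ttl 30
[HUAWEI-nqa-admin-trace] start now
```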
Prerequisites
The configurations of the traceroute test are complete.
Context
NOTE
NQA test results cannot be displayed automatically on the terminal. You need to run the display nqa
results command to view test results. By default, the command output contains the records of only
the last five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the statistics about each hop are displayed, it means
that the traceroute test is successful.
<HUAWEI> display nqa results
NQA entry(t, t) :testflag is inactive ,testtype is trace
1 . Test 1 result The test is finished
Applicable Environment
Through the SNMP Query test, you can obtain the statistics of the communication between hosts
and SNMP agents.
Pre-configuration Tasks
Before configuring the SNMP Query test, complete the following tasks:
Data Preparation
To configure the SNMP query test, you need the following data.
No. Data
3 (Optional) Source IP addresses and source port numbers of test packets, interval for
sending test packets, and percentage of the failed NQA tests
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type snmp
Step 4 Run:
destination-address ipv4 ip-address
The destination IP address, that is, the IP address of the SNMP agent, is configured.
NOTE
The SNMP function must be enabled on the destination host; otherwise, the destination host fails to receive
Echo packets.
Step 5 (Optional) Perform the following as required to configure other parameters for the SNMP test
( For detailed parameter configurations, see the chapter Configuring Universal NQA Test
Parameters ):
l To configure the VPN instance to be tested, run the vpn-instance vpn-instance-name
command.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the source port number, run the source-port port-number command.
l To configure the interval for sending test packets, run the interval seconds interval
command.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
l To configure the NQA test packets to be sent without searching the routing table, run the
sendpacket passroute command.
Step 6 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
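The steps above can be combined into a minimal SNMP Query test configuration. This is a sketch only: the instance name, destination address, interval, and prompts are illustrative, and it assumes SNMP has already been enabled on the destination host.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin snmp
[HUAWEI-nqa-admin-snmp] test-type snmp
[HUAWEI-nqa-admin-snmp] destination-address ipv4 10.2.1.2
[HUAWEI-nqa-admin-snmp] interval seconds 5
[HUAWEI-nqa-admin-snmp] start now
```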
Prerequisites
The configurations of the SNMP Query Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records about only the last
five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
<HUAWEI> display nqa results
NQA entry(admin, snmp) :testflag is inactive ,testtype is snmp
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:0 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:10.2.1.2
Min/Max/Average Completion Time: 63/172/109
Sum/Square-Sum Completion Time: 329/42389
Last Good Probe Time: 2006-8-5 15:33:49.1
Lost packet ratio: 0 %
Applicable Environment
To obtain the time for the specified port to respond to a TCP connection request, you can create
an NQA TCP test instance.
Pre-configuration Tasks
Before configuring the TCP test, configure reachable routes between the NQA client and the
TCP server.
Data Preparation
To configure the TCP test, you need the following data.
No. Data
3 (Optional) Destination port numbers of the probe packets sent by the TCP client and
source IP addresses , source port numbers of test packets, interval for sending test
packets, and percentage of the failed NQA tests
Context
Do as follows on the NQA server (TCP server):
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa-server tcpconnect [ vpn-instance vpn-instance-name ] ip-address port-number
NOTE
Note that the IP address and port number monitored by the server should be consistent with those configured
on the client.
----End
Context
Do as follows on the NQA client (TCP client):
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type tcp
Step 4 Run:
destination-address ipv4 ip-address
Step 5 To configure the destination port number, run the destination-port port-number command.
Step 6 (Optional) Perform the following as required to configure other parameters for the TCP test ( For
detailed parameter configurations, see the chapter Configuring Universal NQA Test
Parameters ):
l To configure the VPN instance to be tested, run the vpn-instance vpn-instance-name
command.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the source port number, run the source-port port-number command.
l To configure the interval for sending test packets, run the interval seconds interval
command.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
l To configure the NQA test packets to be sent without searching the routing table, run the
sendpacket passroute command.
Step 7 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
The differences between the TCP Public tests and the TCP Private tests are as follows:
l The TCP Public tests do not require the destination port to be configured on the client.
Connection requests are initiated and sent to TCP port 7 of the destination address. The
server should monitor TCP port 7.
l The TCP Private tests require the destination port to be specified and the related monitoring
services to be enabled on the server.
----End
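The server-side and client-side steps above can be sketched as a single example. The addresses, port number, instance name, and prompts are illustrative, not from this document; the port configured with nqa-server tcpconnect must match the destination-port on the client.

```
# On the NQA server:
<HUAWEI> system-view
[HUAWEI] nqa-server tcpconnect 10.2.1.2 9000

# On the NQA client:
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin tcp
[HUAWEI-nqa-admin-tcp] test-type tcp
[HUAWEI-nqa-admin-tcp] destination-address ipv4 10.2.1.2
[HUAWEI-nqa-admin-tcp] destination-port 9000
[HUAWEI-nqa-admin-tcp] start now
```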
Prerequisites
The configurations of the TCP Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records about only the last
five tests.
Procedure
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
l Run the display nqa-server command to view the information about the NQA server.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
<HUAWEI> display nqa results
NQA entry(admin, tcp) :testflag is inactive ,testtype is tcp
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:0 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:10.2.1.2
Min/Max/Average Completion Time: 31/62/51
Sum/Square-Sum Completion Time: 155/8649
Last Good Probe Time: 2006-8-5 15:55:15.3
Lost packet ratio: 0 %
Run the display nqa-server command. The status of the NQA server is displayed.
<HUAWEI> display nqa-server
NQA Server Max: 5000 NQA Server Num: 1
NQA Concurrent TCP Server : 1 NQA Concurrent UDP Server: 0
NQA Concurrent ICMP Server : 0
Applicable Environment
To obtain the time for the specified port to respond to a UDP connection request, you can create
a UDP test instance.
Pre-configuration Tasks
Before configuring the UDP test, configure reachable routes between the NQA client and the
UDP server.
Data Preparation
To configure the UDP test, you need the following data.
No. Data
3 Destination IP address and the port of the probe packets sent by the UDP client
4 (Optional) Source IP addresses and source port numbers of test packets, interval for
sending test packets, and percentage of the failed NQA tests
Context
Do as follows on the NQA server (UDP server):
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa-server udpecho [ vpn-instance vpn-instance-name ] ip-address port-number
Note that the IP address and port number monitored by the server should be consistent with those
configured on the client.
----End
Context
Do as follows on the NQA client (UDP client):
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type udp
----End
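A minimal UDP test configuration might look as follows. This is a sketch only: the addresses, port number, and prompts are illustrative, and the destination-address, destination-port, and start steps on the client are assumed to follow the same pattern as in the TCP test.

```
# On the NQA server:
<HUAWEI> system-view
[HUAWEI] nqa-server udpecho 10.2.1.2 6000

# On the NQA client:
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin udp
[HUAWEI-nqa-admin-udp] test-type udp
[HUAWEI-nqa-admin-udp] destination-address ipv4 10.2.1.2
[HUAWEI-nqa-admin-udp] destination-port 6000
[HUAWEI-nqa-admin-udp] start now
```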
Prerequisites
The configurations of the UDP Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records about only the last
five tests.
Procedure
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
l Run the display nqa-server command to view the information about the NQA server.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
<HUAWEI> display nqa results
NQA entry(admin, udp) :testflag is inactive ,testtype is udp
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:10.2.1.2
Min/Max/Average Completion Time: 32/109/67
Sum/Square-Sum Completion Time: 203/16749
Last Good Probe Time: 2006-8-5 16:9:21.6
Lost packet ratio: 0 %
Run the display nqa-server command. If the status of the NQA server is displayed, it means
that the configuration succeeds.
<HUAWEI> display nqa-server
NQA Server Max: 5000 NQA Server Num: 1
NQA Concurrent TCP Server : 0 NQA Concurrent UDP Server: 1
NQA Concurrent ICMP Server : 0
Applicable Environment
The jitter time refers to the interval for receiving two adjacent packets minus the interval for
sending the two packets.
The process of a Jitter test is as follows:
1. The source sends a packet to the destination at a specified interval.
2. After receiving the packet, the destination adds a timestamp to the packet and returns it
to the source.
3. After receiving the returned packets, the source subtracts the interval for the source to send
two adjacent packets from the interval for the destination to receive the two packets and
then obtains the jitter time.
The maximum, minimum, and average jitter time calculated based on the information received
on the source can clearly show the network status.
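The subtraction in step 3 can be written compactly. If two adjacent packets are sent at times s1 and s2 and received at times r1 and r2, the jitter time is:

```latex
\text{jitter} = (r_2 - r_1) - (s_2 - s_1)
```

For example, packets sent at 0 ms and 20 ms and received at 5 ms and 28 ms give a jitter time of (28 - 5) - (20 - 0) = 3 ms.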
In a Jitter test, you can set the number of packets to be sent consecutively. Through this setting,
certain traffic can be simulated within a certain period. For example, if 3000 UDP packets are
sent at an interval of 20 milliseconds, G.711 traffic is simulated for one minute.
NOTE
To improve the test accuracy, you can configure the Network Time Protocol (NTP) on both the client and
the server.
Pre-configuration Tasks
Before configuring the Jitter test, configure reachable routes between the NQA client and the
NQA server.
Data Preparation
To configure the Jitter test, you need the following data.
No. Data
3 Destination IP addresses and port numbers of the probe packets sent by the UDP
client
4 (Optional) VPN instance name, source IP address and port number of the probe packet
sent by the UDP client, number of probe packets and test packets sent each time,
interval for sending probe packets and test packets, percentage of the failed NQA
tests, and version number carried in the Jitter packet
Context
Do as follows on the NQA server (Jitter server):
Procedure
Step 1 Run:
system-view
----End
Context
NOTE
The system supports the collection of the statistics about the maximum uni-directional transmission delay.
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 4 Run:
test-type jitter
The number of Jitter tests depends on the probe-count command. The number of test packets sent
during each test depends on the jitter-packetnum command. During the actual configuration, the
product of the number of tests and the number of test packets must be less than 3000.
l To configure the interval for sending test packets, run the interval { milliseconds interval |
seconds interval } command.
The shorter the interval for sending the Jitter test packets is, the faster the test is completed.
If the interval, however, is set to a very small value, the jitter statistics result may have a
greater error.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
l To send the NQA test packet without searching the routing table, run the sendpacket
passroute command.
l To configure a code type for an NQA Jitter simulated voice test case, run the jitter-codec
{ g711a | g711u | g729a } command.
This command is applied only to Jitter voice test cases.
l To configure the advantage factor for simulated voice test calculation, run the adv-factor
factor-value command.
This command is applied only to Jitter voice test cases.
Step 8 Run:
start
----End
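A minimal Jitter test configuration on the client might look as follows. This is a sketch only: the instance name, address, port, and prompts are illustrative. The values chosen respect the constraint that probe-count multiplied by jitter-packetnum must be less than 3000.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin jitter
[HUAWEI-nqa-admin-jitter] test-type jitter
[HUAWEI-nqa-admin-jitter] destination-address ipv4 10.2.1.2
[HUAWEI-nqa-admin-jitter] destination-port 6000
[HUAWEI-nqa-admin-jitter] probe-count 2
[HUAWEI-nqa-admin-jitter] jitter-packetnum 30
[HUAWEI-nqa-admin-jitter] interval milliseconds 20
[HUAWEI-nqa-admin-jitter] start now
```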
Prerequisites
The configurations of the Jitter Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains the records about only the last
five tests.
Procedure
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
l Run the display nqa-server command to view the information about the NQA server.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
<HUAWEI> display nqa results test-instance admin jitter
NQA entry(admin, jitter) :testflag is inactive ,testtype is jitter
1 . Test 1 result The test is finished
SendProbe:60 ResponseProbe:60
Completion:success RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:1/1/1/60 RTT Square Sum:60
NumOfRTT:60 Drop operation number:0
Operation sequence errors number:0 RTT Stats errors number:0
Applicable Environment
Jitter time refers to the interval for receiving two consecutive packets minus the interval for
sending the two packets.
The maximum, minimum, and average jitter time and the maximum unidirectional delay of the
packets from the source to the destination and from the destination to the source are calculated
according to the information received on the source. Based on these data, the network status is
clearly presented.
In the jitter test, you can set the number of packets to be sent consecutively in each test instance.
Through this setting, the actual traffic of a kind of packet during a time period can be simulated.
For example, if 3000 UDP packets are sent at an interval of 20 ms, the G.711 traffic within 1
minute can be simulated.
After the LPU is enabled to send packets, the obtained test results become more accurate.
Pre-configuration Tasks
Before configuring the jitter test, configure a reachable route between the NQA client and the
UDP server.
Data Preparation
To configure the jitter test, you need the following data.
No. Data
1 Administrator of the NQA test instance and name of the test instance
3 Destination IP address and destination port number of the probe packets sent from
the UDP client
4 (Optional) Name of a VPN instance, source IP address and port number of the
probe packets sent from the UDP client, number of test probes sent each time,
number of test packets sent each time, interval for sending test packets, percentage
of the failed NQA tests, and version number of jitter packets
Context
Do as follows on the NQA server:
Procedure
Step 1 Run:
system-view
----End
The system supports the collection of statistics about the maximum unidirectional delay in the jitter test.
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type jitter
The test type is set to jitter.
Step 4 Run:
hardware-based enable
The LPU is enabled to send test packets.
Step 7 (Optional) Run the following commands to configure other parameters for the jitter test:
l Run:
vpn-instance vpn-instance-name
The probe-count command is used to configure the number of times for the jitter test, and the
jitter-packetnum command is used to configure the number of test packets sent during each test.
In actual configuration, the product of the number of times for the jitter test and the number of
test packets must be less than 3000.
l Run:
interval { milliseconds interval | seconds interval }
The interval for sending test packets is configured.
l Run:
sendpacket passroute
The NQA test is configured to send packets without searching the routing table.
l Enter the system view.
Run:
nqa-jitter tag-version version-number
After the statistics of unidirectional packet loss is enabled, you can view the number of lost
packets on the link from the source to the destination, from the destination to the source, or
from unknown directions. Based on these statistics, the network administrator can easily
locate network faults and detect malicious attacks.
l Run:
timeout time
----End
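The steps above can be sketched as a minimal configuration for an LPU-based jitter test. The instance name, address, port, tag version, and prompts are illustrative, not from this document.

```
<HUAWEI> system-view
[HUAWEI] nqa-jitter tag-version 2
[HUAWEI] nqa test-instance admin jitter
[HUAWEI-nqa-admin-jitter] test-type jitter
[HUAWEI-nqa-admin-jitter] hardware-based enable
[HUAWEI-nqa-admin-jitter] destination-address ipv4 10.2.1.2
[HUAWEI-nqa-admin-jitter] destination-port 6000
[HUAWEI-nqa-admin-jitter] start now
```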
Prerequisites
The configurations of the Jitter Test Based on the Mechanism That the LPU Sends Packets
function are complete.
NOTE
NQA test results cannot be displayed automatically on the terminal. You should run the display nqa
results command to check the test results.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to check test
results on the NQA client.
----End
Example
If the jitter test succeeds, you can view the following information by running the display nqa
results command.
<HUAWEI> display nqa results test-instance admin jitter
NQA entry(admin, jitter) :testflag is inactive ,testtype is jitter
1 . Test 1 result The test is finished
SendProbe:60 ResponseProbe:60
Applicable Environment
The NQA LSP Ping test can be used to test the reachability of the following types of Label
Switched Paths (LSPs) and collect statistics about Link State Advertisement (LSA).
l LSP tunnels
l MPLS TE tunnels
l MPLS CR-LSP hotstandby tunnels
After the test parameters are configured and the test is started,
1. NQA creates an MPLS Echo-Request packet and adds the address 127.0.0.0/8 to the IP
packet header as the destination IP address. The packet is forwarded along the specified
LSP in the MPLS network.
2. The egress monitors port 3503 that sends Echo packets.
3. The ingress collects the test results based on the received Echo packets.
Pre-configuration Tasks
Before configuring the LSP Ping test, complete one of the following configurations:
l An LSP tunnel
l An MPLS TE tunnel
l An MPLS CR-LSP hotstandby tunnel
Data Preparation
To configure the LSP Ping test, you need the following data.
No. Data
2 l For the LSP tunnel: destination IP address and mask of the LSP Ping test
l For the MPLS TE tunnel: interface number of the TE tunnel
l For the MPLS CR-LSP hotstandby tunnels: interface number of the TE tunnel
3 (Optional) Parameters of the LSP Ping test, including the response mode of the Echo
packet, packet size, TTL, LSP EXP value, padding character, timeout period of the
packet, probe times, test interval, and percentage of the failed NQA tests
7.14.2 Configuring the LSP Ping Test Parameters for the LDP
Tunnel
Before performing an LDP LSP ping test, you need to set parameters for the LSP ping test.
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lspping
Step 4 Run:
lsp-type ipv4
Step 5 Run:
destination-address ipv4 ip-address [ lsp-masklen masklen | lsp-loopback loopback-address ]*
Step 6 (Optional) Perform the following as required to configure other parameters for the LSP Ping
test:
l To configure a protocol used by the LSP ping test, run the lsp-version { rfc4379 | draft6 }
command.
l To configure the next-hop IP address in the scenario where load balancing is enabled on the
initiator of the LSP ping test, run the lsp-nexthop nexthop-ip-address command.
NOTE
The next-hop IP address can be configured only when lsp-type is IPv4 and lsp-version is RFC 4379.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Ping test, if the lsp-replymode no-reply command is configured, the test
result displays that the test fails regardless of whether the test, actually, is successful or fails. If the test
is successful, the test result also displays the number of the timeout packets. If the test fails, the test
result displays the number of the discarded packets.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the packet size, run the datasize size command.
NOTE
The sum of datasize and the size of the packet header should be less than the MTU of the interface;
otherwise, the test may fail.
l To configure the maximum TTL value of the packet, run the ttl number command.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure the padding character of the packet, run the datafill fillstring command.
l To configure the interval for sending test packets, run the interval seconds interval
command.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
Step 7 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
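A minimal LSP Ping test instance for an LDP tunnel might look as follows. This is a sketch only: the instance name, destination address, mask length, and prompts are illustrative, not from this document.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin lspping
[HUAWEI-nqa-admin-lspping] test-type lspping
[HUAWEI-nqa-admin-lspping] lsp-type ipv4
[HUAWEI-nqa-admin-lspping] destination-address ipv4 100.1.1.200 lsp-masklen 32
[HUAWEI-nqa-admin-lspping] start now
```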
7.14.3 Configuring the LSP Ping Test Parameters for the MPLS TE
Tunnel
Before performing the TE LSP ping test, you need to set parameters for a TE LSP ping test.
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lspping
Step 4 Run:
lsp-type te
Step 5 Run:
lsp-tetunnel tunnel interface-number
Step 6 (Optional) Perform the following as required to configure other parameters for the LSP Ping
test:
l To configure a protocol used by the LSP ping test, run the lsp-version { rfc4379 | draft6 }
command.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Ping test, if the lsp-replymode no-reply command is configured, the test
result displays that the test fails regardless of whether the test, actually, is successful or fails. If the test
is successful, the test result also displays the number of the timeout packets. If the test fails, the test
result displays the number of the discarded packets.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the packet size, run the datasize size command.
NOTE
The sum of the data size and the size of the packet header must be less than the MTU of the interface;
otherwise, the test may fail.
l To configure the maximum TTL value of the packet, run the ttl number command.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure the padding character of the packet, run the datafill fillstring command.
l To configure the interval for sending test packets, run the interval { milliseconds interval |
seconds interval } command.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
Step 7 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
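A minimal LSP Ping test instance for an MPLS TE tunnel might look as follows. This is a sketch only: the instance name, tunnel interface number, and prompts are illustrative, not from this document.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin lspping
[HUAWEI-nqa-admin-lspping] test-type lspping
[HUAWEI-nqa-admin-lspping] lsp-type te
[HUAWEI-nqa-admin-lspping] lsp-tetunnel tunnel 1/0/0
[HUAWEI-nqa-admin-lspping] start now
```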
7.14.4 Configuring the LSP Ping Test Parameters for the CR-LSP
Hotstandby Tunnel
Before performing the LSP ping test, you need to set LSP ping test parameters for CR-LSP tunnels
in hot standby mode.
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lspping
The TE tunnel interface to be pinged is specified and the CR-LSP hotstandby tunnel is set to be
tested.
Step 6 (Optional) Perform the following as required to configure other parameters for the LSP Ping
test:
l To configure a protocol used by the LSP ping test, run the lsp-version { rfc4379 | draft6 }
command.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Ping test, if the lsp-replymode no-reply command is configured, the test
result displays that the test fails regardless of whether the test, actually, succeeds or fails. If the test
succeeds, the test result shows the number of timeout packets. If the test fails, the test result shows the
number of discarded packets.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the packet size, run the datasize size command.
NOTE
The sum of the data size and the size of the packet header must be less than the MTU of the interface;
otherwise, the test may fail.
l To configure the maximum TTL value of the packet, run the ttl number command.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure the padding character of the packet, run the datafill fillstring command.
l To configure the interval for sending test packets, run the interval seconds interval
command.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
Step 7 Run:
start
----End
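The configuration for the CR-LSP hotstandby case might be sketched as follows. This is an assumption-heavy sketch: the instance name, tunnel interface number, and prompts are illustrative, and the hot-standby keyword on the lsp-tetunnel command is an assumption, since the exact command for selecting the hotstandby CR-LSP is not shown in the steps above.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin lspping
[HUAWEI-nqa-admin-lspping] test-type lspping
[HUAWEI-nqa-admin-lspping] lsp-type te
[HUAWEI-nqa-admin-lspping] lsp-tetunnel tunnel 1/0/0 hot-standby
[HUAWEI-nqa-admin-lspping] start now
```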
Prerequisites
The configurations of the LSP Ping Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
[ test-instance admin-name test-name ] command to view test results. By default, the command output
contains the records about only the last five tests.
Procedure
l Run the display nqa results command to view the test results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
l Statistics about errors
– Number of unroutable connections
– Number of wrong sequence numbers
– Timeout times of the test packets
l History statistics of each test packet
– Timestamp added when each test packet is sent
– Timestamp added when each test packet is received
– Packet status displayed on the NQA client
l Statistics of results of each test
– Number of successful tests
– Sum of the response time of all tests
Run the display nqa results command to view the test results on the NQA client.
<HUAWEI> display nqa results test-instance admin lspping
NQA entry(admin, test) :testflag is inactive ,testtype is lspping
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:100.1.1.200
Min/Max/Average Completion Time: 4/5/4
Sum/Square-Sum Completion Time: 13/57
Last Good Probe Time: 2007-11-19 19:46:28.8
Lost packet ratio: 0 %
Applicable Environment
The NQA LSP Jitter test is performed to check the reachability of static LSP, LDP LSP, and TE
tunnels. After receiving a packet from the source, the destination calculates the maximum,
minimum, and average jitter time of the packet transmitted from the source to the destination.
This clearly reflects the status of the MPLS network.
Pre-configuration Tasks
Before configuring the LSP Jitter test, configure an LSP tunnel or an MPLS TE tunnel.
Data Preparation
To configure the LSP Jitter test, you need the following data.
No. Data
2 l For the LSP tunnel: destination IP address and mask of the LSP Ping test
l For the MPLS TE tunnel: interface number of the TE tunnel
3 (Optional) Parameters of the LSP Jitter test, including the response mode of the Echo
packet, packet size, TTL, LSP EXP value, padding character, timeout period of the
packet, probe times, and test interval
7.15.2 Configuring the LSP Jitter Test Parameters for the LDP
Tunnel
This part describes how to set parameters for an LDP LSP jitter test.
Context
Do as follows on the ingress of an LSP tunnel:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lspjitter
l To configure the next-hop IP address in the scenario where load balancing is enabled on the
initiator of the LSP ping test, run the lsp-nexthop nexthop-ip-address command.
NOTE
The next-hop IP address can be configured only when lsp-type is IPv4 and lsp-version is RFC 4379.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Ping test, if the lsp-replymode no-reply command is configured, the test
result indicates a failure regardless of whether the test actually succeeds or fails. If the test
succeeds, the result also shows the number of timeout packets; if the test fails, the result shows
the number of discarded packets.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the packet size, run the datasize size command.
l To configure the maximum TTL value of the packet, run the ttl number command.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure the padding character of the packet, run the datafill fillstring command.
l To configure the interval for sending the test packets, run the interval seconds interval
command.
NOTE
The minimum interval for sending test packets is one second and the maximum interval is 60 seconds.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
Step 7 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
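Putting the preceding steps together, a minimal LDP LSP jitter test instance might look as follows. This is a sketch, not a configuration from this guide: the instance name (admin lspjitter), destination address 10.1.1.9, and the view prompts are illustrative assumptions.

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin lspjitter
[HUAWEI-nqa-admin-lspjitter] test-type lspjitter
[HUAWEI-nqa-admin-lspjitter] lsp-type ipv4
[HUAWEI-nqa-admin-lspjitter] destination-address ipv4 10.1.1.9 lsp-masklen 32
[HUAWEI-nqa-admin-lspjitter] interval seconds 2
[HUAWEI-nqa-admin-lspjitter] start now
```

After the test completes, the display nqa results test-instance admin lspjitter command would show the jitter statistics for this instance.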
7.15.3 Configuring the LSP Jitter Test Parameters for the MPLS TE
Tunnel
This part describes how to set parameters for a TE LSP jitter test.
Context
Do as follows on the ingress of an MPLS TE tunnel:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lspjitter
Step 4 Run:
lsp-type te
Step 5 Run:
lsp-tetunnel tunnel interface-number
Step 6 (Optional) Perform the following as required to configure other parameters for the MPLS TE
Jitter test:
l To configure a protocol used by the LSP ping test, run the lsp-version { rfc4379 | draft6 }
command.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Ping test, if the lsp-replymode no-reply command is configured, the test
result indicates a failure regardless of whether the test actually succeeds or fails. If the test
succeeds, the result also shows the number of timeout packets; if the test fails, the result shows
the number of discarded packets.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the packet size, run the datasize size command.
l To configure the maximum TTL value of the packet, run the ttl number command.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure the padding character of the packet, run the datafill fillstring command.
l To configure the interval for sending the test packets, run the interval { milliseconds
interval | seconds interval } command.
NOTE
The minimum interval for sending test packets is one second and the maximum interval is 60 seconds.
l To configure the percentage of the failed NQA tests, run the fail-percent percent command.
Step 7 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
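As a sketch of the preceding steps for an MPLS TE tunnel, an LSP jitter test instance might be configured as follows. The instance name, tunnel number 1, and prompts are hypothetical values, not taken from this guide:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin tejitter
[HUAWEI-nqa-admin-tejitter] test-type lspjitter
[HUAWEI-nqa-admin-tejitter] lsp-type te
[HUAWEI-nqa-admin-tejitter] lsp-tetunnel tunnel 1
[HUAWEI-nqa-admin-tejitter] start now
```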
Prerequisites
The configurations of the LSP Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records about only the last
five tests.
Procedure
Step 1 Run the display nqa results command to view the test results on the NQA client.
----End
Example
For the LSP Jitter test, run the display nqa results command. If the test is successful, the
following is displayed.
Applicable Environment
The NQA LSP Trace test can be used to test the tunnel nodes of the following types of LSPs
and collect statistics about LSA.
l LSP tunnels
l MPLS TE tunnels
l MPLS CR-LSP hotstandby tunnels
After the test parameters are configured and the test is started:
l NQA creates a UDP MPLS Echo-Request packet, uses an address in the range 127.0.0.0/8
as the destination IP address in the IP packet header, and searches for the related LSP.
Echo-Request packets should contain a Downstream Mapping Type-Length-Value (TLV)
that carries information about the downstream node of the current LSP node, such as
the IP address of the next hop and the outgoing label.
For the MPLS TE tunnel, you can specify a tunnel interface for sending the MPLS Echo-
Request packet so that the related Constraint-based Routed Label Switched Path (CR-LSP)
can be obtained.
l The TTL value of the first Trace Echo-Request packet is 1. The packet is forwarded along
the specified LSP in the MPLS network. An MPLS Echo-Reply packet is returned when
the TTL expires.
l The sender continues to send Echo-Request packets with the gradually increased TTL
value. When all Label Switching Routers (LSRs) along the LSP return Echo packets, the
Trace process is completed.
l The sender collects the test results based on the received Echo packets.
Pre-configuration Tasks
Before configuring the LSP Trace test, configure one of the following:
l An LSP tunnel
l An MPLS TE tunnel
l An MPLS CR-LSP hotstandby tunnel
Data Preparation
To configure the LSP Trace test, you need the following data.
No. Data
1 Administrator of the NQA test instance and name of the test instance
2 l For the LSP tunnel: destination IP address and mask of the LSP Ping test
l For the MPLS TE tunnel: interface number of the TE tunnel
l For the MPLS CR-LSP hotstandby tunnel: interface number of the TE tunnel
3 (Optional) Parameters of the LSP Ping test, including the response mode of the Echo
packet, packet size, TTL, LSP EXP value, padding character, timeout period of the
packet, probe times, test interval, and percentage of the failed NQA tests
7.16.2 Configuring the LSP Trace Parameters for the LDP Tunnel
This part describes how to set parameters for an LDP LSP Trace test.
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lsptrace
Step 4 Run:
lsp-type ipv4
Step 5 Run:
destination-address ipv4 ip-address { lsp-masklen masklen | lsp-loopback loopback-
address }*
Step 6 (Optional) Perform the following as required to configure other parameters for the LSP Trace
test:
l To configure a protocol used by the LSP ping test, run the lsp-version { rfc4379 | draft6 }
command.
l To configure the next-hop IP address in the scenario where load balancing is enabled on the
initiator of the LSP ping test, run the lsp-nexthop nexthop-ip-address command.
NOTE
The next-hop IP address can be configured only when lsp-type is IPv4 and lsp-version is RFC 4379.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Trace test, if the lsp-replymode no-reply command is configured, the test
result indicates a failure regardless of whether the test actually succeeds or fails. If the test
succeeds, the result also shows the number of timeout packets; if the test fails, the result shows
the number of discarded packets.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure after how many hops the test is considered failed, run the tracert-
hopfailtimes times command.
l To configure the initial and the maximum TTL values of the packet, run the tracert-
livetime first-ttl first-ttl max-ttl max-ttl command.
Step 7 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
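Combining the preceding steps, a minimal LDP LSP Trace test instance might look as follows. The instance name (admin lsptrace), destination address 10.1.1.9, TTL range, and prompts are illustrative assumptions rather than values from this guide:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin lsptrace
[HUAWEI-nqa-admin-lsptrace] test-type lsptrace
[HUAWEI-nqa-admin-lsptrace] lsp-type ipv4
[HUAWEI-nqa-admin-lsptrace] destination-address ipv4 10.1.1.9 lsp-masklen 32
[HUAWEI-nqa-admin-lsptrace] tracert-livetime first-ttl 1 max-ttl 10
[HUAWEI-nqa-admin-lsptrace] start now
```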
7.16.3 Configuring the LSP Trace Test Parameters for the TE
Tunnel
This part describes how to set parameters for a TE LSP Trace test.
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lsptrace
Step 4 Run:
lsp-type te
Step 5 Run:
lsp-tetunnel tunnel interface-number
Step 6 (Optional) Perform the following as required to configure other parameters for the LSP Trace
test:
l To configure a protocol used by the LSP ping test, run the lsp-version { rfc4379 | draft6 }
command.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Trace test, if the lsp-replymode no-reply command is configured, the test
result indicates a failure regardless of whether the test actually succeeds or fails. If the test
succeeds, the result also shows the number of timeout packets; if the test fails, the result shows
the number of discarded packets.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure after how many hops a test is considered failed, run the tracert-hopfailtimes
times command.
l To configure the initial and the maximum TTL values of the packet, run the tracert-
livetime first-ttl first-ttl max-ttl max-ttl command.
Step 7 Run:
start
----End
7.16.4 Configuring the LSP Trace Test Parameters for the CR-LSP
Hotstandby Tunnel
This part describes how to set LSP Trace test parameters for CR-LSP hot standby tunnels.
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type lsptrace
Step 4 Run:
lsp-type te
Step 5 Run:
lsp-tetunnel tunnel interface-number hot-standby
The TE tunnel interface to be tracerouted is specified and the CR-LSP hotstandby tunnel is set
to be tested.
Step 6 (Optional) Perform the following as required to configure other parameters for the LSP Trace
test:
l To configure a protocol used by the LSP ping test, run the lsp-version { rfc4379 | draft6 }
command.
l To configure the source IP address, run the source-address ipv4 ip-address command.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
In a uni-directional LSP Ping test, if the lsp-replymode no-reply command is configured, the test
result indicates a failure regardless of whether the test actually succeeds or fails. If the test
succeeds, the result shows the number of timeout packets; if the test fails, the result shows the
number of discarded packets.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure after how many hops a test is considered failed, run the tracert-hopfailtimes
times command.
l To configure the initial and the maximum TTL values of the packet, run the tracert-
livetime first-ttl first-ttl max-ttl max-ttl command.
Step 7 Run:
start
The start command has several forms. You can choose one of the following forms as required:
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
l To perform the NQA test after a certain period of delay, run the start delay { seconds
second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.
----End
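As a sketch of the preceding steps, an LSP Trace test instance for a CR-LSP hot-standby tunnel might be configured as follows. The instance name, tunnel number 1, and prompts are hypothetical:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin hsbtrace
[HUAWEI-nqa-admin-hsbtrace] test-type lsptrace
[HUAWEI-nqa-admin-hsbtrace] lsp-type te
[HUAWEI-nqa-admin-hsbtrace] lsp-tetunnel tunnel 1 hot-standby
[HUAWEI-nqa-admin-hsbtrace] start now
```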
Prerequisites
The configurations of the LSP Traceroute Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records about only the last
five tests.
Procedure
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
l Statistics about errors
– Number of unroutable connections
– Number of wrong sequence numbers
– Timeout times of the test packets
l History statistics of each test packet
– Timestamp added when each test packet is sent
– Timestamp added when each test packet is received
– Packet status displayed on the NQA client
l Statistics of results of each test
– Number of successful tests
– Sum of the response time of all tests
– RTT square sum
– Minimum RTT and maximum RTT of the packet
– Destination IP address and the type of the destination IP address
– Number of the Echo packets and the sent packets
– Time when the last packet is received
<HUAWEI> display nqa results test-instance admin lsptrace
NQA entry(admin, lsptrace) :testflag is inactive ,testtype is lsptrace
1 . Test 1 result The test is finished
Applicable Environment
Jitter time refers to the interval between receiving two consecutive packets minus the interval
between sending these two packets. For example, if two packets are sent 20 ms apart but received
25 ms apart, the jitter time is 5 ms.
The maximum, minimum, and average jitter time and the maximum unidirectional delay of the
packets from the source to the destination and from the destination to the source are calculated
according to the information received on the source. Based on these data, the network status is
clearly presented.
In the jitter test, you can set the number of packets to be sent consecutively in each test instance.
Through this setting, the actual traffic of a kind of packet during a time period can be simulated.
The devices at the two ends of the tested link do not both need to be Huawei devices.
Pre-configuration Tasks
Before configuring an ICMP jitter test, configure a reachable route between the NQA client and
the server.
Data Preparation
To configure a jitter test, you need the following data.
No. Data
1 Administrator of the NQA test instance and name of the test instance
2 Destination IP address
3 (Optional) Name of a VPN instance, source IP address, number of test probes sent
each time, number of test packets sent each time, interval for sending test packets,
ratio of the failed NQA tests, and version number of jitter packets
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type icmpjitter
The test type is set to ICMP jitter.
The probe-count command is used to configure the number of times for the jitter test and the jitter-
packetnum command is used to configure the number of test packets to be sent during each test. In
actual configuration, the product of the number of times for the jitter test multiplied by the number of
test packets must be less than 3000.
l Run:
interval { milliseconds interval | seconds interval }
----End
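Combining the preceding steps, a minimal ICMP jitter test instance might look as follows. The instance name, destination address 10.2.2.9, and the probe and packet counts are illustrative assumptions:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin icmpjitter
[HUAWEI-nqa-admin-icmpjitter] test-type icmpjitter
[HUAWEI-nqa-admin-icmpjitter] destination-address ipv4 10.2.2.9
[HUAWEI-nqa-admin-icmpjitter] probe-count 3
[HUAWEI-nqa-admin-icmpjitter] jitter-packetnum 20
[HUAWEI-nqa-admin-icmpjitter] interval milliseconds 20
[HUAWEI-nqa-admin-icmpjitter] start now
```

With these sample values, 3 probes of 20 packets each send 60 packets in total, well below the 3000-packet product limit described above.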
Prerequisites
The configurations of the ICMP Jitter Test function are complete.
NOTE
NQA test results cannot be displayed automatically on the terminal. You should run the display nqa
results command to check the test results.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to check results
on the NQA client.
----End
Example
If the ICMP jitter test succeeds, you can view the following information by running the display
nqa results command.
<HUAWEI> display nqa results test-instance admin icmpjitter
NQA entry(admin, icmpjitter) :testflag is inactive ,testtype is icmpjitter
1 . Test 1 result The test is finished
SendProbe:60 ResponseProbe:60
Completion :success RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:1/144/12/709 RTT Square Sum:61007
NumOfRTT:60 Drop operation number:0
Operation sequence errors number:0 RTT Stats errors number:0
System busy operation number:0 Operation timeout number:0
Min Positive SD:1 Min Positive DS:1
Max Positive SD:138 Max Positive DS:3
Positive SD Number:7 Positive DS Number:19
Positive SD Sum:152 Positive DS Sum:21
Positive SD Square Sum :19116 Positive DS Square Sum :27
Min Negative SD:1 Min Negative DS:1
Max Negative SD:21 Max Negative DS:4
Negative SD Number:14 Negative DS Number:19
Negative SD Sum:152 Negative DS Sum:22
Negative SD Square Sum :2796 Negative DS Square Sum :34
Min Delay SD:1 Min Delay DS:0
Max Delay SD:72 Max Delay DS:71
Delay SD Square Sum:15111 Delay DS Square Sum:14728
Packet Loss SD:0 Packet Loss DS:0
Packet Loss Unknown:0 Average of Jitter:5
Average of Jitter SD:14 Average of Jitter DS:1
jitter out value:4.7604818 jitter in value:0.5399519
NumberOfOWD:60 Packet Loss Ratio: 0%
OWD SD Sum:339 OWD DS Sum:310
ICPIF value: 0 MOS-CQ value: 0
TimeStamp unit: ms Packet Rewrite Number: 0
Packet Rewrite Ratio: 0% Packet Disorder Number: 0
Packet Disorder Ratio: 0% Fragment-disorder Number: 0
Fragment-disorder Ratio: 0%
Applicable Environment
Jitter time refers to the interval for receiving two consecutive packets minus the interval for
sending the two packets.
The process of an ICMP jitter test is as follows:
l The source sends packets to the destination at a set interval.
l After receiving a packet, the destination adds a timestamp to the packet and sends it back
to the source.
l After receiving the returned packets, the source obtains the jitter time by subtracting the
interval for sending the packets from the interval for receiving the packets.
The maximum, minimum, and average jitter time and the maximum unidirectional delay of the
packets from the source to the destination and from the destination to the source are calculated
according to the information received on the source. Based on these data, the network status is
clearly presented.
In the jitter test, you can set the number of packets to be sent consecutively in each test instance.
Through this setting, the actual traffic of a kind of packet during a time period can be simulated.
If the server is a non-Huawei device, you can configure an ICMP jitter test instance based on
the mechanism in which the LPU sends packets to test the jitter of the network. This provides a
more accurate test result.
Pre-configuration Tasks
Before configuring the ICMP jitter test, complete the following task:
Configuring a reachable route between the NQA client and the server
Data Preparation
To configure the ICMP jitter test, you need the following data.
No. Data
1 Administrator of the NQA test instance and name of the test instance
3 Destination IP address
4 (Optional) Name of a VPN instance, source IP address for sending test packets,
number of the source interface that sends test packets, number of test probes sent
each time, number of test packets sent each time, interval for sending test packets,
timeout period, percentage of the failed NQA tests, TTL value, and ToS value of
the test packet
Context
Do as follows on the NQA server:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type icmpjitter
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
Prerequisites
The configurations of the ICMP Jitter Test Based on the Mechanism that the LPU Sends Packets
function are complete.
NOTE
NQA test results cannot be displayed automatically on the terminal. You should run the display nqa
results command to check the test results.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to check test
results on the NQA client.
----End
Example
If the jitter test succeeds, you can view the following information by running the display nqa
results command.
<HUAWEI> display nqa results test-instance admin icmpjitter
NQA entry(admin, icmpjitter) :testflag is inactive ,testtype is icmpjitter
1 . Test 1 result The test is finished
SendProbe:60 ResponseProbe:60
Completion :success RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:0/1/1/14 RTT Square Sum:14
NumOfRTT:60 Drop operation number:0
Operation sequence errors number:0 RTT Stats errors number:0
System busy operation number:0 Operation timeout number:0
Min Positive SD:0 Min Positive DS:1
Max Positive SD:0 Max Positive DS:1
Positive SD Number:0 Positive DS Number:1
Positive SD Sum:0 Positive DS Sum:1
Positive SD Square Sum :0 Positive DS Square Sum :1
Min Negative SD:1 Min Negative DS:0
Max Negative SD:1 Max Negative DS:0
Negative SD Number:2 Negative DS Number:0
Negative SD Sum:2 Negative DS Sum:0
Negative SD Square Sum :2 Negative DS Square Sum :0
Min Delay SD:0 Min Delay DS:0
Max Delay SD:0 Max Delay DS:0
Delay SD Square Sum:0 Delay DS Square Sum:0
Applicable Environment
A network consists of multiple devices. The intercommunication between these devices may
traverse multiple networks. To better monitor the entire network, a path jitter test can be
performed to check the communication of each part.
Pre-configuration Tasks
Before configuring the path jitter test, configure a reachable route between the NQA client and
the ICMP server.
Data Preparation
To configure the path jitter test, you need the following data.
No. Data
1 Administrator of the NQA test instance and name of the test instance
2 Destination IP address
3 (Optional) Name of a VPN instance, source IP address, number of test probes sent
each time, number of test packets sent each time, interval for sending test packets,
ratio of the failed NQA tests, and version number of jitter packets
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type pathjitter
Step 4 Run:
destination-address ipv4 ip-address
Step 5 (Optional) Run the following commands to configure other parameters for the path jitter test:
l Run:
icmp-jitter-mode { icmp-echo | icmp-timestamp }
The probe-count command is used to configure the number of times for the jitter test and the jitter-
packetnum command is used to configure the number of test packets sent during each test. In actual
configuration, the product of the number of times for the jitter test and the number of test packets must
be less than 3000.
l Run:
interval seconds interval
The shorter the interval is, the sooner the test is complete. However, delays arise when the
processor sends and receives test packets. Therefore, if the interval for sending test packets
is set to a small value, a relatively greater error may occur in the statistics of the jitter test.
l Run:
fail-percent percent
----End
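As a sketch of the preceding steps, a path jitter test instance might be configured as follows. The instance name, destination address 10.2.2.9, and the chosen jitter mode are hypothetical values:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin pathjitter
[HUAWEI-nqa-admin-pathjitter] test-type pathjitter
[HUAWEI-nqa-admin-pathjitter] destination-address ipv4 10.2.2.9
[HUAWEI-nqa-admin-pathjitter] icmp-jitter-mode icmp-timestamp
[HUAWEI-nqa-admin-pathjitter] interval seconds 1
[HUAWEI-nqa-admin-pathjitter] start now
```

A longer interval, as noted above, trades test speed for lower statistical error.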
Prerequisites
The configurations of the Path Jitter Test function are complete.
NOTE
NQA test results cannot be displayed automatically on the terminal. You should run the display nqa
results command to check the test results.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to check test
results on the NQA client.
----End
Example
If the path jitter test succeeds, you can view the following information by running the display
nqa results command.
<HUAWEI> display nqa results test-instance admin pathjitter
Applicable Environment
In the network, the intercommunication between hosts may have to traverse multiple networks.
Different networks have various MTU values. The path MTU test can detect the MTU values
of paths in the network. Based on these values, you can limit the packet length on the transmitting
end and therefore effectively avoid discarding oversize packets.
Pre-configuration Tasks
Before configuring the path MTU test, configure a reachable route between the NQA client and
the destination end.
Data Preparation
To configure the path MTU test, you need the following data.
No. Data
1 Administrator of the NQA test instance and name of the test instance
2 Destination IP address
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type pathmtu
The test type is set to path MTU.
Step 4 Run:
destination-address ipv4 ip-address
The destination IP address is configured.
Step 5 (Optional) Run the following commands to configure other parameters for the path MTU test.
l Run:
discovery-pmtu-max pmtu-max
The maximum packet length for path MTU discovery is set.
l Run:
vpn-instance vpn-instance-name
Step 6 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
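Combining the preceding steps, a minimal path MTU test instance might look as follows. The instance name and parameter values are illustrative (the destination address and maximum discovery length here happen to match the sample output shown later, but are assumptions, not mandated values):

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin pathmtu
[HUAWEI-nqa-admin-pathmtu] test-type pathmtu
[HUAWEI-nqa-admin-pathmtu] destination-address ipv4 100.1.1.201
[HUAWEI-nqa-admin-pathmtu] discovery-pmtu-max 1600
[HUAWEI-nqa-admin-pathmtu] start now
```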
Prerequisites
The configurations of the Path MTU Test function are complete.
NOTE
NQA test results cannot be displayed automatically on the terminal. You should run the display nqa
results command to check the test results.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to check test
results on the NQA client.
----End
Example
If the path MTU test succeeds, you can view the following information by running the display
nqa results command.
<HUAWEI> display nqa results test-instance admin pathmtu
NQA entry(admin, pathmtu) :testflag is inactive ,testtype is pathmtu
1 . Test 0 result The test is finished
Completions: success Busies: 0
Destination-address: 100.1.1.201 Discovery field min: 48 byte
Discovery field max: 1600 byte Drops: 0
MTU: 1148 Response probe: 13
Send probe: 16 Optimum first step: 0 byte
Second step: 100 byte Timeouts: 3
Applicable Environment
By setting the destination group address of the multicast Ping (MPing) to be a reserved group
address, you can check the members of the reserved multicast group on the network segment
where the outgoing interface resides.
The reserved group identifies a group of network devices (group members) that match certain
conditions. When the members of the reserved group receive the ICMP Echo-Request packets
with the destination addresses being the IP address of the reserved group, they return ICMP
Echo-Reply packets. Commonly, the following addresses are for the reserved group:
l 224.0.0.1: indicates all systems in the sub-network.
l 224.0.0.2: indicates all routers in the sub-network.
l 224.0.0.5: indicates the Open Shortest Path First (OSPF) interior gateway protocol (IGP)
routers.
l 224.0.0.13: indicates Protocol Independent Multicast (PIM) routers.
NOTE
Whether a host or a router can return an ICMP Echo-Reply packet is determined by its operating
system and version.
Pre-configuration Tasks
None.
Data Preparation
To configure the NQA MPing test for a reserved group address, you need the following data.
No. Data
4 (Optional) Size of the ICMP Echo-Request packet, padding character in the ICMP
Echo-Request packet, number of the sent ICMP Echo-Request packets, interval for
sending ICMP Echo-Request packets and timeout period for waiting for the ICMP
Echo-Reply packet
Context
Do as follows on the router that functions on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type mping
Step 4 Run:
quit
----End
Context
Do as follows on the router that functions as the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
The NQA test instance view is displayed.
Step 3 Run:
destination-address ipv4 ip-address
The IP address of a reserved group is set to be the destination group address of the MPing test,
that is, the destination address of the ICMP Echo-Request packet.
Step 4 Run:
source-interface interface-type interface-number
The outbound interface that sends the ICMP Echo-Request packet is configured.
Step 5 Run:
quit
----End
Context
Do as follows on the router that functions as the NQA client:
NOTE
Procedure
Step 1 Run:
system-view
The interval for sending ICMP Echo-Request packets is set. The interval must be longer than
the timeout period set through the timeout command.
Step 8 Run:
timeout time
The timeout period for waiting for the ICMP Echo-Reply packet is set.
Step 9 Run:
quit
----End
Context
Do as follows on the router that functions as the NQA client:
NOTE
The start mode of the MPing test is the same as that of other NQA tests. Here, take one mode as an example.
Procedure
Step 1 Run:
system-view
----End
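Gathering the tasks above into one place, a reserved-group MPing test instance might be configured as follows. The instance name, outbound interface GigabitEthernet 1/0/0, and the use of 224.0.0.1 (all systems in the subnet) are illustrative assumptions:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin mping
[HUAWEI-nqa-admin-mping] test-type mping
[HUAWEI-nqa-admin-mping] destination-address ipv4 224.0.0.1
[HUAWEI-nqa-admin-mping] source-interface gigabitethernet 1/0/0
[HUAWEI-nqa-admin-mping] start now
```

Group members on the segment of the outbound interface that respond to the reserved address then appear as receivers in the display nqa results output.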
Prerequisites
The configurations of the NQA reserved group MPing Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
l Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of
the test instance configured on the NQA client.
l Run the display nqa results command to view the test results on the NQA client.
----End
Example
Run the display nqa-agent command. If the test is successful, the following is displayed. You
can also view details about the tests.
<HUAWEI> display nqa-agent
NQA Tests Max:2000 NQA Tests Number: 1
NQA Flow Max:1000 NQA Flow Remained:1000
Run the display nqa results command. If the test is successful, the following is displayed.
<HUAWEI> display nqa results
NQA entry(admin, mping) :testflag is inactive ,testtype is mping
1 . Test 1 result The test is finished
Completion:success Timeouts number: 0
Drops number: 0 TargetAddress: 224.0.0.1
ProbeResponses number: 3 SentProbes number: 3
Busies: 0
1 . Receiver 1
CompletionTime Min/Max/Sum: 2/3/8
Sum2CompletionTime: 22
LastGoodProbe time: 2009-1-12 13:11:10.4
RecevierAddress: 6.0.206.6
Applicable Environment
You can also set the destination group address of MPing to a common group address. The
following functions can then be implemented:
l MPing simulates the multicast traffic and triggers a series of protocol processes. By viewing
the multicast routing information on a router, you can check whether the protocol runs
normally and whether the multicast distribution tree is correctly established.
l By calculating the number of ICMP Echo-Reply packets sent by the destination host, the
system checks multicast members in the network and calculates the TTL value and the
response time from the MPing initiator to multicast members (this function requires that
the host support MPing). MPing can be performed repeatedly at a certain interval to calculate
network delay and route jitter.
Pre-configuration Tasks
If the destination group address is set to a common group address, you must configure the
multicast function in the network.
Data Preparation
To configure the NQA MPing tests, you need the following data.
No. Data
4 (Optional) Size of the ICMP Echo-Request packet, padding character in the ICMP
Echo-Request packet, number of the sent ICMP Echo-Request packets, interval for
sending ICMP Echo-Request packets, and timeout period for waiting for the ICMP
Echo-Reply packet
Context
Do as follows on the router that functions as the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type mping
Step 4 Run:
quit
----End
Context
Do as follows on the router that functions as the NQA client:
Procedure
Step 1 Run:
system-view
A common group IP address is set to be the destination group address of the MPing test, that is,
the destination address of the ICMP Echo-Request packet.
When a common group IP address is set to be the destination group address of the MPing test,
source-interface does not need to be specified.
Step 4 Run:
quit
----End
Context
Do as follows on the router that functions as the NQA client:
NOTE
Procedure
Step 1 Run:
system-view
Step 4 Run:
ttl number
Step 5 Run:
tos value
Step 6 Run:
datafill fillstring
Step 7 Run:
probe-count number
Step 8 Run:
interval seconds interval
The interval for sending ICMP Echo-Request packets is set. The interval must be longer than
the timeout period set through the timeout command.
Step 9 Run:
timeout time
The timeout period for waiting for the ICMP Echo-Reply packet is set.
Step 10 Run:
quit
----End
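As a sketch, the optional parameters in the steps above could be combined in one test instance as follows. All values and the view prompts are examples; any omitted parameter keeps its default, and the interval must remain longer than the timeout.

[HUAWEI-nqa-admin-mping2] ttl 64
[HUAWEI-nqa-admin-mping2] tos 5
[HUAWEI-nqa-admin-mping2] datafill abcd
[HUAWEI-nqa-admin-mping2] probe-count 5
[HUAWEI-nqa-admin-mping2] interval seconds 10
[HUAWEI-nqa-admin-mping2] timeout 5
[HUAWEI-nqa-admin-mping2] quit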
Context
Do as follows on the router that functions as the NQA client:
NOTE
The start mode of the MPing test is the same as that of other NQA tests. Here, take one mode as an example.
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
----End
Prerequisites
The configurations of the NQA common group MPing Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
l Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of
the test instance configured on the NQA client.
l Run the display nqa results command to view the test results on the NQA client.
----End
Example
Run the display nqa-agent command. If the test is successful, the following is displayed. You
can also view details about the tests.
<HUAWEI> display nqa-agent
NQA Tests Max:2000 NQA Tests Number: 1
NQA Flow Max:1000 NQA Flow Remained: 999
Run the display nqa results command. If the test is successful, the following is displayed.
<HUAWEI> display nqa results
NQA entry(admin, mping) :testflag is inactive ,testtype is mping
1 . Test 1 result The test is finished
Completion:success Timeouts number: 0
Applicable Environment
To check the Reverse Path Forwarding (RPF) path from the multicast source to the querier, you
can perform the NQA MTrace test.
By performing the MTrace test, you can obtain the possible transmission path of multicast
packets. During the test, actual multicast data flows are not required.
The MTrace test can be used in multicast troubleshooting and routine maintenance to locate the
faulty nodes and reduce configuration errors.
Pre-configuration Tasks
Before configuring the MTrace test, you must enable the multicast function in the network.
Data Preparation
To configure the MTrace test, you need the following data.
No. Data
Context
Do as follows on the router that functions as the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type mtrace
Step 4 Run:
quit
----End
Context
Do as follows on the router on which the NQA MTrace test instance has been created:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
mtrace-source-address ipv4 ip-address
Step 5 Run:
quit
----End
Context
Do as follows on the router on which the MTrace test has been configured:
NOTE
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
ttl number
The TTL value of the IGMP Tracert-Request packet, that is, the maximum number of hops
traced in max-hop mode, is set.
Step 4 Run:
tracert-livetime first-ttl first-ttl max-ttl max-ttl
The maximum number of hops traced in hop-by-hop mode is set. first-ttl must be set to 1.
Step 5 Run:
mtrace-response-address ipv4 ip-address [ ttl value ]
The destination IP address and TTL of the IGMP Tracert-Response packet are set.
Step 6 Run:
timeout time
The timeout period for waiting for the IGMP Tracert-Response packet is set.
Step 7 Run:
probe-count number
The maximum number of times that the querier initiates MTrace operations after timeouts is
set. The querier re-initiates an MTrace operation if it receives no IGMP Tracert-Response
packet within the timeout period.
Step 8 Run:
quit
----End
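The optional MTrace parameters above can be sketched together as follows. The values and view prompts are examples only; note that first-ttl must be 1.

[HUAWEI-nqa-admin-mtrace1] ttl 30
[HUAWEI-nqa-admin-mtrace1] tracert-livetime first-ttl 1 max-ttl 10
[HUAWEI-nqa-admin-mtrace1] mtrace-response-address ipv4 10.1.1.1 ttl 64
[HUAWEI-nqa-admin-mtrace1] timeout 5
[HUAWEI-nqa-admin-mtrace1] probe-count 3
[HUAWEI-nqa-admin-mtrace1] quit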
Context
Do as follows on the router that functions as the NQA client:
NOTE
The start mode of the MTrace test is the same as that of other NQA tests. Here, take one mode as an example.
Procedure
Step 1 Run:
system-view
----End
Prerequisites
The configurations of the NQA MTrace Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
l Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of the
test instance configured on the NQA client.
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
l Run the display nqa history command to view the history of the NQA test.
l Run the display mtrace statistics command to view the statistics of the MTrace packets.
----End
Example
Run the display nqa-agent command. If information about the NQA test is displayed, it means
that the test is successful. For example:
<HUAWEI> display nqa-agent admin aa verbose
nqa test-instance aa aa
test-type mtrace
mtrace-source-address ipv4 11.1.1.2
nqa status : normal
Applicable Environment
To check the multicast path from the multicast source to the querier, you can perform the NQA
MTrace test.
During this test, actual multicast data flows are required in the network and the querier must
reside on the multicast distribution tree. This test can be used to trace the actual forwarding path
of packets.
l It can be used in multicast troubleshooting and routine maintenance to locate the faulty
nodes and reduce configuration errors.
l It collects traffic through cyclic path tracing and calculates the multicast traffic rate.
l It outputs the test result containing the information about the faulty nodes, based on which
the NM Station can analyze the fault and generate alarms.
Pre-configuration Tasks
Before configuring the MTrace test, you must enable the multicast function in the network.
Data Preparation
To configure the MTrace test, you need the following data.
No. Data
4 VPN instance
Context
Do as follows on the router that functions as the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type mtrace
Step 4 Run:
quit
----End
Context
Do as follows on the router on which the NQA MTrace test instance has been created:
Procedure
Step 1 Run:
system-view
The multicast group address is specified. Note that the address must be a common group address.
Step 5 (Optional) Run:
vpn-instance vpn-instance-name
The VPN instance is specified.
Step 6 Run:
quit
----End
Context
Do as follows on the router on which the MTrace test has been configured:
NOTE
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
ttl number
The TTL value of the IGMP Tracert-Request packet, that is, the maximum number of hops
traced in max-hop mode, is set.
Step 4 Run:
tracert-livetime first-ttl first-ttl max-ttl max-ttl
The maximum number of hops traced in hop-by-hop mode is set. first-ttl must be set to 1.
Step 5 Run:
mtrace-response-address ipv4 ip-address [ ttl value ]
The destination IP address and TTL of the IGMP Tracert-Response packet are set.
Step 6 Run:
timeout time
The timeout period for waiting for the IGMP Tracert-Response packet is set.
Step 7 Run:
probe-count number
The maximum number of times that the querier initiates MTrace operations after timeouts is
set. The querier re-initiates an MTrace operation if it receives no IGMP Tracert-Response
packet within the timeout period.
Step 8 Run:
quit
----End
Context
Do as follows on the router that functions as the NQA client:
NOTE
The start mode of the MTrace test is the same as that of other NQA tests. Here, take one mode as an example.
Procedure
Step 1 Run:
system-view
----End
Prerequisites
The configurations of the NQA MTrace Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
l Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of the
test instance configured on the NQA client.
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
l Run the display nqa history command to view the history of the NQA test.
l Run the display mtrace statistics command to view the statistics of the MTrace packets.
----End
Example
Run the display nqa-agent command. If information about the NQA test is displayed, it means
that the test is successful. For example:
<HUAWEI> display nqa-agent admin aa verbose
Applicable Environment
To check the RPF path from the multicast source to the destination host, you can perform the
NQA MTrace test.
By performing this test, you can obtain the possible transmission path of multicast packets.
During the test, actual multicast data flows are not required.
The MTrace test can be used in multicast troubleshooting and routine maintenance to locate the
faulty nodes and reduce configuration errors.
Pre-configuration Tasks
Before configuring the MTrace test, you must enable the multicast function in the network.
Data Preparation
To configure the MTrace test, you need the following data.
No. Data
4 (Optional) VPN instance
Context
Do as follows on the router that functions as the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type mtrace
Step 4 Run:
quit
----End
Context
Do as follows on the router on which the MTrace test instance has been created:
NOTE
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
mtrace-source-address ipv4 ip-address
Step 4 Run:
destination-address ipv4 ip-address
Step 6 Run:
quit
----End
Context
Do as follows on the router on which the MTrace test has been configured:
NOTE
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
ttl number
The TTL value of the IGMP Tracert-Request packet, that is, the maximum number of hops
traced in max-hop mode, is set.
Step 4 Run:
tracert-livetime first-ttl first-ttl max-ttl max-ttl
The maximum number of hops traced in hop-by-hop mode is set. first-ttl must be set to 1.
Step 5 Run:
mtrace-response-address ipv4 ip-address [ ttl value ]
The destination IP address and TTL of the IGMP Tracert-Response packet are set.
Step 6 Run:
mtrace-query-type last-hop
The last-hop router is specified to initiate the MTrace query to the multicast source.
When multiple routers are connected to the specified host, the RPF paths queried from
different routers may be different. Therefore, you can uniquely identify an RPF path by
specifying the last-hop router.
When running this command, you must specify the IP address of the last-hop router.
Step 7 Run:
mtrace-last-hop-address ipv4 ip-address
If non-Huawei devices are deployed in the multicast network, this step is mandatory.
Step 8 Run:
source-address ipv4 ip-address
The source IP address of the IGMP Tracert-Request packet is configured. The address must be
the address of the local interface.
If non-Huawei devices are deployed in the multicast network, this step is mandatory.
Step 9 Run:
timeout time
The timeout period for waiting for the IGMP Tracert-Response packet is set.
Step 10 Run:
probe-count number
The maximum number of times that the querier initiates MTrace operations after timeouts is
set. The querier re-initiates an MTrace operation if it receives no IGMP Tracert-Response
packet within the timeout period.
Step 11 Run:
quit
----End
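For example, a sketch of the last-hop query configuration on a network that contains non-Huawei devices might be as follows. All addresses and view prompts are examples only.

[HUAWEI-nqa-admin-mtrace2] mtrace-query-type last-hop
[HUAWEI-nqa-admin-mtrace2] mtrace-last-hop-address ipv4 11.1.5.1
[HUAWEI-nqa-admin-mtrace2] source-address ipv4 11.1.1.2
[HUAWEI-nqa-admin-mtrace2] timeout 5
[HUAWEI-nqa-admin-mtrace2] probe-count 3
[HUAWEI-nqa-admin-mtrace2] quit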
Context
Do as follows on the router that functions as the NQA client:
NOTE
The start mode of the MTrace test is the same as that of other NQA tests. Here, take one mode as an example.
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router connected to the host:
NOTE
The settings in this step take effect only on the last-hop router that receives the unicast IGMP Tracert-
Request packet.
Procedure
Step 1 Run:
system-view
NOTE
l This command takes effect only on the last-hop router, and only when the querier is not the last-hop router.
l This command filters only the IGMP-Tracert-Query packets encapsulated in unicast IP packets.
l This command is not applicable to tracing initiated from the querier.
----End
Prerequisites
The configurations of the NQA MTrace Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
l Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of the
test instance configured on the NQA client.
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
l Run the display nqa history command to view the history of the NQA test.
l Run the display mtrace statistics command to view the statistics of the MTrace packets.
----End
Example
Run the display nqa-agent command. If information about the NQA test is displayed, it means
that the test is successful. For example:
<HUAWEI> display nqa-agent admin aa verbose
nqa test-instance admin aa
test-type mtrace
destination-address ipv4 11.1.6.4
mtrace-source-address ipv4 11.1.0.1
mtrace-group-address ipv4 225.0.0.1
nqa status : normal
Applicable Environment
To check the multicast path from the multicast source to the destination host, you can perform
the NQA MTrace test.
During this test, actual multicast data flows are required in the network and the destination host
must be receiving these data flows. This test can be used to trace the actual forwarding path of
packets.
l It can be used in multicast troubleshooting and routine maintenance to locate the faulty
nodes and reduce configuration errors.
l It collects traffic through cyclic path tracing and calculates the multicast traffic rate.
l It outputs the test result containing the information about the faulty nodes, based on which
the NM Station can analyzes the fault and generates alarms.
Pre-configuration Tasks
Before configuring the MTrace test, you must enable the multicast function in the network.
Data Preparation
To configure the MTrace test, you need the following data.
No. Data
5 (Optional) VPN instance
Context
Do as follows on the router that functions as the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type mtrace
----End
Context
Do as follows on the router on which the MTrace test instance has been created:
NOTE
Procedure
Step 1 Run:
system-view
The multicast group address is configured. The address must be a common group address.
Step 5 Run:
destination-address ipv4 ip-address
The MTrace query type is specified. Correctly specifying the query type based on the network
situation helps to promptly and accurately trace the path. If the query type is not specified, the
system uses multicast-tree by default.
l all-router: applies to the scenario where the current router is linked to the destination host.
l destination: applies to the scenario where unicast routes exist between the current
router and the destination host.
l last-hop: applies to the scenario where the address of the last-hop router is specified and
unicast routes exist between the current router and the last-hop router. When last-hop is used,
you must specify the IP address of the last-hop router.
l multicast-tree: applies to the scenario where the current router is on the multicast path from
the multicast source to the destination host.
CAUTION
If the mtrace command is run on a specified multicast VPN network, mtrace-query-type all-
router cannot be configured.
The source address of the IGMP-Tracert-Query packet, which must be the address of the local
interface, is configured. If multicast-tree is applied, this step is skipped.
Step 10 Run:
quit
----End
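For example, when unicast routes exist between the current router and the destination host, the query type could be set as sketched below. The addresses and view prompts are examples, and the destination form of mtrace-query-type is an assumption based on the options listed above; the group-address and destination-address command forms follow those shown elsewhere in this document.

[HUAWEI-nqa-admin-mtrace3] mtrace-group-address ipv4 225.0.0.1
[HUAWEI-nqa-admin-mtrace3] destination-address ipv4 11.1.6.4
[HUAWEI-nqa-admin-mtrace3] mtrace-query-type destination
[HUAWEI-nqa-admin-mtrace3] source-address ipv4 11.1.0.2
[HUAWEI-nqa-admin-mtrace3] quit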
Context
Do as follows on the router with the MTrace test being configured:
NOTE
Procedure
Step 1 Run:
system-view
The TTL value of the IGMP Tracert-Request packet, that is, the maximum number of hops
traced in max-hop mode, is set.
Step 4 Run:
tracert-livetime first-ttl first-ttl max-ttl max-ttl
The maximum number of hops traced in hop-by-hop mode is set. first-ttl must be set to 1.
Step 5 Run:
mtrace-last-hop-address ipv4 ip-address
The source IP address of the IGMP Tracert-Request packet is configured. The address must be
the address of the local interface. When the query type is set to multicast-tree, this step is not
required.
Step 7 Run:
mtrace-response-address ipv4 ip-address [ ttl value ]
The destination IP address and TTL of the IGMP Tracert-Response packet are set.
Step 8 Run:
timeout time
The timeout period for waiting for the IGMP Tracert-Response packet is set.
Step 9 Run:
probe-count number
The maximum number of times that the querier initiates MTrace operations after timeouts is
set. The querier re-initiates an MTrace operation if it receives no IGMP Tracert-Response
packet within the timeout period.
Step 10 Run:
quit
----End
Context
Do as follows on the router that functions as the NQA client:
NOTE
The start mode of the MTrace test is the same as that of other NQA tests. Here, take one mode as an example.
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
start now
Step 4 Run:
quit
----End
Context
Do as follows on the router connected to the host:
NOTE
The settings in this step take effect only on the last-hop router that receives the unicast IGMP Tracert-
Request packet.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mtrace query-policy [ basic-acl-number ]
basic-acl-number defines the address range of authorized queriers. Based on the specified ACL,
the last-hop router rejects the IGMP Tracert-Request packets sent by unauthorized queriers.
NOTE
l This command takes effect only on the last-hop router, and only when the querier is not the last-hop router.
l This command filters only the IGMP-Tracert-Query packets encapsulated in unicast IP packets.
l This command is not applicable to tracing initiated from the querier.
----End
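Assuming a basic ACL is used to define the authorized querier addresses, the configuration on the last-hop router might be sketched as follows. The ACL number, address range, and view prompts are examples, and the ACL commands are the standard basic-ACL form rather than syntax shown in this section.

<HUAWEI> system-view
[HUAWEI] acl number 2001
[HUAWEI-acl-basic-2001] rule permit source 11.1.0.0 0.0.255.255
[HUAWEI-acl-basic-2001] quit
[HUAWEI] mtrace query-policy 2001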
Prerequisites
The configurations of the NQA MTrace Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
l Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of the
test instance configured on the NQA client.
l Run the display nqa results [ test-instance admin-name test-name ] command to view the
test results on the NQA client.
l Run the display nqa history command to view the history of the NQA test.
l Run the display mtrace statistics command to view the statistics of the MTrace packets.
----End
Example
Run the display nqa-agent command. If information about the NQA test is displayed, it means
that the test is successful. For example:
<HUAWEI> display nqa-agent admin aa verbose
nqa test-instance admin aa
test-type mtrace
destination-address ipv4 11.1.6.4
mtrace-last-hop-address ipv4 11.1.5.1
mtrace-source-address ipv4 11.1.0.1
mtrace-group-address ipv4 225.0.0.1
nqa status : normal
Applicable Environment
To check the connectivity of the one-hop pseudo wire (PW) using LDP as the signaling protocol,
you can perform the PWE3 Ping test on the one-hop PW.
Pre-configuration Tasks
Before configuring the PWE3 Ping test on a one-hop PW, you must correctly configure the
dynamic one-hop PW.
Data Preparation
To configure the PWE3 Ping test on a one-hop PW, you need the following data.
No. Data
1 ID of the PW
2 Type of the PW
5 (Optional) Response mode of the Echo-Request packets, LSP EXP, maximum hops,
number of probes, TTL value, and timeout period of the packets
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type pwe3ping
Step 6 Run:
lsp-version { rfc4379 | draft6 }
Step 8 Run:
local-pw-id local-pw-id
Step 9 (Optional) Run the following commands to configure other parameters for the PWE3 Ping test:
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
l To configure the LSP EXP value, run the lsp-exp exp command.
Step 10 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at the specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after the specified delay.
----End
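Putting the preceding steps together, a PWE3 Ping test on a one-hop PW, started immediately, might be sketched as follows. The instance name, PW ID, EXP value, and view prompts are examples, and the sketch assumes the PW type is configured in the steps elided above.

<HUAWEI> system-view
[HUAWEI] nqa test-instance admin pwping1
[HUAWEI-nqa-admin-pwping1] test-type pwe3ping
[HUAWEI-nqa-admin-pwping1] lsp-version rfc4379
[HUAWEI-nqa-admin-pwping1] local-pw-id 100
[HUAWEI-nqa-admin-pwping1] lsp-exp 5
[HUAWEI-nqa-admin-pwping1] start now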
Prerequisites
The configurations of the PWE3 Ping Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
Step 1 Run the display nqa results command to view the test results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
Applicable Environment
To check the connectivity of the multi-hop PW using LDP as the signaling protocol, you can
perform the PWE3 Ping test on the multi-hop PW.
Pre-configuration Tasks
Before configuring the PWE3 Ping test on a multi-hop PW, you must correctly configure the
dynamic multi-hop PW or the static multi-hop PW.
Data Preparation
To configure the PWE3 Ping test on a multi-hop PW, you need the following data.
No. Data
3 Type of the PW
4 (Optional) Response mode of the Echo packets, LSP EXP, maximum hops, number
of probes, TTL value, and timeout period of the packets
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type pwe3ping
Step 6 Run:
lsp-version { rfc4379 | draft6 }
NOTE
Step 8 Run:
local-pw-id local-pw-id
----End
Prerequisites
The configurations of the PWE3 Ping Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
Step 1 Run the display nqa results command to view the test results on the NQA client.
----End
Example
Run the display nqa results command. If the test is successful, the following is displayed.
l Statistics about errors
– Number of unroutable connections
– Number of wrong sequence numbers
– Timeout times of the test packets
l History statistics of each test packet
– Timestamp added when each test packet is sent
– Timestamp added when each test packet is received
– Packet status displayed on the NQA client
l Statistics of results of each test
– Number of successful tests
– Sum of the response time of all tests
– RTT square sum
– Minimum RTT and maximum RTT of the packet
– Destination IP address and the type of the destination IP address
– Number of the Echo packets and the sent packets
– Time when the last packet is received
<HUAWEI> display nqa results
NQA entry(admin, pwe3ping) :testflag is inactive ,testtype is pwe3ping
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:8.1.1.2
Applicable Environment
To trace a one-hop PW using LDP as the signaling protocol, you can perform the PWE3 Trace
test on the one-hop PW.
Pre-configuration Tasks
Before configuring the PWE3 Trace test on a one-hop PW, you must correctly configure the
dynamic one-hop PW.
Data Preparation
To configure the PWE3 Trace test on a one-hop PW, you need the following data.
No. Data
1 ID of the PW
2 Type of the PW
5 (Optional) Response mode of the Echo packets, LSP EXP, maximum hops, number
of probes, TTL value, and timeout period of the packets
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type pwe3trace
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at a specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at the specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after the specified delay.
----End
Prerequisites
The configurations of the PWE3 Trace Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
Step 1 Run the display nqa results command to view the test results on the NQA client.
----End
Example
Run the display nqa results command. If the PWE3 Trace test on the one-hop PW is successful,
the following information is displayed.
l Statistics about errors
– Number of unroutable connections
– Number of wrong sequence numbers
– Timeout times of the test packets
l History statistics of each test packet
– Timestamp added when each test packet is sent
– Timestamp added when each test packet is received
– Packet status displayed on the NQA client
Applicable Environment
To trace the multi-hop PW using LDP as the signaling protocol, you can perform the PWE3
Trace test on the multi-hop PW.
Pre-configuration Tasks
Before configuring the PWE3 Trace test on a multi-hop PW, you must correctly configure the
dynamic multi-hop PW or the static multi-hop PW.
Data Preparation
To configure the PWE3 Trace test on a multi-hop PW, you need the following data.
No. Data
3 Type of the PW
4 (Optional) Response mode of the Echo packets, LSP EXP, maximum hops, number
of probes, TTL value, and timeout period of the packets
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type pwe3trace
Step 6 Run:
lsp-version { rfc4379 | draft6 }
l When label-type is set to control-word, run the remote-pw-id remote-pw-id command to configure
the ID of the remote end of the PW.
l When label-type is set to label-alert, run the destination-address ipv4 ip-address { lsp-masklen
mask-length | lsp-loopback loopback-address }* command to configure the destination IP address of
the PWE3 Trace test.
l When label-type is set to normal, run the destination-address ipv4 ip-address { lsp-masklen mask-
length | lsp-loopback loopback-address }* command to configure the destination IP address of the
PWE3 Trace test.
Step 8 Run:
local-pw-id local-pw-id
----End
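For a multi-hop PW, the configured label type determines whether the remote PW ID or a destination address is required. A sketch for the control-word case might look as follows. The instance name, PW IDs, and view prompts are examples, and the label-type command syntax is an assumption inferred from the note above; only remote-pw-id and local-pw-id appear verbatim in this section.

<HUAWEI> system-view
[HUAWEI] nqa test-instance admin pwtrace1
[HUAWEI-nqa-admin-pwtrace1] test-type pwe3trace
[HUAWEI-nqa-admin-pwtrace1] lsp-version rfc4379
[HUAWEI-nqa-admin-pwtrace1] label-type control-word
[HUAWEI-nqa-admin-pwtrace1] remote-pw-id 200
[HUAWEI-nqa-admin-pwtrace1] local-pw-id 100
[HUAWEI-nqa-admin-pwtrace1] start now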
Prerequisites
The configurations of the PWE3 Trace Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the PWE3 Trace test on the multi-hop PW is
successful, the following information is displayed.
l Statistics about errors
– Number of unroutable connections
– Number of wrong sequence numbers
– Timeout times of the test packets
l History statistics of each test packet
– Timestamp added when each test packet is sent
– Timestamp added when each test packet is received
– Packet status displayed on the NQA client
l Statistics of results of each test
– Number of successful tests
– Sum of the response time of all tests
– RTT square sum
– Minimum RTT and maximum RTT of the packet
– Destination IP address and the type of the destination IP address
– Number of the Echo packets and the sent packets
– Time when the last packet is received
<HUAWEI> display nqa results
NQA entry(admin, pwe3trace) :testflag is inactive ,testtype is pwe3trace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Drop operation number:0
Last good path Time:2009-2-28 10:58:35.5
1 . Hop 1
Send operation times: 3 Receive response times: 3
Min/Max/Average Completion Time: 4/10/7
Sum/Square-Sum Completion Time: 23/197
RTD OverThresholds number: 0
Applicable Environment
To trace the Virtual Circuit (VC) of the inter-AS multi-hop Kompella Virtual Leased Line (VLL),
you can perform the VC Trace test on the inter-AS multi-hop Kompella VLL.
Pre-configuration Tasks
Before configuring the VC Trace test on an inter-AS multi-hop Kompella VLL, you must
correctly configure the Kompella VLL.
Data Preparation
To configure the VC Trace test on an inter-AS multi-hop Kompella VLL, you need the following
data.
No. Data
2 VPN target
4 AS number of the PE
No. Data
7 (Optional) Response mode of the Echo packets, LSP EXP, maximum hops, number
of probes, TTL values, and timeout period of the packets
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type pwe3trace
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure maximum hops of the VC Trace test, run the tracert-hopfailtimes command.
l To configure the initial TTL value and maximum TTL value of the packet, run the tracert-
livetime first-ttl first-ttl max-ttl max-ttl command.
Step 9 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
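As a sketch of the three start forms described above, the following command starts a hypothetical test instance (admin, vctest) after a 30-second delay and ends it after a lifetime of one hour:

```
[HUAWEI-nqa-admin-vctest] start delay seconds 30 end lifetime 01:00:00
```

The end clause is optional in all three forms; without it, the test instance runs until it is stopped manually.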
Prerequisites
The configurations of the VC Trace Test function are complete.
Context
NOTE
NQA test results cannot be displayed automatically on a terminal. You must run the display nqa results
command to view test results. By default, the command output contains records of only the last
five tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command to view the test
results on the NQA client.
----End
Example
Run the display nqa results command. If the VC Trace test on the inter-AS multi-hop Kompella
VLL is successful, the following information is displayed.
Applicable Environment
NQA supports not only the configuration of the parameters for various types of tests, but also
the configuration of universal options of a test group.
Pre-configuration Tasks
Before configuring universal NQA parameters, create NQA tests correctly.
Context
Perform the following steps on the NQA client.
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
NOTE
This parameter cannot be configured for SNMP, TCP, FTP, Path MTU, DHCP, HTTP, LSP Trace,
DNS, PWE3 Trace, and MTrace test instances. If the icmp-jitter-type of an ICMP Jitter or Path
Jitter test instance is icmp-echo, however, this parameter can be configured for ICMP Jitter and
Path Jitter test instances.
l Run:
datasize size
The size of the test packet is set for the NQA test instance.
NOTE
This parameter cannot be configured for SNMP, TCP, FTP, ICMP Jitter, LSP Trace, Path Jitter, Path
MTU, DHCP, HTTP, DNS, PWE3 Trace, and MTrace test instances.
l Run:
description string
The destination URL address is set for the NQA test instance.
NOTE
The destination URL address can be configured for DNS and HTTP test instances.
l Run:
destination-port port-number
The destination port number is set for the NQA test instance.
NOTE
The destination port number can be configured only for UDP, Jitter, TCP, Trace, FTP, and HTTP test
instances.
l Run:
dns-server ipv4 ip-address
The DNS server address is configured for the NQA test instance.
NOTE
The DNS server address can be configured only for DNS and HTTP test instances.
l Run:
fail-percent percent
This parameter cannot be configured for Trace, FTP, DNS, LSP Trace, Path MTU, PWE3 Trace,
MPing, and MTrace test instances.
l Run:
frequency interval
The file name and file path are configured for the FTP test instance.
NOTE
The file name and file path can be configured only for the FTP test instance.
l Run:
ftp-filesize size
The size of the file is set for the FTP test instance.
NOTE
The size of the file can be configured only for the FTP test instance.
l Run:
ftp-operation { get | put }
The relative file path and version are configured for the HTTP test instance.
NOTE
The relative file path and version can be configured only for the HTTP test instance.
l Run:
interval { milliseconds interval | seconds interval }
The interval for sending packets is set for the NQA test instance.
NOTE
The interval for sending packets can be configured only for the ICMP, UDP, SNMP, Jitter, ICMP Jitter,
LSP Jitter, TCP, LSP Ping, PWE3 Ping, Path Jitter and MPing test instances.
l Run:
jitter-packetnum number
The number of test packets is set for the NQA test instance.
NOTE
The number of test packets can be configured for all jitter-type test instances except Path
Jitter.
l Run:
local-pw-id local-pw-id
The LSP EXP value is set for the NQA test instance.
NOTE
This parameter can be configured only for LSP Ping, LSP Trace, LSP Jitter, PWE3 Ping, and PWE3
Trace test instances.
l Run:
lsp-replymode { no-reply | udp | udp-via-vpls | udp-router-alert | level-control-
channel }
The reply mode of LSPs is configured for the NQA test instance.
NOTE
This parameter can be configured only for LSP Ping, LSP Trace, LSP Jitter, PWE3 Ping, and PWE3
Trace test instances.
l Run:
lsp-tetunnel tunnel interface-number
The multicast source address is configured for the NQA MTrace test instance.
NOTE
This parameter can be configured only for MTrace test instances.
l Run:
mtrace-last-hop-address ipv4 ip-address
The last hop address is configured for the NQA test instance.
NOTE
This parameter can be configured only for MTrace test instances.
l Run:
mtrace-group-address ipv4 ip-address
The multicast group address is configured for the MTrace test instance.
NOTE
This parameter can be configured only for MTrace test instances.
l Run:
mtrace-response-address ipv4 ip-address [ ttl value ]
The response address, namely, the destination address of the IGMP Tracert Response
message, is configured for the MTrace test instance.
NOTE
This parameter can be configured only for MTrace test instances.
l Run:
mtrace-query-type { all-router | last-hop | destination | multicast-tree }
The query type, namely, the mode in which IGMP Tracert Response messages are sent, is
configured for the MTrace test instance.
NOTE
This parameter can be configured only for MTrace test instances.
l Run:
probe-count number
The number of probes in each test is set for the NQA test instance.
NOTE
This parameter cannot be configured for Path Jitter, Path MTU, MPing and MTrace test instances.
l Run:
records history number
The maximum number of history records is set for the NQA test instance.
NOTE
This parameter cannot be configured for Path MTU, MPing, and MTrace test instances.
l Run:
records result number
The maximum number of result records is set for the NQA test instance.
l Run:
remote-pw-id remote-pw-id
The NQA test is configured to send packets without searching for the routing table.
NOTE
This parameter cannot be configured for DNS, DHCP, ICMP Jitter, Path Jitter, LSP Ping, LSP Trace,
LSP Jitter, Path MTU, PWE3 Ping, PWE3 Trace, MPing, and MTrace test instances.
l Run:
set-df
This parameter cannot be configured for Path MTU, Path Jitter, MPing, and MTrace test instances.
l Run:
source-address ipv4 ip-address
This parameter cannot be configured for DNS, DHCP, MPing, PWE3 Ping, and PWE3 Trace test
instances.
l Run:
source-interface interface-type interface-number
The source interface is specified for the NQA test instance.
NOTE
This parameter can be configured for UDP, SNMP, TCP, ICMP Jitter, Path Jitter, Path MTU, LSP
Jitter, FTP, and HTTP test instances.
l Run:
test-failtimes times
The trap threshold for continuous probe failures is set for the NQA test instance.
NOTE
This parameter cannot be configured for Path Jitter, Path MTU, MPing, and MTrace test instances.
l Run:
timeout time
This parameter cannot be configured for DNS, Path MTU, DHCP, and PWE3 Ping test instances.
l Run:
tos value
NOTE
This parameter cannot be configured for DNS, LSP Ping, LSP Trace, LSP Jitter, DHCP, Path MTU,
PWE3 Ping, PWE3 Trace, and MTrace test instances.
l Run:
tracert-hopfailtimes times
The hop fail times are set for the Trace test instance.
NOTE
This parameter can be configured only for Trace, LSP Trace, Path Jitter and PWE3 Trace test instances.
l Run:
tracert-livetime first-ttl first-ttl max-ttl max-ttl
This parameter can be configured only for Trace, LSP Trace, MTrace, Path Jitter and PWE3 Trace test
instances.
l Run:
vpn-instance vpn-instance-name
The VPN instance name is configured for the NQA test instance.
NOTE
This parameter cannot be configured for DHCP, DNS, LSP Ping, LSP Trace, LSP Jitter, PWE3 Ping,
MPing, and MTrace test instances. When the signaling protocol of the VC is BGP, this parameter can
be configured for PWE3 Trace test instances.
l Run:
vc-type { ldp | bgp }
----End
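The universal options above are all set in the test instance view. The following is a minimal sketch for an ICMP test instance; the instance name, destination address, and values are hypothetical:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin icmptest
[HUAWEI-nqa-admin-icmptest] test-type icmp
[HUAWEI-nqa-admin-icmptest] destination-address ipv4 10.1.1.2
[HUAWEI-nqa-admin-icmptest] interval seconds 5
[HUAWEI-nqa-admin-icmptest] probe-count 5
[HUAWEI-nqa-admin-icmptest] records history 10
```

Only options permitted for the chosen test type can be configured, as listed in the NOTE under each command above.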
Prerequisites
The configurations of the Universal NQA Test Parameters function are complete.
Procedure
Step 1 Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of
the test instance configured on the NQA client.
----End
Example
<HUAWEI> display nqa-agent
NQA Tests Max:2000 NQA Tests Number: 2
nqa test-instance a a
test-type pwe3trace
local-pw-id 1
vc-type bgp
nqa status : normal
nqa test-instance a b
test-type icmpjitter
destination-address ipv4 100.1.1.201
source-address ipv4 100.1.1.200
hardware-based enable
ttl 100
tos 100
timeout 20
nqa status : normal
Applicable Environment
If the round-trip transmission delay threshold is configured for an NQA test instance, the NQA
test result will contain statistics on the test packets that exceed the set threshold. This provides
the basis for the network manager to analyze the operation status of the specified service.
Pre-configuration Tasks
Before configuring the round-trip transmission delay threshold, complete the following tasks:
Data Preparation
To configure the round-trip transmission delay threshold, you need the following data.
No. Data
Context
Do as follows on the router to perform the NQA test:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the NQA instance view is displayed.
Step 3 Run:
test-type { dhcp | dns | ftp | http | icmp | jitter | lspjitter | lspping |
lsptrace | snmp | tcp | trace | udp | pathmtu | pwe3trace | pwe3ping | macping |
mactrace | icmpjitter | pathjitter | mping | mtrace | vplsping | vplstrace |
vplsmping | vplsmtrace | vplspwping | vplspwtrace | gmacping | gmactrace }
----End
Prerequisites
The configurations of the Round-Trip Delay Thresholds Test function are complete.
Procedure
Step 1 Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of
the test instance configured on the NQA client.
----End
Example
Run the display nqa-agent verbose command. If the test is successful, information similar to
the following is displayed:
<HUAWEI> display nqa-agent verbose
NQA Tests Max:2000 NQA Tests Number: 1
NQA Flow Max:1000 NQA Flow Remained:1000
Applicable Environment
In all jitter-type tests (except Path Jitter and LSP Jitter), after the uni-directional transmission
delay threshold is configured, the test results contain statistics on the test packets that exceed
the set threshold. This provides the basis for the network manager to analyze the operation status
of the specified service.
Pre-configuration Tasks
Before configuring the uni-directional transmission delay threshold, complete the following
tasks:
Data Preparation
To configure the uni-directional transmission delay threshold, you need the following data.
No. Data
Context
Do as follows on the router to perform the NQA test:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the NQA instance view is displayed.
Step 3 Run:
test-type { jitter | icmpjitter }
Step 4 Run:
destination-address ipv4 ip-address
Step 5 (Optional) Run:
destination-port port-number
Step 6 Run:
threshold owd-sd owd-sd-value
The uni-directional transmission (from the source to the destination) delay threshold is
configured.
Step 7 Run:
threshold owd-ds owd-ds-value
The uni-directional transmission (from the destination to the source) delay threshold is
configured.
----End
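Steps 4 through 7 can be sketched as follows for a hypothetical jitter test instance; the destination address, port, and threshold values (in milliseconds) are illustrative, and the destination-to-source threshold is assumed to be set with the threshold owd-ds command:

```
[HUAWEI] nqa test-instance admin jittertest
[HUAWEI-nqa-admin-jittertest] test-type jitter
[HUAWEI-nqa-admin-jittertest] destination-address ipv4 10.1.1.2
[HUAWEI-nqa-admin-jittertest] destination-port 5000
[HUAWEI-nqa-admin-jittertest] threshold owd-sd 50
[HUAWEI-nqa-admin-jittertest] threshold owd-ds 50
```

Packets whose one-way delay exceeds either threshold are then counted separately in the test result.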
Prerequisites
The configurations of the Uni-directional Transmission Delay Thresholds Test function are
complete.
Procedure
Step 1 Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to view the status of
the test instance configured on the NQA client.
----End
Example
Using the display nqa-agent [ admin-name test-name ] [ verbose ] command, you can view the
uni-directional transmission delay threshold configured for the NQA test. For example:
<HUAWEI> display nqa-agent verbose
NQA Tests Max:2000 NQA Tests Number: 1
NQA Flow Max:1000 NQA Flow Remained:1000
Applicable Environment
Trap messages are generated regardless of whether an NQA test succeeds or fails. You can
control whether trap messages are sent to the NM station by enabling or disabling the trap
function.
NQA also supports the sending of trap messages to the NM station when the uni-directional
transmission delay or the round-trip transmission delay exceeds the threshold.
l For all tests supporting traps, if the round-trip transmission delay exceeds the threshold and
the trap function is enabled, trap messages are sent to the NM station with the specified IP
address.
l For all the Jitter tests (excluding LSP Jitter and Path Jitter), if the uni-directional
transmission delay exceeds the threshold and the trap function is enabled, trap messages
are sent to the NM station with the specified IP address.
Trap messages carry information such as destination IP address, operation status, destination IP
address of the test packet, minimum RTT, maximum RTT and total RTT, number of sent probe
packets, number of received packets, RTT square sum, and time of the last successful probe.
Pre-configuration Tasks
Before configuring the trap function, complete the following tasks:
Data Preparation
To configure the trap function, you need the following data.
No. Data
No. Data
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type { jitter | icmpjitter }
Step 4 Run:
destination-address ipv4 ip-address
Step 5 (Optional) Run:
destination-port port-number
Step 6 Run:
send-trap testfailure
Step 7 Run:
test-failtimes times
The number of test failures that trigger sending a trap message is configured.
----End
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type { jitter | icmpjitter }
Step 4 Run:
destination-address ipv4 ip-address
Step 5 (Optional) Run:
destination-port port-number
Step 6 Run:
send-trap probefailure
Step 7 Run:
probe-failtimes times
The number of probe failures that triggers the sending of a trap message is configured.
----End
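The two trap procedures above differ only in their final pair of commands. The following sketch combines them for a hypothetical jitter test instance; whether both trap conditions can be enabled on one instance at the same time is an assumption, and the addresses and counts are illustrative:

```
[HUAWEI] nqa test-instance admin traptest
[HUAWEI-nqa-admin-traptest] test-type jitter
[HUAWEI-nqa-admin-traptest] destination-address ipv4 10.1.1.2
[HUAWEI-nqa-admin-traptest] send-trap testfailure
[HUAWEI-nqa-admin-traptest] test-failtimes 3
[HUAWEI-nqa-admin-traptest] send-trap probefailure
[HUAWEI-nqa-admin-traptest] probe-failtimes 3
```

With this configuration, a trap is sent after three consecutive test failures or three consecutive probe failures.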
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type { jitter | icmpjitter }
----End
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type { jitter | icmpjitter }
Step 5 (Optional) Run:
destination-port port-number
Step 6 Run:
send-trap { owd-ds | owd-sd | rtd }*
Sending trap messages when the transmission delay exceeds the threshold is enabled.
----End
Prerequisites
The configurations of the Trap function are complete.
Procedure
Step 1 Run the display trapbuffer [ size value ] command to view the trap messages sent in an NQA test.
----End
Example
Run the display trapbuffer [ size value ] command. If information about the trap messages is
displayed, the configuration has succeeded.
For example:
<HUAWEI> display trapbuffer size 2
Trapping buffer configuration and contents:enabled
Allowed max buffer size : 1024
Actual buffer size : 256
Channel number : 3 , channel name : trapbuffer
Dropped messages : 0
Overwritten messages : 0
Current messages : 11
#May 6 2009 12:54:17 CBB6-PE3 SINDEX/4/INDEXMAP:OID
1.3.6.1.4.1.2011.5.25.110.2.0.1 ShortIFIndexMapTable changed.
#May 6 2009 11:02:37 CBB6-PE3 SRM_BASE/4/ENTITYREGSUCCESS: OID
1.3.6.1.4.1.2011.5.25.129.2.1.18 Physical entity register succeeded.
(EntityPhysicalIndex=17367040, BaseTrapSeverity=2, BaseTrapProbableCause=70144,
BaseTrapEventType=5, EntPhysicalContainedIn=1677721
6, EntPhysicalName="SRU slot 9", RelativeResource="", ReasonDescription="MPU9")
Applicable Environment
The system saves only the latest five test results; earlier results are overwritten. Therefore, if
the NM station does not poll the results in time, test results are lost. You can send the test result
statistics to the FTP server through FTP when they reach the capacity of the local storage, or
send them periodically. This effectively prevents the loss of test results and facilitates network
management based on the analysis of test results at different times.
Pre-configuration Tasks
Before configuring test results to be sent to the FTP server, complete the following tasks:
l Configuring the FTP server
l Configuring a reachable route between the NQA client and the FTP server
l Configuring a test instance
Data Preparation
To configure test results to be sent to the FTP server, you need the following data.
No. Data
2 User name and password used for logging into the FTP server
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa-ftp-record ip-address ip-address
or
nqa-ftp-record vpn-instance vpn-instance
Step 3 Run:
nqa-ftp-record username username
The user name for logging into the FTP server is configured.
Step 4 Run:
nqa-ftp-record password { password | cipher password }
Step 5 Run:
nqa-ftp-record filename filename
----End
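Putting the steps together, the following is a minimal sketch; the server address and user name echo the sample output later in this section, while the password and file name are hypothetical. The nqa-ftp-record enable command from the next procedure is included to show the complete sequence:

```
<HUAWEI> system-view
[HUAWEI] nqa-ftp-record ip-address 11.1.1.8
[HUAWEI] nqa-ftp-record username wang
[HUAWEI] nqa-ftp-record password cipher Huawei@123
[HUAWEI] nqa-ftp-record filename nqa-results
[HUAWEI] nqa-ftp-record enable
```

Run the display nqa-ftp-record configuration command to verify the settings.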
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa-ftp-record enable
----End
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa-ftp-record item-num item-number
The number of test results to be saved on the FTP server through FTP is configured.
----End
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa-ftp-record time time
The duration of saving test results to the FTP server through FTP is configured.
----End
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa-ftp-record trap-enable
Alarms are configured to be sent to the NM station after the FTP transmission succeeds.
No alarm message is generated the first time the FTP transmission succeeds. From the second
time onwards, an alarm message is generated each time the FTP transmission succeeds.
----End
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
The NQA test instance view is displayed.
Step 3 Run:
test-type { dhcp | dns | ftp | http | icmp | jitter | lspjitter | lspping |
lsptrace | snmp | tcp | trace | udp | pathmtu | pwe3trace | pwe3ping | macping |
mactrace | icmpjitter | pathjitter | mping | mtrace | vplsping | vplstrace |
vplsmping | vplsmtrace | vplspwping | vplspwtrace | gmacping | gmactrace }
Step 4 Run:
destination-address ipv4 ip-address
Step 5 (Optional) Run:
destination-port port-number
Step 6 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
Prerequisites
The configurations of the Test Results to Be Sent to the FTP Server function are complete.
Procedure
Step 1 Run the display nqa-ftp-record configuration command to check the configuration for saving
NQA test results.
----End
Example
Run the display nqa-ftp-record configuration command to check the configuration for saving
NQA test results.
<HUAWEI> display nqa-ftp-record configuration
---------------NQA FTP SAVE RECORD CONFIGURATION---------------
FUNCTION: ENABLE TRAP: DISABLE
IP-ADDRESS:11.1.1.8
VPN-INSTANCE:
USERNAME:wang
PASSWORD:%$%$gw1.QU~4M1I@ESF>b/VP,@7.%$%$
FILENAME:icmp
ITEM-NUM:10010
TIME:2
LAST FINISHED FILENAME:icmp20080605-150350.txt
Applicable Environment
You can monitor the network by configuring alarm thresholds. After monitoring conditions are
configured, the device sends an alarm to the NM station when a monitored item in the test result
crosses the configured upper or lower threshold. This allows you to monitor the operating status
of the network in real time.
Pre-configuration Tasks
Before configuring the threshold for the NQA alarm, complete the following task:
l Configuring a test instance
Data Preparation
To configure the threshold for the NQA alarm, you need the following data.
No. Data
3 Upper threshold
4 Lower threshold
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
NOTE
At present, only the absolute statistics function rather than the relative statistics function is supported.
----End
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
Step 3 Run:
start
Select the start mode as required because the start command has several forms.
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
----End
Prerequisites
The configurations of the Threshold for the NQA Alarm function are complete.
Procedure
l Run the display nqa event command to check the maximum number of events that can be
configured and the number of events that are configured.
l Run the display nqa alarm command in the NQA view to check the maximum number of
alarms that can be configured and the number of alarms that are configured.
l Run the display nqa-agent [ admin-name test-name ] [ verbose ] command to check the
status of the test instance configured on the NQA client.
----End
Example
Run the display nqa event command to check the maximum number of events that can be
configured and the number of events that are configured.
<HUAWEI> display nqa event
NQA event information:
------------------------------------------------------
NQA Event Max: 5 NQA Event Number: 1
------------------------------------------------------
Run the display nqa alarm command to check the maximum number of alarms that can be
configured and the number of alarms that are configured.
[HUAWEI-nqa-admin-icmp] display nqa alarm
NQA alarm information:
------------------------------------------------------
NQA Alarm Max: 5 NQA Alarm Number: 2
------------------------------------------------------
Run the display nqa-agent command to check the status of the test instance configured on the
NQA client.
<HUAWEI> display nqa-agent
NQA Tests Max:2000 NQA Tests Number: 1
NQA Flow Max:1000 NQA Flow Remained:1000
nqa test-instance admin icmp
test-type icmp
destination-address ipv4 11.1.1.32
frequency 5
alarm 10 rtt-average 2 rising-threshold 200 10 falling-threshold 0 10
alarm 20 lost-packet-ratio 2 rising-threshold 10 10 falling-threshold 1 10
nqa status : normal
Applicable Environment
With the NQA VPLS MFIB ping, the following performance indexes of the VPLS network can
be checked:
l Multicast connectivity of PEs belonging to a specified VSI in the VPLS domain
l IGMP snooping of the egress belonging to a specified VSI in the VPLS domain
Pre-configuration Tasks
Before configuring a VPLS MFIB ping to check the VPLS network, complete the following
tasks:
Data Preparation
To configure a VPLS MFIB ping to check the VPLS network, you need the following data.
No. Data
3 (Optional) multicast source IP address, pad string, length of the payload in the
Echo Request packet, timeout period during which an Echo Reply packet is
awaited, time waiting for the next Echo Request packet, reply mode, priority of
the Echo Request packet, and permitted failure percentage
4 Multicast IP address
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type vplsmping
The name of the VSI to which the test instance corresponds is configured.
Step 5 Run:
destination-address ipv4 ip-address
Step 6 (Optional) Run the following commands to configure other parameters for the VPLS MFIB ping.
l Run the lsp-replymode { no-reply | udp | udp-via-vpls | udp-router-alert | level-control-
channel } command to configure the reply mode of the Echo Reply packet.
NOTE
The lsp-replymode no-reply command starts a uni-directional test. Regardless of whether the
test succeeds, the test result shows that the test fails. If the test succeeds, the number of timed-out
packets is displayed in the test result; if the test fails, the number of discarded packets is
displayed in the test result.
l Run the source-address ipv4 ip-address command to configure the source IP address.
l Run the datasize size command to set the size of the test packet.
NOTE
The sum of datasize and the size of the packet header should be less than the MTU of the interface;
otherwise, the test fails.
l Run the ttl number command to set the TTL value.
l Run the lsp-exp exp command to configure the LSP EXP value.
l Run the datafill fillstring command to configure the pad string.
l Run the interval seconds interval command to set the interval for sending test packets.
l Run the fail-percent percent command to set the permitted maximum percentage of the failed
NQA tests.
Step 7 Run:
start
The start command has several forms. You can choose one of the following forms as required:
l Run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command to start the test instance
immediately.
l Run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command to start
the test instance at a specified time.
l Run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command
to start the test instance after a certain delay.
----End
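A minimal sketch of the procedure above follows; the instance name and multicast group address are hypothetical. Step 4 (binding the VSI name) is omitted because its command syntax is not shown in this section:

```
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin mfibtest
[HUAWEI-nqa-admin-mfibtest] test-type vplsmping
[HUAWEI-nqa-admin-mfibtest] destination-address ipv4 225.1.1.1
[HUAWEI-nqa-admin-mfibtest] lsp-replymode udp-via-vpls
[HUAWEI-nqa-admin-mfibtest] start now
```

After the test finishes, run the display nqa results command to view the per-receiver statistics.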
Prerequisites
The configurations of the VPLS MFIB ping function are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command only shows the results of the latest five tests.
Procedure
Step 1 Run:
display nqa results [ test-instance admin-name test-name ]
----End
Example
Run the display nqa results command, and the following information is displayed.
<HUAWEI> display nqa results
NQA entry(admin, vplsmping) :testflag is inactive ,testtype is vplsmping
1 . Test 1 result The test is finished
Completions: success Timeouts number: 0
Drops number: 0 TargetAddress: 225.1.1.1
ProbeResponses number: 6 SentProbes number: 3
Busies: 0 SequenceError number: 0
Lost packet ratio: 0%
1 . Receiver 1
CompletionTime Min/Max/Sum/Avg: 15/26/59/19
Sum2CompletionTime: 1225
LastGoodProbe time: 2009-4-23 11:48:9.4
RecevierAddress: 2.2.2.2
Fib hit: Hit
Applicable Environment
NQA MAC ping and MAC trace test instances are similar to the ping and trace commands in
that they detect the connectivity of VLAN and VPLS networks, but they output more detailed
test information. To detect the connectivity of a VLAN network, devices on the VLAN network
must be enabled with basic Ethernet Connectivity Fault Management (CFM) functions; to detect
the connectivity of a VPLS network, PEs on the VPLS network must be enabled with VPLS-based
Ethernet CFM.
Pre-configuration Tasks
Before configuring a MAC Ping and MAC trace test instance, complete the following tasks:
l In the case of a VLAN MAC Ping and MAC trace test instance, configuring a VLAN
network and enabling basic Ethernet CFM functions on the VLAN network
l In the case of a VPLS MAC Ping and MAC trace test instance, configuring a VPLS network,
ensuring that the VSI is in the Up state, and enabling basic Ethernet CFM functions on PEs
Data Preparation
To configure a MAC Ping and MAC trace test instance, you need the following data.
No. Data
2 Names of the MD and MA, local MEP ID, and destination MAC address
3 (Optional): Packet size, number of probes for one NQA test instance, interval at
which packets are sent, source address where packets are sent, TTL, test failure
conditions, historical records and result records, and aging time.
Context
Do as follows on the NQA client where the NQA MAC trace test instance is to be initiated:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Configure the test type and the MD and MA.
1. Run:
test-type mactrace
The test type is set to mactrace.
2. Run:
md md-name ma ma-name
The MD and MA that send the NQA test packets are configured.
Step 4 Choose one of the following procedures to configure a destination MAC address for the MAC
Trace test.
1. Run:
destination-address mac mac-address
The destination MAC address is configured for the MAC trace test.
2. Run:
destination-address remote-mep mep-id remote-mep
The remote MEP is specified as the destination of the MAC trace test.
NOTE
If the destination MAC address is of the remote-mep type, you must configure the mapping between
remote-mep and the destination MAC address on the CFM module before the destination MAC address is
configured.
Step 5 (Optional) Configure optional parameters used to transmit test packets in an actual network.
1. Run:
ttl number
The TTL of the test packets is set.
2. Run:
test-failtimes times
The NQA test instance is configured to send a trap message to the NMS when the number
of continuous test failures reaches the specified value.
3. Run:
send-trap rtd
The NQA test instance is configured to send a trap message when the round-trip delay
exceeds the specified threshold.
4. Set the maximum numbers of historical records and result records that can be saved for
the NQA test instance.
Step 8 (Optional) Run:
agetime hh:mm:ss
The aging time of the NQA test instance is set. The default aging time is 0, indicating that the
test instance will not age.
Step 9 Schedule the NQA test instance.
1. (Optional) Run:
frequency interval
The interval at which the NQA test is automatically performed is set.
2. Run:
start
The NQA test instance is started. For details on the forms of the start command, refer to the
Command Reference.
----End
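For example, assuming a hypothetical test instance named admin mactrace with an MD named md1, an MA named ma1, and a destination MAC address of 00e0-fc12-3456 (illustrative values only, not from this guide), the preceding steps might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin mactrace
[HUAWEI-nqa-admin-mactrace] test-type mactrace
[HUAWEI-nqa-admin-mactrace] md md1 ma ma1
[HUAWEI-nqa-admin-mactrace] destination-address mac 00e0-fc12-3456
[HUAWEI-nqa-admin-mactrace] start now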
Prerequisites
The configurations of the MAC ping and MAC trace test instance are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command output only shows the results of the latest five
tests.
Procedure
l Run the display nqa results [ test-instance admin-name test-name ] command to view test
results.
----End
Example
If a MAC ping or MAC trace test instance is successfully performed, run the display nqa
results command to view the test results. For a MAC trace test instance, the command output
shows the statistics about each hop:
<HUAWEI> display nqa results test-instance admin mactrace
NQA entry(admin, mactrace) :testflag is inactive ,testtype is mactrace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Drop operation number:0 Operation timeout number:0
System busy operation number:0
Last good path Time:2000-01-05 02:35:35.0
Applicable Environment
On a VLAN network where the MD, MA, and MEP are not configured, GMAC ping and GMAC
trace can detect the connectivity, packet loss percentage, and delay between any two devices.
GMAC ping and GMAC trace send request packets carrying a destination MAC address from
the source and parse the information in the reply packets to detect link connectivity and locate
faults. Compared with MAC ping and MAC trace, GMAC ping and GMAC trace do not require
the configuration of the MD, MA, and MEP.
Pre-configuration Tasks
Before configuring GMAC ping and GMAC trace to detect the connectivity of a VLAN network,
complete the following tasks:
l In the case of GMAC ping, enabling GMAC ping globally on both the source and the
destination
l In the case of GMAC trace, enabling GMAC trace globally on both the source and the
destination
Data Preparation
To configure GMAC ping and GMAC trace to detect the connectivity of a VLAN network, you
need the following data.
No. Data
1 VLAN ID
Context
Do as follows on the NQA client where the NQA GMAC ping test instance is to be initiated:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type gmacping
Step 4 Run:
destination-address mac mac-address
The destination MAC address is configured, which can be a bridge MAC address or the MAC
address of an interface.
Step 5 Run:
vlan vlan-id
NOTE
To view more optional parameters, you can run the display nqa-parameter command in the test instance view
after the NQA test instance type is configured.
Step 6 Run:
start
----End
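For example, assuming a hypothetical test instance named admin gmacping, a destination MAC address of 00e0-fc12-3456, and VLAN 10 (illustrative values only), the preceding steps might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin gmacping
[HUAWEI-nqa-admin-gmacping] test-type gmacping
[HUAWEI-nqa-admin-gmacping] destination-address mac 00e0-fc12-3456
[HUAWEI-nqa-admin-gmacping] vlan 10
[HUAWEI-nqa-admin-gmacping] start now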
Context
Do as follows on the NQA client where the NQA GMAC trace test instance is to be initiated:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type gmactrace
Step 4 Run:
destination-address mac mac-address
The destination MAC address is configured, which can be a bridge MAC address or the MAC
address of an interface.
Step 5 Run:
vlan vlan-id
NOTE
To view more optional parameters, you can run the display nqa-parameter command in the test instance view
after the NQA test instance type is configured.
Step 6 Run:
start
----End
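For example, assuming a hypothetical test instance named admin gmactrace, a destination MAC address of 00e0-fc12-3456, and VLAN 10 (illustrative values only), the preceding steps might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin gmactrace
[HUAWEI-nqa-admin-gmactrace] test-type gmactrace
[HUAWEI-nqa-admin-gmactrace] destination-address mac 00e0-fc12-3456
[HUAWEI-nqa-admin-gmactrace] vlan 10
[HUAWEI-nqa-admin-gmactrace] start now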
Prerequisites
The configurations of GMAC ping and GMAC trace are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command output only shows the results of the latest five
tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command on the NQA
client to view test results.
----End
Example
Run the display nqa results command to view the results of a GMAC ping test instance. For
example:
<HUAWEI> display nqa results test-instance test gmacping
NQA entry(test,gmacping) :testflag is inactive ,testtype is gmacping
1 . Test 1 result The test is finished
SendProbe:3 ResponseProbe:0
Completion:failed RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:0/0/0/0 RTT Square Sum:0
NumOfRTT:0 Drop operation number:3
Operation sequence errors number:0 RTT Stats errors number:0
System busy operation number:0 Operation timeout number:0
Min Positive SD:0 Min Positive DS:0
Max Positive SD:0 Max Positive DS:0
Positive SD Number:0 Positive DS Number:0
Positive SD Sum:0 Positive DS Sum:0
Positive SD Square Sum:0 Positive DS Square Sum:0
Min Negative SD:0 Min Negative DS:0
Max Negative SD:0 Max Negative DS:0
Negative SD Number:0 Negative DS Number:0
Negative SD Sum:0 Negative DS Sum:0
Negative SD Square Sum:0 Negative DS Square Sum:0
Min Delay SD:0 Min Delay DS:0
Avg Delay SD:0 Avg Delay DS:0
Max Delay SD:0 Max Delay DS:0
Delay SD Square Sum:0 Delay DS Square Sum:0
Packet Loss SD:0 Packet Loss DS:0
Packet Loss Unknown:3 Average of Jitter:0
Average of Jitter SD:0 Average of Jitter DS:0
Jitter out value:0.0000000 Jitter in value:0.0000000
NumberOfOWD:0 Packet Loss Ratio: 100%
OWD SD Sum:0 OWD DS Sum:0
ICPIF value: 0 MOS-CQ value: 0
TimeStamp unit: ms Packet Rewrite Number: 0
Packet Rewrite Ratio: 0% Packet Disorder Number: 0
Packet Disorder Ratio: 0% Fragment-disorder Number: 0
Fragment-disorder Ratio: 0%
Run the display nqa results command. If the following information is displayed, the GMAC
trace test instance has succeeded.
<HUAWEI> display nqa results test-instance test gmactrace
NQA entry(test,gmactrace) :testflag is active ,testtype is gmactrace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Drop operation number:0 Operation timeout number:0
System busy operation number:0
Last good path Time:2011-06-20 17:50:18.2
Applicable Environment
NQA GMAC ping and GMAC trace test instances can detect the connectivity, packet loss
percentage, and delay of the VPLS network between PEs, between PEs and CEs, and between
CEs. Compared with MAC ping and MAC trace, GMAC ping and GMAC trace do not need to
configure parameters for the MD, MA, and MEP.
Pre-configuration Tasks
Before configuring GMAC ping and GMAC trace to detect the connectivity of a VPLS network,
complete the following tasks:
l In the case of a VPLS GMAC ping test instance, enabling GMAC ping globally on both
the source and the destination
l In the case of a VPLS GMAC trace test instance, enabling GMAC trace globally on both
the source and the destination
Data Preparation
To configure GMAC ping and GMAC trace to detect the connectivity of a VPLS network, you
need the following data.
No. Data
1 Name of a VSI
Context
Do as follows on the NQA client where the NQA GMAC ping test instance is to be initiated:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type gmacping
Step 4 Run:
destination-address mac mac-address
The destination MAC address is configured, which can be a bridge MAC address or the MAC
address of an interface.
Step 5 Run:
vsi vsi-name
Step 6 Run:
start
l Run:
start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
For details on parameters and options of the start command, refer to the Command Reference.
----End
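For example, assuming a hypothetical test instance named test gmacping, a destination MAC address of 00e0-fc12-3456, and a VSI named vsi1 (illustrative values only), the preceding steps might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance test gmacping
[HUAWEI-nqa-test-gmacping] test-type gmacping
[HUAWEI-nqa-test-gmacping] destination-address mac 00e0-fc12-3456
[HUAWEI-nqa-test-gmacping] vsi vsi1
[HUAWEI-nqa-test-gmacping] start now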
Context
Do as follows on the NQA client where the NQA GMAC trace test instance is to be initiated:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type gmactrace
Step 4 Run:
destination-address mac mac-address
The destination MAC address is configured, which can be a bridge MAC address or the MAC
address of an interface.
Step 5 Run:
vsi vsi-name
Step 6 Run:
start
----End
Prerequisites
The configurations of VPLS GMAC ping and VPLS GMAC trace test instances are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command output only shows the results of the latest five
tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command on the NQA
client to view test results.
----End
Example
Run the display nqa results command to view the results of a VPLS GMAC ping test instance.
For example:
<HUAWEI> display nqa results test-instance test gmacping
NQA entry(test,gmacping) :testflag is inactive ,testtype is gmacping
1 . Test 1 result The test is finished
SendProbe:3 ResponseProbe:0
Completion:failed RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:0/0/0/0 RTT Square Sum:0
NumOfRTT:0 Drop operation number:3
Operation sequence errors number:0 RTT Stats errors number:0
System busy operation number:0 Operation timeout number:0
Min Positive SD:0 Min Positive DS:0
Max Positive SD:0 Max Positive DS:0
Positive SD Number:0 Positive DS Number:0
Positive SD Sum:0 Positive DS Sum:0
Positive SD Square Sum:0 Positive DS Square Sum:0
Min Negative SD:0 Min Negative DS:0
Max Negative SD:0 Max Negative DS:0
Negative SD Number:0 Negative DS Number:0
Negative SD Sum:0 Negative DS Sum:0
Negative SD Square Sum:0 Negative DS Square Sum:0
Min Delay SD:0 Min Delay DS:0
Avg Delay SD:0 Avg Delay DS:0
Max Delay SD:0 Max Delay DS:0
Delay SD Square Sum:0 Delay DS Square Sum:0
Packet Loss SD:0 Packet Loss DS:0
Packet Loss Unknown:3 Average of Jitter:0
Average of Jitter SD:0 Average of Jitter DS:0
Jitter out value:0.0000000 Jitter in value:0.0000000
NumberOfOWD:0 Packet Loss Ratio: 100%
OWD SD Sum:0 OWD DS Sum:0
ICPIF value: 0 MOS-CQ value: 0
TimeStamp unit: ms Packet Rewrite Number: 0
Packet Rewrite Ratio: 0% Packet Disorder Number: 0
Packet Disorder Ratio: 0% Fragment-disorder Number: 0
Fragment-disorder Ratio: 0%
Run the display nqa results command. If the following information is displayed, the VPLS
GMAC trace test instance has succeeded.
<HUAWEI> display nqa results test-instance test gmactrace
NQA entry(test,gmactrace) :testflag is active ,testtype is gmactrace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Drop operation number:0 Operation timeout number:0
Applicable Environment
As a main technology for setting up a metropolitan area network (MAN), Virtual Private LAN
Service (VPLS) has been widely applied globally. VPLS, however, provides limited service
management and monitoring capabilities, so an optimized VPLS OAM mechanism is required.
On a VPLS network, the performance of PWs affects the entire network performance. For
example, the connectivity of PWs determines whether traffic can be normally forwarded between
users, and the forwarding performance of PWs determines whether the forwarding capacity of
the network complies with the Service Level Agreement (SLA) signed with users. NQA VPLS
PW ping and NQA VPLS PW trace test instances can detect a specific PW and provide data
such as jitter and delay for network analysis.
Pre-configuration Tasks
Before configuring VPLS PW ping and VPLS PW trace test instances, configure a VPLS
network correctly to ensure that the VSI is in the Up state.
Data Preparation
To configure VPLS PW ping and VPLS PW trace test instances, you need the following data.
No. Data
2 Destination address in the case of an LDP VPLS network; local site ID and remote
site ID in the case of a BGP VPLS network
3 (Optional) Local PW, test period, number of sent packets, interval at which packets
are sent, packet size, padding, and failure percentage
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type vplspwping
If the number of sent packets is set to a small value, a relatively large error may occur in the
statistics of the test result.
l Run:
fail-percent percent
The failure percentage of the test is set.
l Run:
remote-address ipv4 remote-ip-address
The address of the remote end of the multi-hop PW formed by connecting a VPLS PW to a
VLL PW is configured.
NOTE
To view more optional parameters, you can enter the test instance view after the NQA test instance type is
configured and then run the display nqa-parameter command.
Step 7 Run:
start
l Run:
start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
For details on parameters of the start command, refer to the Command Reference.
----End
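For example, assuming an LDP VPLS scenario with a hypothetical test instance named vplspw ping, a VSI named vsi1, and a peer address of 3.3.3.3 (illustrative values; the vsi and destination-address ipv4 commands are assumed to apply to this test type as they do to the other VPLS test instances in this chapter), the procedure might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance vplspw ping
[HUAWEI-nqa-vplspw-ping] test-type vplspwping
[HUAWEI-nqa-vplspw-ping] vsi vsi1
[HUAWEI-nqa-vplspw-ping] destination-address ipv4 3.3.3.3
[HUAWEI-nqa-vplspw-ping] start now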
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type vplspwtrace
Step 4 (Optional) Run:
probe-count number
The number of sent packets is set.
Step 5 (Optional) Run:
remote-address ipv4 remote-ip-address
The address of the remote end of the multi-hop PW formed by connecting a VPLS PW to a
VLL PW is configured.
l Run:
lsp-path full-display
All P nodes along the LSP path are displayed in the NQA test result.
NOTE
To view more optional parameters, you can enter the test instance view after the NQA test instance type is
configured and then run the display nqa-parameter command.
Step 7 Run:
start
l Run:
start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
For details on parameters of the start command, refer to the Command Reference.
----End
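For example, assuming a hypothetical test instance named t t, a VSI named vsi1, and a peer address of 3.3.3.3 (illustrative values; the vsi and destination-address ipv4 commands are assumed to apply to this test type as they do to the other VPLS test instances in this chapter), the procedure might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance t t
[HUAWEI-nqa-t-t] test-type vplspwtrace
[HUAWEI-nqa-t-t] vsi vsi1
[HUAWEI-nqa-t-t] destination-address ipv4 3.3.3.3
[HUAWEI-nqa-t-t] lsp-path full-display
[HUAWEI-nqa-t-t] start now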
Prerequisites
All the configurations of the VPLS PW ping and VPLS PW trace test instances are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command output only shows the results of the latest five
tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command on the NQA
client to view test results.
----End
Example
Run the display nqa results command. If the following information is displayed, it means that
the VPLS PW ping test succeeds.
<HUAWEI> display nqa results test-instance vplspw ping
NQA entry(vplspw,ping) :testflag is inactive ,testtype is vplspwping
1 . Test 1 result The test is finished
SendProbe:3 ResponseProbe:3
Completion:success RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:1/30/14/41 RTT Square Sum:1001
NumOfRTT:3 Drop operation number:0
Operation sequence errors number:0 RTT Stats errors number:0
System busy operation number:0 Operation timeout number:0
Min Positive SD:10 Min Positive DS:40
Max Positive SD:10 Max Positive DS:40
Positive SD Number:1 Positive DS Number:1
Positive SD Sum:10 Positive DS Sum:40
Positive SD Square Sum:100 Positive DS Square Sum:1600
Min Negative SD:20 Min Negative DS:30
Max Negative SD:20 Max Negative DS:30
Negative SD Number:1 Negative DS Number:1
Negative SD Sum:20 Negative DS Sum:30
Run the display nqa results command. If the following information is displayed, it means that
the VPLS PW trace test succeeds.
<HUAWEI> display nqa results test-instance t t
NQA entry(t, t) :testflag is inactive ,testtype is vplspwtrace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Drop operation number:0
Last good path Time:2010-07-23 14:23:20.4
1 . Hop 1
Send operation times: 3 Receive response times: 3
Min/Max/Average Completion Time: 70/140/93
Sum/Square-Sum Completion Time: 280/29400
RTD OverThresholds number: 0
Last Good Probe Time: 2010-07-23 14:23:20.4
Destination ip address:3.3.3.3
Lost packet ratio: 0 %
Applicable Environment
With the NQA VPLS MFIB trace, the IGMP snooping of the egress can be checked and faulty
PEs can be located. The NQA VPLS MFIB trace is applicable to the following networks:
l Kompella VPLS
l Martini VPLS
Pre-configuration Tasks
Before configuring a VPLS MFIB trace to check the VPLS network, complete the following
tasks:
1. Configuring a VPLS network
2. Ensuring that the VSI is in the Up state
Data Preparation
To configure a VPLS MFIB Trace to check the Martini VPLS network, you need the following
data.
No. Data
3 (Optional) multicast source IP address, pad string, length of the payload in the
Echo Request packet, timeout period during which an Echo Reply packet is
awaited, time waiting for the next Echo Request packet, reply mode, priority of
the Echo Request packet, and permitted failure percentage
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type vplsmtrace
Step 4 Run:
vsi vsi-name
The name of the VSI to which the test instance corresponds is configured.
Step 5 Run:
destination-address ipv4 ip-address
Step 6 Run:
remote-address ipv4 remote-ip-address
The remote IP address is configured for the VPLS MFIB trace test instance.
Step 7 (Optional) Run the following commands to configure other parameters for the VPLS MFIB trace
test instance as required.
l To configure the aging time of the NQA test instance, run the agetime hh:mm:ss command.
l To set the description of the NQA test instance, run the description string (NQA view)
command.
l To set the test period of the NQA test instance, run the frequency interval command.
l To set the timeout period of the NQA test instance, run the timeout time command.
l To set the source IP address of the NQA test instance, run the source-address ipv4 ip-
address command.
l To set the number of hops after which a VPLS MFIB trace test instance is considered failed,
run the tracert-hopfailtimes times command.
l To set the lifetime of a VPLS MFIB trace test instance, run the tracert-livetime first-ttl
first-ttl max-ttl max-ttl command.
l To set the number of probe failures, that is, the threshold to trigger the sending of traps, run
the probe-failtimes times command.
l To set the maximum number of times of continuous test failures, run the test-failtimes
times command so that a trap message is sent when the maximum number of times is
exceeded.
l To configure the LSP EXP value, run the lsp-exp exp command.
l To configure the response mode of the Echo packet, run the lsp-replymode { no-reply |
udp | udp-via-vpls | udp-router-alert | level-control-channel } command.
NOTE
The lsp-replymode no-reply command starts a unidirectional test. No matter whether the test is
successful, the test result shows that the test fails. If the test is successful, the number of timeout
packets is displayed in the test result; if the test fails, the number of discarded packets is displayed
in the test result.
Step 8 Run:
start
The start command has several forms. You can choose one of the following forms as required:
l Run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command to start the test instance
immediately.
l Run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command to start
the test instance at a specified time.
l Run the start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command
to start the test instance after a certain delay.
----End
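For example, assuming a hypothetical test instance named admin test, a VSI named vsi1, a multicast destination address of 225.1.1.1, and a remote address of 6.6.6.6 (illustrative values only), Steps 1 through 8 might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin test
[HUAWEI-nqa-admin-test] test-type vplsmtrace
[HUAWEI-nqa-admin-test] vsi vsi1
[HUAWEI-nqa-admin-test] destination-address ipv4 225.1.1.1
[HUAWEI-nqa-admin-test] remote-address ipv4 6.6.6.6
[HUAWEI-nqa-admin-test] start now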
Prerequisites
All the configurations of the VPLS MFIB trace are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command only shows the results of the latest five tests.
Procedure
Step 1 Run:
display nqa results [ test-instance admin-name test-name ]
----End
Example
Run the display nqa results command, and the following information is displayed.
<HUAWEI> display nqa results
NQA entry(admin, test) :testflag is inactive ,testtype is vplsmtrace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Drop operation number:0
Last good path Time:2009-12-03 09:42:46.7
1 . Hop 1
Send operation times: 1 Receive response times: 1
Min/Max/Average Completion Time: 0/0/0
Sum/Square-Sum Completion Time: 0/0
RTD OverThresholds number: 0
Last Good Probe Time: 2009-12-03 09:42:43.9
Destination ip address:7.7.7.9
Lost packet ratio: 0 %
2 . Hop 2
Send operation times: 1 Receive response times: 1
Min/Max/Average Completion Time: 0/0/0
Sum/Square-Sum Completion Time: 0/0
RTD OverThresholds number: 0
Last Good Probe Time: 2009-12-03 09:42:46.7
Destination ip address:6.6.6.6
Lost packet ratio: 0 %
Pre-configuration Tasks
Before configuring a VPLS MAC ping test, complete the following tasks:
1. Configuring a VPLS network
2. Ensuring that the VSI is in the Up state
Data Preparation
To configure a VPLS MAC ping test, you need the following data.
No. Data
2 (Optional) VLAN ID
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type vplsping
Step 7 Run:
start
l To perform the NQA test immediately, run the start now [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started immediately.
l To perform the NQA test at the specified time, run the start at [ yyyy/mm/dd ] hh:mm:ss
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
l To perform the NQA test after a certain delay period, run the start delay { seconds second
| hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started after a certain delay.
For details about parameters in the start command, refer to the Command Reference.
----End
Prerequisites
All the configurations of the VPLS MAC Ping are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command output only shows the results of the latest five
tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name] command on the NQA
client to display test results.
<HUAWEI> display nqa results
NQA entry (1, 1) :testflag is inactive ,testtype is vplsping
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:168.1.1.1
Min/Max/Average Completion Time: 21/30/24
Sum/Square-Sum Completion Time: 74/1870
Last Good Probe Time: 2009-4-21 9:49:50.1
Lost packet ratio: 0 %
----End
Example
Run the display nqa results command. If the VPLS MAC ping test is successful, you can view
the following information:
l Statistics on errors:
Number of unroutable connections
Number of incorrect sequences
Timeout times of the test packets
l History statistics of each test packet:
Timestamp added when each test packet is sent
Timestamp added when each test packet is received
Status of each packet that is displayed on the NQA client
l Statistics on the result of each test instance:
Number of successful tests
Sum of the response time of tests
RTT square sum (lower 32 bits and higher 32 bits)
Minimum and maximum RTT of the packet
Destination IP address type and destination IP address
Number of received Response packets and sent packets
Time when the last packet is received
Pre-configuration Tasks
Before configuring a VPLS MAC trace test, complete the following tasks:
1. Configuring a VPLS network
2. Ensuring that the VSI is in the Up state
Data Preparation
To configure a VPLS MAC trace test, you need the following data.
No. Data
No. Data
2 (Optional) VLAN ID
3 Start and end modes of the NQA VPLS MAC trace test
Context
Do as follows on the NQA client:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run:
test-type vplstrace
For details about parameters in the start command, refer to the Command Reference.
----End
Prerequisites
All the configurations of the VPLS MAC Trace test are complete.
NOTE
NQA test results are not displayed automatically on the terminal. You must run the display nqa results
command to view test results. By default, the command output only shows the results of the latest five
tests.
Procedure
Step 1 Run the display nqa results [ test-instance admin-name test-name ] command on the NQA
client to display test results.
<HUAWEI> display nqa results
NQA entry (1, 1) :testflag is inactive ,testtype is vplsping
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number: 0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:168.1.1.1
Min/Max/Average Completion Time: 21/30/24
Sum/Square-Sum Completion Time: 74/1870
Last Good Probe Time: 2009-4-21 9:49:50.1
Lost packet ratio: 0 %
----End
Example
Run the display nqa results command. If the VPLS MAC trace test is successful, you can view
the following information:
l Statistics on errors:
Number of unroutable connections
Number of incorrect sequences
Timeout times of the test packets
l History statistics of each test packet:
Timestamp added when each test packet is sent
Timestamp added when each test packet is received
Status of each packet that is displayed on the NQA client
Destination IP address
l Statistics on the result of each test instance:
Number of successful tests
Number of received Response packets and sent packets
Time when the last packet is received
Prerequisites
To restart an NQA test instance, run the following command in the NQA instance view.
Context
CAUTION
Restarting an NQA test instance interrupts the running of tests.
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Run the nqa test-instance admin-name test-name command to enter the NQA test instance view.
Step 3 Run the restart command in the NQA test instance view to restart the NQA test instance.
----End
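For example, for a hypothetical test instance named admin icmp (an illustrative name), the restart procedure might be entered as follows:
<HUAWEI> system-view
[HUAWEI] nqa test-instance admin icmp
[HUAWEI-nqa-admin-icmp] restart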
Prerequisites
Cleared NQA statistics cannot be restored. Confirm the action before you run the command.
Context
Procedure
Step 1 Run the reset mtrace statistics command to clear statistics about MTrace packets.
Step 2 Run the system-view command to enter the system view.
Step 3 Run the nqa test-instance admin-name test-name command to enter the NQA test instance view.
Step 4 Run the clear-records command in the NQA view to clear history statistics on NQA tests and
test results.
----End
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
Networking Requirements
As shown in Figure 7-3, Router A functions as an NQA client. It is required to test whether
Router B is reachable.
Figure 7-3 Router A (NQA agent, interface POS1/0/0, 10.1.1.1/24) is directly connected to
Router B (interface POS1/0/0, 10.1.1.2/24).
Configuration Roadmap
The configuration roadmap is as follows:
1. Perform the NQA ICMP test to test whether the packet sent by Router A can reach Router
B.
2. Perform the NQA ICMP test to obtain the RTT of the packet.
Data Preparation
To complete the configuration, you need the IP address of Router B.
Procedure
Step 1 Configure the IP address. (The detailed procedure is not mentioned here.)
Step 2 Enable the NQA client and create an NQA ICMP test.
<RouterA> system-view
[RouterA] nqa test-instance admin icmp
[RouterA-nqa-admin-icmp] test-type icmp
[RouterA-nqa-admin-icmp] destination-address ipv4 10.1.1.2
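Step 3 Start the test instance and view the result. (The start now form of the start command is used here; for other forms, refer to the Command Reference.)
[RouterA-nqa-admin-icmp] start now
[RouterA-nqa-admin-icmp] quit
[RouterA] display nqa results test-instance admin icmp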
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin icmp
test-type icmp
destination-address ipv4 10.1.1.2
#
return
Networking Requirements
As shown in Figure 7-4, Router B functions as a DHCP server. It is required to perform an NQA
DHCP test to obtain the time taken by the DHCP server to allocate an IP address to the client.
Figure 7-4 Router A (NQA agent, interface GE1/0/0, 10.1.1.1/24) is directly connected to
Router B (DHCP server, interface GE1/0/0, 10.1.1.2/24).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client.
2. Create and perform the DHCP test on Router A to check whether Router A can set up a
connection with Router B and obtain an IP address from Router B.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the DHCP server
l Source interface
l Timeout period
Procedure
Step 1 Configure the IP address. (The detailed procedure is not mentioned here.)
Step 2 Enable the NQA client and create an NQA DHCP test.
<RouterA> system-view
[RouterA] nqa test-instance admin dhcp
[RouterA-nqa-admin-dhcp] test-type dhcp
[RouterA-nqa-admin-dhcp] source-interface gigabitethernet 1/0/0
[RouterA-nqa-admin-dhcp] timeout 20
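Step 3 Start the test instance. (The start now form of the start command is used here; for other forms, refer to the Command Reference.)
[RouterA-nqa-admin-dhcp] start now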
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/0
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin dhcp
test-type dhcp
timeout 20
source-interface GigabitEthernet1/0/0
#
return
Networking Requirements
As shown in Figure 7-5, Router B functions as an FTP server.
A user named user1 intends to log in to the FTP server by entering the password hello to
download the file named test.txt.
Figure 7-5 Router A (FTP client, interface POS1/0/0, 10.1.1.1/24) is directly connected to
Router B (FTP server, interface POS1/0/0, 10.1.1.2/24).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client.
2. Create and perform an FTP download test on Router A to check whether Router A can set
up a connection with the FTP server and to obtain the time taken by Router A to download
the file from the FTP server.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the FTP server
l Source IP address for the test
l FTP user name and password
l Operation file of the FTP test
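Once the test reports the download time, throughput follows directly from the file size. A minimal Python sketch (illustrative; the byte count and elapsed time are assumed values):

```python
# Sketch: derive download throughput from the measured FTP transfer time.
def ftp_throughput_kbps(file_bytes, elapsed_s):
    """Throughput in KB/s for a transfer of file_bytes completed in elapsed_s."""
    return file_bytes / 1024 / elapsed_s

# Hypothetical figures: a 10 KB file fetched in 0.5 s.
rate = ftp_throughput_kbps(10 * 1024, 0.5)   # -> 20.0 KB/s
```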
Procedure
Step 1 Configure IP addresses of Router A and Router B. (The detailed procedure is not mentioned
here.)
Step 2 Configure Router B as the FTP server.
<RouterB> system-view
[RouterB] ftp-server enable
[RouterB] aaa
[RouterB-aaa] local-user user1 password cipher hello
[RouterB-aaa] local-user user1 service-type ftp
[RouterB-aaa] local-user user1 ftp-directory flash:/
[RouterB-aaa] quit
Average RTT:656
Lost packet ratio:0 %
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin ftp
test-type ftp
destination-address ipv4 10.1.1.2
source-address ipv4 10.1.1.1
ftp-operation get
ftp-filename test.txt
ftp-username user1
ftp-password %$%$gw1.QU~4M1I@ESF>b/VP,@7.%$%$
#
return
Networking Requirements
As shown in Figure 7-6, it is required to test the speed of uploading a file from Router A to an
FTP server.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client as well as an FTP client. Create and perform an FTP
test on Router A to check whether Router A can set up a connection with the FTP server
and to obtain the time taken by Router A to upload a file to the FTP server.
2. A user named user1 logs in to the FTP server by entering the password hello to upload a
10 KB file.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the FTP server
l Source IP address for the test
l FTP user name and password
l Size of the uploaded file
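In an upload test the client fabricates the payload itself, sized by ftp-filesize, rather than reading an existing file. A Python sketch of that idea (illustrative only; the device's actual payload contents are not documented here):

```python
# Sketch: fabricate an upload payload of the requested size in KB,
# analogous to the ftp-filesize parameter of the NQA FTP put test.
def make_test_payload(size_kb):
    return b"\x00" * (size_kb * 1024)   # filler bytes; real content is unspecified

payload = make_test_payload(10)          # 10 KB, as configured in this example
```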
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure
is not mentioned here.)
Step 2 Configure Router C as the FTP server.
<RouterC> system-view
[RouterC] ftp-server enable
[RouterC] aaa
[RouterC-aaa] local-user user1 password cipher hello
[RouterC-aaa] local-user user1 service-type ftp
[RouterC-aaa] local-user user1 ftp-directory flash:
[RouterC-aaa] quit
Step 3 Create an NQA FTP test on Router A and generate a 10 KB file for uploading.
<RouterA> system-view
[RouterA] nqa test-instance admin ftp
[RouterA-nqa-admin-ftp] test-type ftp
[RouterA-nqa-admin-ftp] destination-address ipv4 10.2.1.2
[RouterA-nqa-admin-ftp] source-address ipv4 10.1.1.1
[RouterA-nqa-admin-ftp] ftp-operation put
[RouterA-nqa-admin-ftp] ftp-username user1
[RouterA-nqa-admin-ftp] ftp-password hello
[RouterA-nqa-admin-ftp] ftp-filesize 10
# On Router C, you can see that a file named nqa-ftp-test.txt has been added. (Only part of the
command output is displayed.)
<RouterC> dir
Directory of flash:/
0 -rw- 331 Jul 06 2007 18:34:34 private-data.txt
1 -rw- 1024000 Jul 06 2007 18:37:06 nqa-ftp-test.txt
2540 KB total (1536 KB free)
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin ftp
test-type ftp
destination-address ipv4 10.2.1.2
source-address ipv4 10.1.1.1
ftp-operation put
ftp-filesize 10
ftp-username user1
ftp-password %$%$gw1.QU~4M1I@ESF>b/VP,@7.%$%$
#
ip route-static 10.2.1.0 255.255.255.0 10.1.1.2
#
return
#
aaa
local-user user1 password cipher 3MQ*TZ,O3KCQ=^Q`MAF4<1!!
local-user user1 service-type ftp
local-user user1 ftp-directory flash:
#
ip route-static 10.1.1.0 255.255.255.0 10.2.1.1
#
return
Networking Requirements
As shown in Figure 7-7, Router A is connected to the HTTP server through a WAN.
[Figure 7-7: Router A (POS1/0/0, 10.1.1.1/24) connected across an IP network to the HTTP server (10.2.1.1/24)]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client.
2. Create and perform an HTTP test on Router A to check whether Router A can set up a
connection with the HTTP server and to obtain the file transfer time between Router A and the
HTTP server.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the HTTP server
l HTTP operation type
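Conceptually, the HTTP test measures the elapsed time of a GET operation. The following self-contained Python sketch (illustrative, not device code) times a GET against a throwaway local server standing in for the real HTTP server:

```python
# Sketch: time an HTTP GET, as the NQA HTTP test does for its http-url.
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Throwaway local server on an OS-chosen port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.monotonic()
body = urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_port).read()
elapsed = time.monotonic() - start   # the quantity the HTTP test reports
server.shutdown()
```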
Procedure
Step 1 Configure the IP address. (The detailed procedure is not mentioned here.)
Step 2 Enable the NQA client and create an NQA HTTP test.
<RouterA> system-view
----End
Configuration Files
The configuration file of Router A is as follows:
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin http
test-type http
destination-address ipv4 10.2.1.1
http-operation get
http-url www.huawei.com
#
return
Networking Requirements
As shown in Figure 7-8, Router A functions as a DNS client that accesses the host 10.2.1.1/24
by using the domain name server.com.
[Figure 7-8: Router A (POS1/0/0, 10.1.1.1/24) connected across an IP network to the DNS server (10.3.1.1/24)]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client.
2. Create and perform a DNS test on Router A to check whether Router A can set up a
connection with the DNS server and to obtain the speed of responding to an address resolution
request.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the DNS server
l Name of the host to be accessed
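What the DNS test measures is the time from sending the resolution request for the host name to receiving the answer. A Python sketch of that idea (illustrative; the dict resolver and the server.com-to-10.2.1.1 mapping stand in for the real DNS server at 10.3.1.1):

```python
# Sketch: time a name-resolution request, as the NQA DNS test does.
import time

ZONE = {"server.com": "10.2.1.1"}   # assumed mapping from this example

def timed_resolve(name):
    """Return (address_or_None, elapsed_s) for one resolution request."""
    start = time.monotonic()
    addr = ZONE.get(name)            # a real test queries the dns-server
    return addr, time.monotonic() - start
```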
Procedure
Step 1 Configure reachable routes between Router A, the DNS server, and the host to be accessed. (The
detailed procedure is not mentioned here.)
Step 2 Create an NQA DNS test.
<RouterA> system-view
[RouterA] dns resolve
[RouterA] dns server 10.3.1.1
[RouterA] nqa test-instance admin dns
[RouterA-nqa-admin-dns] test-type dns
[RouterA-nqa-admin-dns] dns-server ipv4 10.3.1.1
[RouterA-nqa-admin-dns] destination-address url server.com
----End
Configuration Files
The configuration file of Router A is as follows:
#
sysname RouterA
#
dns resolve
dns server 10.3.1.1
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin dns
test-type dns
destination-address url server.com
dns-server ipv4 10.3.1.1
#
ip route-static 10.3.1.0 255.255.255.0 10.1.1.2
ip route-static 10.2.1.0 255.255.255.0 10.1.1.2
#
return
Networking Requirements
As shown in Figure 7-9, perform the Traceroute test on Router A to trace the IP address of
POS 1/0/0 on Router C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client.
2. Create and perform a Traceroute test on Router A to trace the path to Router C.
Data Preparation
To complete the Traceroute test, you need to configure the destination IP address to be tested.
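A Traceroute test discovers the path hop by hop: probes are sent with increasing TTL values, each router that expires a probe reports itself, and the loop stops when the destination answers. A simplified Python sketch of that loop (illustrative; the hop addresses are assumptions based on this example's topology):

```python
# Sketch: TTL-driven hop discovery, as performed by an NQA Traceroute test.
def trace(path, destination):
    """Simulate probing with TTL = 1, 2, ... along a known hop list."""
    hops = []
    for ttl in range(1, len(path) + 1):
        hop = path[ttl - 1]   # router at which a TTL-ttl probe expires
        hops.append(hop)
        if hop == destination:
            break             # destination reached: stop probing
    return hops

# Hypothetical two-hop path ending at the configured destination.
reported = trace(["10.1.1.2", "10.2.1.2"], "10.2.1.2")
```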
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure
is not mentioned here.)
Step 2 Create an NQA Traceroute test on Router A and configure the destination IP address to be tested
to 10.2.1.2.
<RouterA> system-view
[RouterA] nqa test-instance admin trace
[RouterA-nqa-admin-trace] test-type trace
[RouterA-nqa-admin-trace] destination-address ipv4 10.2.1.2
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin trace
test-type trace
destination-address ipv4 10.2.1.2
#
ip route-static 10.2.1.0 255.255.255.0 10.1.1.2
#
return
Networking Requirements
As shown in Figure 7-10, Router C functions as an SNMP agent. It is required to perform an
NQA SNMP Query test to obtain the time from when Router A sends an SNMP query packet to
when it receives a response packet.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client.
2. Create and perform an SNMP Query test on Router A.
3. Enable SNMP agent on Router C.
Data Preparation
To complete the configuration, you need to configure the IP address of the SNMP agent.
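The SNMP Query test times a GET for a management variable against the agent enabled on Router C. The Python sketch below (illustrative only; the OID, its value, and the dict standing in for the agent's MIB are assumptions) shows the request/response timing pattern:

```python
# Sketch: time an SNMP GET against an agent; a dict stands in for the MIB.
import time

MIB = {"1.3.6.1.2.1.1.3.0": 123456}   # sysUpTime.0 with an assumed value

def timed_snmp_get(oid):
    """Return (value_or_None, elapsed_s) for one query."""
    start = time.monotonic()
    value = MIB.get(oid)               # a real test sends the query to the agent
    return value, time.monotonic() - start
```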
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure
is not mentioned here.)
Step 2 Enable SNMP agent on Router C.
<RouterC> system-view
[RouterC] snmp-agent
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin snmp
test-type snmp
Networking Requirements
As shown in Figure 7-11, it is required to perform an NQA TCP Private test to obtain the time
taken by Router A to set up a TCP connection with Router C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client and Router C as the NQA server.
2. Configure the TCP port number monitored by the NQA server and create a TCP test on the
NQA client.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the NQA server
l TCP port number monitored by the server
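The quantity this test reports is the time needed to complete the TCP three-way handshake with the monitored port (9000 in this example). The following self-contained Python sketch (illustrative, not device code) measures the same thing against a local listener standing in for Router C:

```python
# Sketch: time TCP connection setup, as the NQA TCP test does.
import socket
import time

# A local listener stands in for the NQA server's monitored TCP port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: OS-chosen, like 9000 on the router
listener.listen(1)

start = time.monotonic()
client = socket.create_connection(listener.getsockname())  # three-way handshake
setup_time = time.monotonic() - start

client.close()
listener.close()
```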
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure
is not mentioned here.)
Step 2 Configure Router C as the NQA server.
# Configure the IP address and port number monitored by the NQA server.
<RouterC> system-view
[RouterC] nqa-server tcpconnect 10.2.1.2 9000
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin tcp
test-type tcp
destination-address ipv4 10.2.1.2
destination-port 9000
#
ip route-static 10.2.1.0 255.255.255.0 10.1.1.2
#
return
Networking Requirements
As shown in Figure 7-12, it is required to perform an NQA UDP Public test to obtain the RTT
of a UDP packet transmitted between Router A and Router C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Router A functions as an NQA client; Router C functions as an NQA server.
2. Configure the port number monitored by the NQA server and create an NQA UDP test on
the NQA client.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the NQA server
l UDP port number monitored by the server
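The UDP test's RTT is the time from sending a datagram to receiving its echo from the server configured with nqa-server udpecho. A self-contained Python sketch (illustrative; two local sockets stand in for the NQA client and server):

```python
# Sketch: measure the round-trip time of one echoed UDP datagram.
import socket
import time

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

start = time.monotonic()
client.sendto(b"probe", server.getsockname())
data, peer = server.recvfrom(64)   # the echo server reads the datagram...
server.sendto(data, peer)          # ...and returns the payload unchanged
echo, _ = client.recvfrom(64)
rtt = time.monotonic() - start     # round-trip time of one probe

client.close()
server.close()
```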
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure
is not mentioned here.)
Step 2 Configure Router C as the NQA server.
# Configure the IP address and UDP port number monitored by the NQA server.
<RouterC> system-view
[RouterC] nqa-server udpecho 10.2.1.2 6000
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin udp
test-type udp
destination-address ipv4 10.2.1.2
destination-port 6000
#
ip route-static 10.2.1.0 255.255.255.0 10.1.1.2
#
return
Networking Requirements
As shown in Figure 7-13, it is required to perform an NQA Jitter test to obtain the jitter time of
the packet transmitted from Router A to Router C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client, with Router C being its server.
2. Configure the monitoring service types and the port number to be monitored on the NQA
server.
3. Create a Jitter test on the NQA client.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the NQA server
l UDP port number monitored by the server
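Jitter is derived from the variation between the delays of consecutive probes. One common formulation, assumed here for illustration and not necessarily the device's exact algorithm, is the mean absolute difference between adjacent delay samples:

```python
# Sketch: compute jitter as the mean absolute difference between
# consecutive probe delays (illustrative definition).
def jitter_ms(delays_ms):
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical per-probe delays in milliseconds.
j = jitter_ms([10, 12, 14])   # -> 2.0
```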
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure
is not mentioned here.)
Step 2 Configure Router C as the NQA server.
# Configure the IP address and UDP port number monitored by the NQA server.
<RouterC> system-view
[RouterC] nqa-server udpecho 10.2.1.2 9000
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin jitter
test-type jitter
destination-address ipv4 10.2.1.2
destination-port 9000
#
ip route-static 10.2.1.0 255.255.255.0 10.1.1.2
#
return
#
return
Networking Requirements
As shown in Figure 7-14, the NQA jitter function is used to test the jitter time of transmitting
packets from Router A to Router C. The accuracy of the test can be improved by enabling the
LPU to send packets.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client and Router C as the NQA server.
2. Configure the type of the service to be monitored and number of the monitoring port on
the NQA server.
3. Configure a Jitter NQA test instance on the NQA client.
4. Enable the LPU to send packets on the NQA client.
Data Preparation
To complete the configuration, you need the following data:
l Host address on the server
l Number of the port used for monitoring UDP services on the server
Procedure
Step 1 Configure reachable routes among Router A, Router B, and Router C.
The configuration details are not mentioned here.
Step 2 Configure an NQA server for Router C.
# Configure the IP address and number of the port used for monitoring UDP services on the
NQA server.
<RouterC> system-view
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
7.47.13 Example for Configuring the LSP Ping Test for the LDP
Tunnel
This part provides examples for configuring an LSP ping test to check the operating status of
the LSP.
Networking Requirements
As shown in Figure 7-15,
l Run OSPF on Router A, Router B and Router C, enabling the three Routers to advertise
host routes of loopback interfaces to each other.
l Enable MPLS and MPLS LDP on Router A, Router B, and Router C.
l Enable MPLS and MPLS LDP on the POS interfaces connecting Router A, Router B, and
Router C to trigger the setup of an LDP tunnel.
It is required to perform an NQA LSP Ping test to check the connectivity of the LSP between
Router A and Router C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client.
2. Configure Router C as an NQA server.
3. Create an LSP Ping test on Router A.
Data Preparation
To complete the configuration, you need to configure the IP address and mask of the NQA server.
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure
is not mentioned here.)
Step 2 Configure Router A.
# Enable the NQA client and create an LSP Ping test for the LDP tunnel.
<RouterA> system-view
[RouterA] nqa test-instance admin lspping
[RouterA-nqa-admin-lspping] test-type lspping
[RouterA-nqa-admin-lspping] lsp-type ipv4
[RouterA-nqa-admin-lspping] destination-address ipv4 3.3.3.9 lsp-masklen 32
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
mpls lsr-id 1.1.1.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
nqa test-instance admin lspping
test-type lspping
destination-address ipv4 3.3.3.9 lsp-masklen 32
#
return
7.47.14 Example for Configuring the LSP Jitter Test for the LDP
Tunnel
This part provides examples for configuring an LSP jitter test to measure jitter in the LSP during
the packet transmission.
Networking Requirements
As shown in Figure 7-13,
l Run OSPF on Router A, Router B, and Router C, and enable the three Routers to advertise
host routes of loopback interfaces to each other.
l Enable MPLS and MPLS LDP on Router A, Router B, and Router C.
l Enable MPLS and MPLS LDP on the POS interfaces connecting Router A, Router B, and
Router C to trigger the setup of an LDP tunnel.
It is required to perform an NQA LSP Jitter test to check the connectivity of the LSP between
Router A and Router C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client.
2. Configure Router C as an NQA server.
3. Create an LSP Jitter test on Router A.
Data Preparation
To complete the configuration, you need to configure the IP address and mask of the NQA server.
Procedure
Step 1 Configure routes between Router A, Router B, and Router C. (The detailed procedure is not
mentioned here.)
Step 2 Configure LDP on Router A, Router B, and Router C. (The detailed procedure is not mentioned
here.)
For the configuration of LDP, refer to the HUAWEI NetEngine80E/40E Router Configuration
Guide - MPLS.
Step 3 Configure Router A as the NQA client.
# Enable the NQA client and create an LSP Jitter test for the LDP tunnel.
<RouterA> system-view
[RouterA] nqa test-instance admin lspjitter
[RouterA-nqa-admin-lspjitter] test-type lspjitter
[RouterA-nqa-admin-lspjitter] lsp-type ipv4
[RouterA-nqa-admin-lspjitter] destination-address ipv4 3.3.3.9 lsp-masklen 32 lsp-loopback 127.0.0.1
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
mpls lsr-id 1.1.1.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
nqa test-instance admin lspjitter
test-type lspjitter
destination-address ipv4 3.3.3.9 lsp-masklen 32 lsp-loopback 127.0.0.1
#
return
7.47.15 Example for Configuring the LSP Jitter Test for the MPLS
TE Tunnel
This part provides examples for configuring an LSP jitter test to measure jitter in the TE LSP
during the packet transmission.
Networking Requirements
As shown in Figure 7-13,
l Run OSPF on Router A, Router B, and Router C, and enable the three Routers to advertise
host routes of loopback interfaces to each other.
l Enable MPLS, MPLS TE, and MPLS RSVP-TE on Router A, Router B, and Router C.
l Enable MPLS, MPLS TE, and MPLS RSVP-TE on the POS interfaces connecting Router
A, Router B, and Router C to set up a TE tunnel from Router A to Router C.
It is required to perform an NQA LSP Jitter test to check the connectivity of the TE tunnel from
Router A to Router C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client.
2. Set up a TE tunnel from Router A to Router C.
3. Create an LSP Jitter test for the TE tunnel on Router A.
Data Preparation
To complete the configuration, you need the number of the MPLS TE tunnel interface.
Procedure
Step 1 Configure routes between Router A, Router B, and Router C. (The detailed procedure is not
mentioned here.)
Step 2 Configure MPLS RSVP-TE on Router A, Router B, and Router C. (The detailed procedure is
not mentioned here.)
For the configuration of MPLS RSVP-TE, refer to the HUAWEI NetEngine80E/40E Router
Configuration Guide - MPLS.
Step 3 Set up a TE tunnel from Router A to Router C. (The detailed procedure is not mentioned here.)
Step 4 Create an NQA test instance on Router A.
# Enable the NQA client and create an LSP Jitter test for the TE tunnel.
<RouterA> system-view
[RouterA] nqa test-instance admin lspjitter
[RouterA-nqa-admin-lspjitter] test-type lspjitter
[RouterA-nqa-admin-lspjitter] lsp-type te
[RouterA-nqa-admin-lspjitter] lsp-tetunnel tunnel 1/0/0
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface Pos1/0/0
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 5000
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1/0/0
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te bandwidth ct0 3000
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
mpls-te enable
#
nqa test-instance admin lspjitter
test-type lspjitter
lsp-type te
lsp-tetunnel Tunnel1/0/0
#
return
#
sysname RouterB
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Pos1/0/0
link-protocol ppp
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 5000
mpls rsvp-te
#
interface Pos2/0/0
link-protocol ppp
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 5000
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
mpls-te enable
#
return
Networking Requirements
As shown in Figure 7-16,
Router A serves as the NQA client to test the jitter of the network between Router A and Router
B.
[Figure 7-16: Router A (GE1/0/0, 10.1.1.1/24) directly connected to Router B (GE1/0/0, 10.1.1.2/24)]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client and create an ICMP jitter test instance on Router
A.
2. Configure Router B as the NQA server.
Data Preparation
To complete the configuration, you need the following data:
l IP address of Router B
Procedure
Step 1 Configure a reachable route between Router A and Router B.
The configuration details are not mentioned here.
Step 2 Configure an NQA test instance for Router A.
# Enable the NQA client and configure the ICMP jitter test instance.
<RouterA> system-view
[RouterA] nqa test-instance admin icmpjitter
[RouterA-nqa-admin-icmpjitter] test-type icmpjitter
[RouterA-nqa-admin-icmpjitter] destination-address ipv4 10.1.1.2
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa test-instance admin icmpjitter
test-type icmpjitter
destination-address ipv4 10.1.1.2
#
return
#
sysname RouterB
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
return
Networking Requirements
As shown in Figure 7-17,
Router A serves as the NQA client to test the jitter of the network between Router A and Router
B. The accuracy of the test can be improved by enabling the LPU to send packets.
Figure 7-17 Networking diagram of an ICMP jitter test based on the mechanism in which the
LPU sends packets
[Router A (GE1/0/0, 10.1.1.1/24) directly connected to Router B (GE1/0/0, 10.1.1.2/24)]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client and create an ICMP jitter test instance on Router
A.
2. Configure Router B as the NQA server.
3. Enable the LPU to send packets on Router A.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of Router A and Router B
Procedure
Step 1 Configure a reachable route between Router A and Router B.
The configuration details are not mentioned here.
Step 2 Configure Router B as the ICMP server.
# Assign an IP address to the ICMP server.
<RouterB> system-view
[RouterB] nqa-server icmp-server 10.1.1.2
----End
Configuration Files
l Configuration file of Router B
#
sysname RouterB
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
nqa-server icmp-server 10.1.1.2
#
return
Networking Requirements
As shown in Figure 7-18,
Router A serves as the NQA client to test the jitter of the network between Router A and Router
C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client and create a path jitter test instance on Router A.
2. Configure Router C as the NQA server.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of Router A, Router B, and Router C
Procedure
Step 1 Configure a reachable route between Router A and Router C.
The configuration details are not mentioned here.
Step 2 Configure an NQA test instance for Router A.
# Enable the NQA client and configure the path jitter test instance.
<RouterA> system-view
[RouterA] nqa test-instance admin pathjitter
[RouterA-nqa-admin-pathjitter] test-type pathjitter
[RouterA-nqa-admin-pathjitter] destination-address ipv4 11.1.1.2
[RouterA-nqa-admin-pathjitter] icmp-jitter-mode icmp-echo
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
ospf 2
area 0.0.0.0
network 10.1.1.0 0.0.0.255
#
nqa test-instance admin pathjitter
test-type pathjitter
destination-address ipv4 11.1.1.2
icmp-jitter-mode icmp-echo
#
return
#
sysname RouterB
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.1.1 255.255.255.0
#
ospf 11
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 11.1.1.0 0.0.0.255
#
return
#
sysname RouterC
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.1.2 255.255.255.0
#
ospf 1
area 0.0.0.0
network 11.1.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 7-19,
Router A serves as the NQA client to test the MTU of the path between Router A and Router
C.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client and create a path MTU test instance on Router A.
2. Configure Router C as the NQA server.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of Router A, Router B, and Router C
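A path MTU test probes with don't-fragment packets of varying sizes and keeps the largest size that traverses the path. The Python sketch below (illustrative; the 1500-byte bottleneck and the size bounds are assumptions) finds that size by binary search against a simulated path:

```python
# Sketch: binary-search the largest packet size that fits through the path,
# as a path MTU test does with DF-set probes.
def find_path_mtu(fits, lo=68, hi=9000):
    """fits(size) -> True if a size-byte packet traverses the path."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits(mid):
            lo = mid        # mid fits: the path MTU is at least mid
        else:
            hi = mid - 1    # mid was dropped: the path MTU is smaller
    return lo

# Simulated path with an assumed 1500-byte bottleneck link.
bottleneck = 1500
mtu = find_path_mtu(lambda size: size <= bottleneck)
```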
Procedure
Step 1 Configure a reachable route between Router A and Router C.
The configuration details are not mentioned here.
Step 2 Configure an NQA test instance for Router A.
# Enable the NQA client and configure the path MTU test instance.
<RouterA> system-view
[RouterA] nqa test-instance admin pathmtu
[RouterA-nqa-admin-pathmtu] test-type pathmtu
[RouterA-nqa-admin-pathmtu] destination-address ipv4 11.1.1.2
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
ospf 2
area 0.0.0.0
network 10.1.1.0 0.0.0.255
#
nqa test-instance admin pathmtu
test-type pathmtu
destination-address ipv4 11.1.1.2
#
return
#
sysname RouterB
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.1.1 255.255.255.0
#
ospf 11
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 11.1.1.0 0.0.0.255
#
return
#
sysname RouterC
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.1.2 255.255.255.0
#
ospf 1
area 0.0.0.0
network 11.1.1.0 0.0.0.255
#
return
7.47.20 Example for Configuring the LSP Trace Test for the MPLS
TE Tunnel
This part provides examples for configuring an LSP Trace test to check the connectivity between
LSRs along the TE LSP.
Networking Requirements
As shown in Figure 7-20,
l Run OSPF on Router A, Router B and Router C, enabling them to advertise host routes of
loopback interfaces.
l Enable MPLS and MPLS RSVP-TE on Router A, Router B and Router C.
l Enable MPLS, MPLS TE and MPLS RSVP-TE on the POS interfaces connecting Router
A, Router B and Router C. A TE tunnel then is set up between Router A and Router C.
Use the NQA LSP Trace function to test the TE tunnel.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client. Create an LSP Trace on Router A.
2. Configure Router C as the NQA server.
Data Preparation
To complete the configuration, you need the TE tunnel ID.
Procedure
Step 1 Configure reachable routes between Router A, Router B, and Router C. (The detailed procedure is not
mentioned here.)
Step 2 Configure MPLS RSVP-TE on Router A, Router B and Router C. (The detailed procedure is
not mentioned here.)
For the configuration of MPLS RSVP-TE, refer to the HUAWEI NetEngine80E/40E Router
Configuration Guide - MPLS.
Step 3 Set up a TE tunnel between Router A and Router C. (The detailed procedure is not mentioned
here.)
Step 4 Create an NQA test on Router A.
# Enable the NQA client and create an LSP Trace test for the TE tunnel.
<RouterA> system-view
[RouterA] nqa test-instance admin lsptracert
[RouterA-nqa-admin-lsptracert] test-type lsptrace
[RouterA-nqa-admin-lsptracert] lsp-type te
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface Pos1/0/0
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 10000
mpls te bandwidth bc0 5000
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1/0/0
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te bandwidth ct0 3000
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
Networking Requirements
As shown in Figure 7-21, Router A, Router B, and Router F are connected to the same network
segment. Open Shortest Path First (OSPF) runs on the interface that connects Router A to the
network segment. You are required to check which other OSPF routers exist on this network
segment.
In this scenario, the reserved multicast group 224.0.0.5 represents all OSPF IGP Routers on the
network segment. You can perform an MPing test to check whether some members of the
reserved multicast group reside on the specified network segment.
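The idea can be sketched in Python: a probe to the reserved group address gathers one response per member on the segment. The membership table below is an assumption mirroring this example's expected result (Router B at 11.1.6.2 and Router F at 11.1.6.1):

```python
# Sketch: an MPing probe to a group address collects the responding members.
GROUP_MEMBERS = {"224.0.0.5": ["11.1.6.2", "11.1.6.1"]}   # assumed membership

def mping(group):
    """Return the sorted addresses that answered the group probe."""
    return sorted(GROUP_MEMBERS.get(group, []))
```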
Figure 7-21 Networking diagram of configuring an NQA MPing test to check the members of
a reserved group address
[Router F: GE3/0/0, 11.1.6.1/24, on the shared network segment]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as an NQA client and create an NQA MPing test on it. Set the
destination group address to 224.0.0.5 and the outbound interface to GE 3/0/0.
2. Check the configuration.
3. Start the NQA MPing test and view the test results. If Router A receives responses from
Router B and Router F, OSPF also runs on the corresponding interfaces of Router B and
Router F.
Data Preparation
To complete the configuration, you need the following data:
l Destination group address, 224.0.0.5
l Outbound interface, GE 3/0/0
Procedure
Step 1 Create an NQA MPing test on Router A.
<HUAWEI> system-view
[HUAWEI] sysname RouterA
[RouterA] nqa test-instance admin mping
[RouterA-nqa-admin-mping] test-type mping
[RouterA-nqa-admin-mping] destination-address ipv4 224.0.0.5
[RouterA-nqa-admin-mping] source-interface gigabitethernet 3/0/0
From the following display, you can view that the multicast group 224.0.0.5 has two members
on the network segment: Router B (11.1.6.2) and Router F (11.1.6.1). This indicates that the
corresponding interfaces of Router B and Router F both run OSPF.
[RouterA-nqa-admin-mping] display nqa results test-instance admin mping
NQA entry(admin, mping) :testflag is inactive ,testtype is mping
1 . Test 1 result The test is finished
Completion:success Timeouts number: 0
Drops number: 0 TargetAddress: 224.0.0.5
ProbeResponses number: 3 SentProbes number: 3
Busies: 0
1 . Receiver 1
CompletionTime Min/Max/Sum: 10/80/100
Sum2CompletionTime: 6600
LastGoodProbe time: 2007-1-20 11:6:40.2
RecevierAddress: 11.1.6.2
2 . Receiver 2
CompletionTime Min/Max/Sum: 10/80/130
Sum2CompletionTime: 8100
LastGoodProbe time: 2007-1-20 11:6:40.2
RecevierAddress: 11.1.6.1
----End
Configuration Files
The configuration file on Router A is as follows:
#
sysname RouterA
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.3 255.255.255.0
#
ospf 1
area 0.0.0.0
network 11.1.6.0 0.0.0.255
#
nqa test-instance admin mping
test-type mping
destination-address ipv4 224.0.0.5
source-interface GigabitEthernet3/0/0
#
return
Networking Requirements
As shown in Figure 7-22, routers all run OSPF and unicast routes are correctly configured for
normal communication between them. Protocol Independent Multicast-Dense Mode (PIM-DM)
is deployed on the network. You are required to check whether each router is capable of correctly
processing multicast data.
In such a scenario, perform an MPing operation for a common group address to generate the
multicast traffic and hence trigger the setup of the multicast distribution tree. Through MPing,
you can also check whether each router contains correct multicast routing entries. To implement
MPing, you can create an NQA MPing test.
[Figure 7-22: Router E (Loopback0, 2.2.2.2/32), Router D, and Router F interconnected over the 11.1.2.0/24, 11.1.4.0/24, 11.1.5.0/24, 11.1.6.0/24, and 11.1.7.0/24 segments]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure GE 1/0/0 on Router F to statically join multicast group 225.0.0.1.
2. Configure Router E as an NQA client and create an MPing test on Router E, with the group
address being 225.0.0.1.
3. Verify the configuration.
4. Start the NQA MPing test and view test results.
5. Check whether the multicast routing entries are correctly generated on each router and
whether the multicast distribution tree is successfully set up.
Data Preparation
To complete the configuration, you need the following data:
l Test instance name, admin mping
l Common group address, 225.0.0.1
Procedure
Step 1 Configure GE 1/0/0 on Router F to statically join the multicast group 225.0.0.1.
<RouterF> system-view
[RouterF] interface gigabitethernet 1/0/0
[RouterF-GigabitEthernet1/0/0] igmp enable
[RouterF-GigabitEthernet1/0/0] igmp static-group 225.0.0.1
# Check the multicast forwarding entries on Router D. You can find that Router D has generated
the entry (11.1.0.2, 225.0.0.1) after receiving the multicast packets from the upstream interface
GE 3/0/0.
<RouterD> display pim routing-table
Vpn-instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(11.1.0.2, 225.0.0.1)
Protocol: pim-dm, Flag:
UpTime: 00:06:23
Upstream interface: GigabitEthernet3/0/0
Upstream neighbor: 11.1.2.2
RPF prime neighbor: 11.1.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-dm, UpTime: 00:05:09, Expires: never
# Check the multicast forwarding entries on Router A. You can find that Router A has generated
the entry (11.1.0.2, 225.0.0.1) after receiving the multicast packet from the upstream interface
GE 3/0/0.
<RouterA> display pim routing-table
Vpn-instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(11.1.0.2, 225.0.0.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:01:23
Upstream interface: GigabitEthernet3/0/0
Upstream neighbor: 11.1.6.2
RPF prime neighbor: 11.1.6.2
Downstream interface(s) information: None
# Check the multicast forwarding entries on Router B. You can find that Router B has generated
the entry (11.1.0.2, 225.0.0.1) after receiving the multicast packets from the upstream interface
GE 2/0/0.
<RouterB> display pim routing-table
Vpn-instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(11.1.0.2, 225.0.0.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:02:44
Upstream interface: GigabitEthernet2/0/0
Upstream neighbor: 11.1.4.2
RPF prime neighbor: 11.1.4.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet3/0/0
Protocol: pim-dm, UpTime: 00:01:30, Expires: never
# Check the multicast forwarding entries on Router F. You can find that Router F has generated
the entry (11.1.0.2, 225.0.0.1) after receiving the multicast packets from the upstream interface
GE 3/0/0.
<RouterF> display pim routing-table
Vpn-instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.0.0.1)
Protocol: pim-dm, Flag: WC
UpTime: 00:06:36
Upstream interface: NULL
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: igmp, UpTime: 00:06:36, Expires: never
(11.1.0.2, 225.0.0.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:03:40
Upstream interface: GigabitEthernet3/0/0
Upstream neighbor: 11.1.6.2
RPF prime neighbor: 11.1.6.2
Downstream interface(s) information:
----End
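Each Upstream interface shown above is the router's RPF interface: the outgoing interface of its best unicast route back toward the source. A schematic sketch of that reverse-path check under a hypothetical routing table (the routes and names below are illustrative, not taken from any router above):

```python
import ipaddress

# Hypothetical unicast routes: (prefix, outgoing interface)
ROUTES = [
    ("11.1.2.0/24", "GigabitEthernet3/0/0"),
    ("11.1.4.0/24", "GigabitEthernet2/0/0"),
    ("0.0.0.0/0",   "GigabitEthernet1/0/0"),
]

def rpf_interface(source):
    """RPF check: the upstream interface toward a multicast source is
    that of the longest-prefix unicast route matching the source."""
    src = ipaddress.IPv4Address(source)
    best = None
    for prefix, ifname in ROUTES:
        net = ipaddress.IPv4Network(prefix)
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, ifname)
    return best[1] if best else None

print(rpf_interface("11.1.2.7"))  # GigabitEthernet3/0/0
```

Packets for a source that arrive on any interface other than the RPF interface fail the check and are dropped.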
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.1 255.255.255.0
pim dm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.6.3 255.255.255.0
pim dm
#
pim
#
ospf 1
area 0.0.0.0
network 11.1.5.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
return
l Configuration file of Router D
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.4.2 255.255.255.0
pim dm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.2.1 255.255.255.0
pim dm
#
pim
#
ospf 1
area 0.0.0.0
network 11.1.2.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
return
l Configuration file of Router E
#
sysname RouterE
#
multicast routing-enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.2.2 255.255.255.0
pim dm
#
interface LoopBack0
undo shutdown
ip address 2.2.2.2 255.255.255.255
pim dm
#
ospf 1
area 0.0.0.0
network 11.1.2.0 0.0.0.255
network 2.2.2.2 0.0.0.0
#
nqa admin mping
test-type mping
destination-address ipv4 225.0.0.1
#
return
l Configuration file of Router F
#
sysname RouterF
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.7.2 255.255.255.0
pim dm
igmp static-group 225.0.0.1
igmp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.2 255.255.255.0
pim dm
#
pim
#
ospf 1
area 0.0.0.0
7.47.23 Example for Checking the RPF Path from the Multicast
Source to the Current Router Through the MTrace Test
This part provides examples for configuring an MTrace test to check the RPF path from the
multicast source to the querier.
Networking Requirements
As shown in Figure 7-23, in a PIM-SM network, the Receiver joins the multicast group 225.1.1.1
and can normally receive multicast packets from the Source. You are required to obtain the RPF
path along which the multicast packets are transmitted from the Source to Router B.
In such a scenario, you can perform MTrace on Router B to detect the RPF path from the multicast
source to the current router.
The MTrace function can be implemented by performing an NQA MTrace test.
[Figure 7-23 (networking diagram): the Source (11.1.0.2/24) attaches to Router E (GE1/0/0 11.1.0.1/24); Router E connects to Router C (Loopback0 1.1.1.1/32) over 11.1.1.0/24 and to Router D over 11.1.2.0/24; Router C connects to Router B over 11.1.3.0/24; Router D connects to Router B over 11.1.4.0/24 and to Router A over 11.1.5.0/24; Router A (GE3/0/0 11.1.6.3/24), Router B (GE3/0/0 11.1.6.2/24), and the Receiver (11.1.6.4/24) share the 11.1.6.0/24 segment.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router B as an NQA client and create an MTrace test on it.
2. Start the NQA MTrace test and view test results.
Data Preparation
To complete the configuration, you need the following data:
l Test instance name, admin mtrace
l IP address of the multicast source, 11.1.0.2/24
Procedure
Step 1 Create an NQA MTrace test on Router B.
<RouterB> system-view
[RouterB] nqa test-instance admin mtrace
[RouterB-nqa-admin-mtrace] test-type mtrace
[RouterB-nqa-admin-mtrace] mtrace-source-address ipv4 11.1.0.2
Step 2 View the multicast routes on Router C. You can find that Router C generates the entry (11.1.0.2,
225.1.1.1) after receiving the multicast packets. Its upstream and downstream interfaces are
respectively GE 2/0/0 and GE 1/0/0.
[RouterC] display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entries; 1 (S, G) entries
(*, 225.1.1.1)
RP: 1.1.1.1 (local)
Protocol: pim-sm, Flag: WC
UpTime: 00:28:52
Upstream interface: Register
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: pim-sm, UpTime: 00:28:52, Expires: 00:02:38
(11.1.0.2, 225.1.1.1)
RP: 1.1.1.1 (local)
Protocol: pim-sm, Flag: SPT 2MSDP ACT
UpTime: 00:12:33
Upstream interface: GigabitEthernet2/0/0
Upstream neighbor: 11.1.1.2
RPF prime neighbor: 11.1.1.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: pim-sm, UpTime: 00:12:33, Expires: -
Step 3 Start the NQA MTrace test.
[RouterB-nqa-admin-mtrace] start now
Step 4 View the test result. You can find that the RPF path from the multicast source to Router B is
Router E → Router D → Router B.
[RouterB-nqa-admin-mtrace] display nqa results test-instance admin mtrace
NQA entry(admin, mtrace) : testflag is inactive ,testtype is mtrace
1 . Test 4 result The test is finished
Completions: success Query Mode: max-hop
Current Hop:3 Current Probe:1
SendProbe:1 ResponseProb:1
Timeout Count:0 Busy Count:0
Drop Count:0 Max Path Ttl:4
Responser:11.1.2.2 Response Rtt: 57
mtrace start time: 2007-2-7 17:26:17.3
Last Good Probe Time: 2007-2-7 17:26:17.4
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.3 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
l Configuration file of Router B
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.3.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.4.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.2 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
nqa admin mtrace
test-type mtrace
mtrace-source-address ipv4 11.1.0.2
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.1.1 255.255.255.0
pim sm
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
igmp enable
pim sm
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 11.1.1.0 0.0.0.255
network 11.1.3.0 0.0.0.255
#
return
l Configuration file of Router D
#
sysname RouterD
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.4.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.2.1 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
7.47.24 Example for Checking the Multicast Path from the Multicast
Source to the Current Router Through the MTrace Test
This part provides examples for configuring an MTrace test to check the multicast path from the
multicast source to the querier.
Networking Requirements
As shown in Figure 7-24, in a PIM-SM network, the Receiver joins the multicast group 225.1.1.1
and can normally receive multicast packets from the Source. Router B can correctly receive
multicast packets. You are required to obtain the multicast path along which multicast packets
are sent from the Source to Router B.
In such a scenario, you can perform an NQA MTrace test on Router B to detect the multicast
path from the multicast source to the current router.
[Figure 7-24 (networking diagram): the Source (11.1.0.2/24) attaches to Router E (GE1/0/0 11.1.0.1/24); Router E connects to Router C (Loopback0 1.1.1.1/32) over 11.1.1.0/24 and to Router D over 11.1.2.0/24; Router C connects to Router B over 11.1.3.0/24; Router D connects to Router B over 11.1.4.0/24 and to Router A over 11.1.5.0/24; Router A (GE3/0/0 11.1.6.3/24), Router B (GE3/0/0 11.1.6.2/24), and the Receiver (11.1.6.4/24) share the 11.1.6.0/24 segment.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router B as an NQA client and create an MTrace test on Router B.
2. Start the NQA MTrace test and view test results.
Data Preparation
To complete the configuration, you need the following data:
l Test instance name, admin mtrace
l IP address of the multicast source, 11.1.0.2
l Multicast group address, 225.1.1.1
Procedure
Step 1 Create an NQA MTrace test on Router B.
<RouterB> system-view
[RouterB] nqa test-instance admin mtrace
[RouterB-nqa-admin-mtrace] test-type mtrace
[RouterB-nqa-admin-mtrace] mtrace-source-address ipv4 11.1.0.2
[RouterB-nqa-admin-mtrace] mtrace-group-address ipv4 225.1.1.1
Step 2 Start the NQA MTrace test.
[RouterB-nqa-admin-mtrace] start now
Step 3 View the test result. You can find that the multicast path from the multicast source to Router B
is Router E → Router D → Router B.
[RouterB-nqa-admin-mtrace] display nqa results test-instance admin mtrace
NQA entry(admin, mtrace) :testflag is inactive ,testtype is mtrace
1 . Test 1 result The test is finished
Completions: success Query Mode: max-hop
Current Hop:3 Current Probe:1
SendProbe:1 ResponseProb:1
Timeout Count:0 Busy Count:0
Drop Count:0 Max Path Ttl:4
Responser:11.1.2.2 Response Rtt: 64
mtrace start time: 2007-2-7 17:9:11.1
Last Good Probe Time: 2007-2-7 17:9:11.1
Last Good Path Time: 2007-2-7 17:9:11.1
1 . Hop 1
Outgoing Interface Address: 0.0.0.0
Incoming Interface Address: 11.1.4.1
Prehop Router Address: 11.1.4.2
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:4
SG Packet Count:62 Hop Time Delay(ms):1
Input Packet Count:8845 Output Packet Count: 0xffffffff
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
2 . Hop 2
Outgoing Interface Address: 11.1.4.2
Incoming Interface Address: 11.1.2.1
Prehop Router Address: 11.1.2.2
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:3
SG Packet Count:65 Hop Time Delay(ms): 0xffffffff
Input Packet Count:9264 Output Packet Count:8792
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
3 . Hop 3
Outgoing Interface Address: 11.1.2.2
Incoming Interface Address: 11.1.0.1
Prehop Router Address: 0.0.0.0
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:2
SG Packet Count:0 Hop Time Delay(ms): 0xffffffff
Input Packet Count:98002 Output Packet Count:15987
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
----End
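The hop records above can be read back into a path. Responders report 0xffffffff for any counter they cannot supply, and hops are listed from the querier back toward the source. A parsing sketch (the hop data is copied from the result above; the helper names are illustrative):

```python
UNAVAILABLE = 0xffffffff

def show(counter):
    """Counters a responder cannot supply come back as 0xffffffff;
    render them as '-' rather than as a huge number."""
    return "-" if counter == UNAVAILABLE else str(counter)

# (incoming interface, previous-hop router) per hop, copied from the
# result above; a prehop of 0.0.0.0 marks the first-hop router.
hops = [
    ("11.1.4.1", "11.1.4.2"),  # hop 1: Router B
    ("11.1.2.1", "11.1.2.2"),  # hop 2: Router D
    ("11.1.0.1", "0.0.0.0"),   # hop 3: Router E
]

def path_source_to_querier(hops):
    """Hops are reported from the querier back toward the source;
    reverse them to read the path in forwarding order."""
    return [incoming for incoming, _ in reversed(hops)]

print(path_source_to_querier(hops))  # ['11.1.0.1', '11.1.2.1', '11.1.4.1']
```

Reversing the hop list yields the Router E → Router D → Router B order stated in Step 3.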
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.3 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.3.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.4.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.2 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
nqa admin mtrace
test-type mtrace
mtrace-source-address ipv4 11.1.0.2
mtrace-group-address ipv4 225.1.1.1
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.1.1 255.255.255.0
pim sm
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
igmp enable
pim sm
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 11.1.1.0 0.0.0.255
network 11.1.3.0 0.0.0.255
#
return
l Configuration file of Router D
#
sysname RouterD
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.4.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.2.1 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
l Configuration file of Router E
#
sysname RouterE
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.0.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.1.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.2.2 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
7.47.25 Example for Checking the RPF Path from the Multicast
Source to the Destination Host Through the MTrace Test
This part provides examples for configuring an MTrace test to check the RPF path from the
multicast source to the destination host.
Networking Requirements
As shown in Figure 7-25, in a PIM-SM network, the Receiver joins the multicast group 225.1.1.1
and can normally receive multicast packets from the Source. Router B is the last hop. You are
required to obtain the RPF path along which the multicast packets are transmitted from the Source
to the Receiver.
In such a scenario, you can perform an NQA MTrace test on Router C to detect the RPF path
from the multicast source to the destination host.
[Figure 7-25 (networking diagram): the Source (11.1.0.2/24) attaches to Router E (GE1/0/0 11.1.0.1/24); Router E connects to Router C (Loopback0 1.1.1.1/32) over 11.1.1.0/24 and to Router D over 11.1.2.0/24; Router C connects to Router B over 11.1.3.0/24; Router D connects to Router B over 11.1.4.0/24 and to Router A over 11.1.5.0/24; Router A (GE3/0/0 11.1.6.3/24), Router B (GE3/0/0 11.1.6.2/24), and the Receiver (11.1.6.4/24) share the 11.1.6.0/24 segment.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router C as an NQA client and create an MTrace test on it.
2. Start the NQA MTrace test and view test results.
Data Preparation
To complete the configuration, you need the following data:
l Test instance name, admin mtrace
l IP address of the multicast source, 11.1.0.2
l IP address of the destination host, 11.1.6.4
l IP address of the last-hop router, 11.1.6.2
Procedure
Step 1 Create an NQA MTrace test on Router C.
<RouterC> system-view
[RouterC] nqa test-instance admin mtrace
[RouterC-nqa-admin-mtrace] test-type mtrace
[RouterC-nqa-admin-mtrace] mtrace-source-address ipv4 11.1.0.2
[RouterC-nqa-admin-mtrace] destination-address ipv4 11.1.6.4
[RouterC-nqa-admin-mtrace] mtrace-query-type last-hop
[RouterC-nqa-admin-mtrace] mtrace-last-hop-address ipv4 11.1.6.2
Step 2 Start the NQA MTrace test.
[RouterC-nqa-admin-mtrace] start now
Step 3 View the test result. You can find that the RPF path from the multicast source to Router C is
Router E → Router D → Router B.
[RouterC-nqa-admin-mtrace] display nqa results test-instance admin mtrace
NQA entry(admin, mtrace) :testflag is inactive ,testtype is mtrace
1 . Test 1 result The test is finished
Completions: success Query Mode: max-hop
Current Hop:3 Current Probe:1
SendProbe:1 ResponseProb:1
Timeout Count:0 Busy Count:0
Drop Count:0 Max Path Ttl:4
Responser:11.1.0.2 Response Rtt: 62
mtrace start time: 2007-2-7 17:36:18.1
Last Good Probe Time: 2007-2-7 17:36:18.1
Last Good Path Time: 2007-2-7 17:36:18.1
1 . Hop 1
Outgoing Interface Address: 11.1.6.2
Incoming Interface Address: 11.1.4.1
Prehop router Address: 11.1.4.2
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:4
SG Packet Count:0xffffffff Hop Time Delay(ms):65
Input Packet Count:9424 Output Packet Count:9815
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
2 . Hop 2
Outgoing Interface Address: 11.1.4.2
Incoming Interface Address: 11.1.2.1
Prehop router Address: 11.1.2.2
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:3
SG Packet Count:0xffffffff Hop Time Delay(ms):1
Input Packet Count:9849 Output Packet Count:9429
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
3 . Hop 3
Outgoing Interface Address: 11.1.2.2
Incoming Interface Address: 11.1.0.1
Prehop router Address: 0.0.0.0
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:2
SG Packet Count:0xffffffff Hop Time Delay(ms): 0xffffffff
Input Packet Count:109581 Output Packet Count:16529
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.3 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.3.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.4.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.2 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.1.1 255.255.255.0
pim sm
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
nqa admin mtrace
test-type mtrace
destination-address ipv4 11.1.6.4
mtrace-query-type last-hop
mtrace-last-hop-address ipv4 11.1.6.2
mtrace-source-address ipv4 11.1.0.2
#
return
#
l Configuration file of Router D
#
sysname RouterD
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.4.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.2.1 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
l Configuration file of Router E
#
sysname RouterE
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.0.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 11.1.1.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.2.2 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.4.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
7.47.26 Example for Checking the Multicast Path from the Multicast
Source to the Destination Host Through the MTrace Test
This part provides examples for configuring an MTrace test to check the multicast path from the
multicast source to the destination host.
Networking Requirements
As shown in Figure 7-26, in a PIM-SM network, the Receiver joins the multicast group 225.1.1.1
and can normally receive multicast packets from the Source. You are required to obtain the
multicast path along which multicast packets are sent from the Source to the Receiver.
In such a scenario, you can perform an NQA MTrace test on Router C to detect the multicast
path from the multicast source to the destination host.
[Figure 7-26 (networking diagram): the Source (11.1.0.2/24) attaches to Router E (GE1/0/0 11.1.0.1/24); Router E connects to Router C (Loopback0 1.1.1.1/32) over 11.1.1.0/24 and to Router D over 11.1.2.0/24; Router C connects to Router B over 11.1.3.0/24; Router D connects to Router B over 11.1.4.0/24 and to Router A over 11.1.5.0/24; Router A (GE3/0/0 11.1.6.3/24), Router B (GE3/0/0 11.1.6.2/24), and the Receiver (11.1.6.4/24) share the 11.1.6.0/24 segment.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router C as an NQA client and create an MTrace test on Router C.
2. Start the NQA MTrace test and view test results.
Data Preparation
To complete the configuration, you need the following data:
l Test instance name, admin mtrace
l IP address of the multicast source, 11.1.0.2
l Multicast group address, 225.1.1.1
l IP address of the destination host, 11.1.6.4
Procedure
Step 1 Create an NQA MTrace test on Router C.
<RouterC> system-view
[RouterC] nqa test-instance admin mtrace
[RouterC-nqa-admin-mtrace] test-type mtrace
[RouterC-nqa-admin-mtrace] mtrace-source-address ipv4 11.1.0.2
[RouterC-nqa-admin-mtrace] mtrace-group-address ipv4 225.1.1.1
[RouterC-nqa-admin-mtrace] destination-address ipv4 11.1.6.4
[RouterC-nqa-admin-mtrace] mtrace-query-type destination
Step 2 Start the NQA MTrace test.
[RouterC-nqa-admin-mtrace] start now
Step 3 View the test result. You can find that the multicast path from the multicast source to Router C
is Router E → Router D → Router B.
[RouterC-nqa-admin-mtrace] display nqa results test-instance admin mtrace
NQA entry(admin, mtrace) :testflag is inactive ,testtype is mtrace
1 . Test 1 result The test is finished
Completions: success Query Mode: max-hop
Current Hop:3 Current Probe:1
SendProbe:1 ResponseProb:1
Timeout Count:0 Busy Count:0
Drop Count:0 Max Path Ttl:4
Responser:11.1.2.2 Response Rtt: 83
mtrace start time: 2007-2-7 17:26:53.2
Last Good Probe Time: 2007-2-7 17:26:53.3
Last Good Path Time: 2007-2-7 17:26:53.3
1 . Hop 1
Outgoing Interface Address: 11.1.6.2
Incoming Interface Address: 11.1.4.1
Prehop router Address: 11.1.4.2
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:4
SG Packet Count:255 Hop Time Delay(ms): 0xffffffff
Input Packet Count:9207 Output Packet Count:9596
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
2 . Hop 2
Outgoing Interface Address: 11.1.4.2
Incoming Interface Address: 11.1.2.1
Prehop router Address: 11.1.2.2
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:3
SG Packet Count:255 Hop Time Delay(ms):327
Input Packet Count:9627 Output Packet Count:9222
----End
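The four MTrace examples differ only in which parameters the test instance carries. As a summary sketch (the dictionary keys are descriptive labels of mine; the command keywords are taken from the example configurations above):

```python
# Extra commands each MTrace variant adds beside "test-type mtrace".
MTRACE_VARIANTS = {
    "RPF path to the querier": [
        "mtrace-source-address"],
    "multicast path to the querier": [
        "mtrace-source-address", "mtrace-group-address"],
    "RPF path to a destination host": [
        "mtrace-source-address", "destination-address",
        "mtrace-query-type last-hop", "mtrace-last-hop-address"],
    "multicast path to a destination host": [
        "mtrace-source-address", "mtrace-group-address",
        "destination-address", "mtrace-query-type destination"],
}

for goal, commands in MTRACE_VARIANTS.items():
    print(goal, "->", ", ".join(commands))
```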
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 11.1.5.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 11.1.6.3 255.255.255.0
igmp enable
pim sm
#
ospf 1
area 0.0.0.0
network 11.1.3.0 0.0.0.255
network 11.1.6.0 0.0.0.255
#
pim
#
return
Networking Requirements
As shown in Figure 7-27, CE-A and CE-B respectively access PE-A and PE-B through Virtual
Local Area Networks (VLANs). PE-A and PE-B are linked through the Multiprotocol Label
Switching (MPLS) backbone network. A dynamic PW is set up between PE-A and PE-B through
an LSP.
In such a scenario, you can perform a PWE3 Ping test to check the connectivity of the one-hop
PW.
[Figure 7-27 (networking diagram): CE-A (GE1/0/0.1, VLAN 1, 100.1.1.1/24) attaches to PE-A (Loopback0 192.2.2.2/32); PE-A connects to P (Loopback0 192.4.4.4/32) over POS interfaces on 10.1.1.0/24, and P connects to PE-B (Loopback0 192.3.3.3/32) over 10.2.2.0/24; CE-B (GE1/0/0.1, VLAN 2, 100.1.1.2/24) attaches to PE-B; a PW runs between PE-A and PE-B across the MPLS backbone.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Run IGP on the backbone network to implement communication between routers.
2. Configure the basic MPLS functions on the backbone network and set up an LSP. Establish
MPLS LDP peer relationship between the PEs on both ends of the PW.
3. Set up an MPLS L2VC between the PEs.
4. On PE-A, configure a PWE3 Ping test for the one-hop PW.
Data Preparation
To complete the configuration, you need the following data.
Procedure
Step 1 Configure a dynamic PW.
For details about configuring a one-hop PW on the MPLS backbone network, refer to the chapter
"PWE3 Configuration" in the HUAWEI NetEngine80E/40E Router Configuration Guide -
VPN.
Step 2 Configure a PWE3 Ping test on the one-hop PW.
# Configure PE-A.
<PE-A> system-view
[PE-A] nqa test-instance test pwe3ping
[PE-A-nqa-test-pwe3ping] test-type pwe3ping
[PE-A-nqa-test-pwe3ping] local-pw-id 100
[PE-A-nqa-test-pwe3ping] local-pw-type vlan
[PE-A-nqa-test-pwe3ping] label-type control-word
----End
Configuration Files
l Configuration file of CE-A
#
sysname CE-A
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 1
ip address 100.1.1.1 255.255.255.0
#
return
l Configuration file of PE-A
mpls
mpls ldp
#
interface LoopBack0
ip address 192.2.2.2 255.255.255.255
#
nqa test-instance test pwe3ping
test-type pwe3ping
local-pw-id 100
local-pw-type vlan
#
ospf 1
area 0.0.0.0
network 192.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 192.4.4.4
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.2.2.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 192.4.4.4 255.255.255.255
#
ospf 1
area 0.0.0.0
network 192.4.4.4 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.2.0 0.0.0.255
#
return
l Configuration file of PE-B
#
sysname PE-B
#
mpls lsr-id 192.3.3.3
mpls
#
mpls l2vpn
#
mpls ldp
#
mpls ldp remote-peer 192.2.2.2
remote-ip 192.2.2.2
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.2.2.2 255.255.255.0
mpls
mpls ldp
#
pw-template wwt
peer-address 192.2.2.2
control-word
interface GigabitEthernet 1/0/0.1
undo shutdown
mpls l2vc 192.2.2.2 pw-template wwt 100
#
interface LoopBack0
ip address 192.3.3.3 255.255.255.255
#
ospf 1
area 0.0.0.0
network 192.3.3.3 0.0.0.0
network 10.2.2.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 7-28, CE-A and CE-B respectively access U-PE1 and U-PE2 through the
Point-to-Point Protocol (PPP). U-PE1 and U-PE2 are linked through the MPLS backbone
network. A dynamic multi-hop PW is set up between U-PE1 and U-PE2 through the LSP, with
S-PE being a switching node.
In such a scenario, you can perform a PWE3 Ping test to check the connectivity of the multi-
hop PW.
[Figure 7-28 (networking diagram): CE-A attaches to U-PE1 and CE-B to U-PE2 over PPP links; U-PE1 and U-PE2 connect across the MPLS backbone with S-PE as the PW switching node.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Run IGP on the backbone network to implement communication between routers.
2. Configure the basic MPLS functions on the backbone network and set up an LSP. Establish
MPLS LDP peer relationship between U-PE1 and S-PE and between U-PE2 and S-PE.
3. Set up an MPLS L2VC between the U-PEs.
4. Set up a switching PW on S-PE.
5. On U-PE1, configure a PWE3 Ping test for the multi-hop PW.
Data Preparation
To complete the configuration, you need the following data.
l L2VC IDs of U-PE1 and U-PE2
l MPLS LSR IDs of U-PE1, S-PE, and U-PE2
l IP address of the peer
l Encapsulation type of the switching PW
l Name of the PW template configured on the U-PEs and parameters of the PW template
Procedure
Step 1 Configure a multi-hop PW.
For details about configuring a multi-hop PW on the MPLS backbone network, refer to the
chapter "PWE3 Configuration" in the HUAWEI NetEngine80E/40E Router Configuration
Guide - VPN.
Step 2 Configure a PWE3 Ping test on the multi-hop PW.
# Configure U-PE1.
<U-PE1> system-view
[U-PE1] nqa test-instance test pwe3ping
[U-PE1-nqa-test-pwe3ping] test-type pwe3ping
[U-PE1-nqa-test-pwe3ping] local-pw-id 100
[U-PE1-nqa-test-pwe3ping] local-pw-type ppp
[U-PE1-nqa-test-pwe3ping] label-type control-word
[U-PE1-nqa-test-pwe3ping] remote-pw-id 200
----End
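On a multi-hop PW the switching node stitches the two PW segments together by PW ID (local-pw-id 100 on the U-PE1 side, remote-pw-id 200 on the U-PE2 side in this example). A schematic sketch of such a switching table (the dict-based model and peer names are illustrative; only the PW IDs come from the example):

```python
# Hypothetical PW switching table on S-PE: (ingress peer, PW ID) ->
# (egress peer, PW ID). PW IDs 100/200 match the example above.
PW_SWITCH_TABLE = {
    ("U-PE1", 100): ("U-PE2", 200),
    ("U-PE2", 200): ("U-PE1", 100),
}

def switch_pw(peer, pw_id):
    """Relay a frame from one PW segment onto the other."""
    return PW_SWITCH_TABLE[(peer, pw_id)]

print(switch_pw("U-PE1", 100))  # ('U-PE2', 200)
```

This is why the PWE3 Ping test on U-PE1 must carry both the local and the remote PW ID: the echo request traverses two stitched segments.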
Configuration Files
l Configuration file of CE-A
#
sysname CE-A
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 100.1.1.1 255.255.255.0
#
return
pw-template wwt
peer-address 3.3.3.9
control-word
interface Pos1/0/0
undo shutdown
link-protocol ppp
mpls l2vc 3.3.3.9 pw-template wwt 100
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
#
nqa test-instance test pwe3ping
test-type pwe3ping
local-pw-id 100
local-pw-type ppp
remote-pw-id 200
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
return
l Configuration file of P1
#
sysname P1
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 20.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
undo shutdown
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 20.1.1.0 0.0.0.255
#
return
l Configuration file of S-PE
#
sysname S-PE
#
Networking Requirements
As shown in Figure 7-29, CE-A and CE-B respectively access PE-A and PE-B through VLANs.
PE-A and PE-B are linked through the MPLS backbone network. A dynamic PW is set up
between PE-A and PE-B through an LSP.
In such a scenario, you can perform a PWE3 Trace test to check the connectivity of the one-hop
PW.
[Figure 7-29 (networking diagram): CE-A (GE1/0/0.1, VLAN 1, 100.1.1.1/24) attaches to PE-A (Loopback0 192.2.2.2/32); PE-A connects to P (Loopback0 192.4.4.4/32) over POS interfaces on 10.1.1.0/24, and P connects to PE-B (Loopback0 192.3.3.3/32) over 10.2.2.0/24; CE-B (GE1/0/0.1, VLAN 2, 100.1.1.2/24) attaches to PE-B; a PW runs between PE-A and PE-B across the MPLS backbone.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Run IGP on the backbone network to implement communication between routers.
2. Configure the basic MPLS functions on the backbone network and set up an LSP. Establish
MPLS LDP peer relationship between the PEs on both ends of the PW.
3. Set up an MPLS L2VC between the PEs.
4. On PE-A, configure a PWE3 Trace test for the one-hop PW.
Data Preparation
To complete the configuration, you need the following data.
Procedure
Step 1 Configure a one-hop PW.
For details about configuring a one-hop PW on the MPLS backbone network, refer to the chapter
"PWE3 Configuration" in the HUAWEI NetEngine80E/40E Router Configuration Guide -
VPN.
Step 2 Configure a PWE3 Trace test on the one-hop PW.
# Configure PE-A.
<PE-A> system-view
[PE-A] nqa test-instance test pwe3trace
[PE-A-nqa-test-pwe3trace] test-type pwe3trace
[PE-A-nqa-test-pwe3trace] local-pw-type vlan
[PE-A-nqa-test-pwe3trace] local-pw-id 100
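The commands above only create and configure the test instance; before any history or results can be displayed, the instance must be started. A minimal sketch, assuming the test is started immediately:
[PE-A-nqa-test-pwe3trace] start now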
Run the display nqa history command on the PEs. The output shows that the test status is "Success".
[PE-A-nqa-test-pwe3trace] display nqa history
NQA entry(test, pwe3trace) history:
Index T/H/P Response Status Address Time
1 1/1/1 4 success 10.1.1.2 2006-9-30 9:33:3.301
2 1/1/2 5 success 10.1.1.2 2006-9-30 9:33:3.307
3 1/1/3 3 success 10.1.1.2 2006-9-30 9:33:3.311
4 1/2/1 6 success 3.3.3.9 2006-9-30 9:33:3.318
5 1/2/2 6 success 3.3.3.9 2006-9-30 9:33:3.324
6 1/2/3 6 success 3.3.3.9 2006-9-30 9:33:3.331
Run the display nqa results command on the PEs. The output shows that the test succeeds.
[PE-A-nqa-test-pwe3trace] display nqa results
NQA entry(test, pwe3trace) :testflag is inactive ,testtype is pwe3trace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Drop operation number:0
Last good path Time:2006-9-24 11:22:21.2
1 . Hop 1
Send operation times: 3 Receive response times: 3
Min/Max/Average Completion Time: 1050/1090/1053
Sum/Square-Sum Completion Time: 3160/3331000
OverThresholds number: 0
Last Good Probe Time: 2006-9-24 11:22:17.2
Destination ip address:10.1.1.2
Lost packet ratio: 0 %
2 . Hop 2
Send operation times: 3 Receive response times: 3
Min/Max/Average Completion Time: 1050/1490/1323
Sum/Square-Sum Completion Time: 3970/5367500
OverThresholds number: 0
Last Good Probe Time: 2006-8-24 11:22:21.2
Destination ip address:10.2.2.2
Lost packet ratio: 0 %
----End
Configuration Files
l Configuration file of CE-A
#
sysname CE-A
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 1
ip address 100.1.1.1 255.255.255.0
#
return
l Configuration file of PE-A
#
sysname PE-A
#
mpls lsr-id 192.2.2.2
mpls
#
mpls l2vpn
#
mpls ldp
#
mpls ldp remote-peer 192.3.3.3
remote-ip 192.3.3.3
#
pw-template wwt
peer-address 192.3.3.3
control-word
interface GigabitEthernet1/0/0.1
vlan-type dot1q 1
mpls l2vc 192.3.3.3 pw-template wwt 100
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 192.2.2.2 255.255.255.255
#
nqa test-instance test pwe3trace
test-type pwe3trace
local-pw-type vlan
local-pw-id 100
#
ospf 1
area 0.0.0.0
network 192.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 192.4.4.4
mpls
#
mpls ldp
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.2.2.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
undo shutdown
Networking Requirements
As shown in Figure 7-30, CE-A and CE-B access U-PE1 and U-PE2, respectively, through PPP. U-PE1 and U-PE2 are linked through the MPLS backbone network. A dynamic multi-hop
PW is set up between U-PE1 and U-PE2 through the LSP, with S-PE being a switching node.
In such a scenario, you can perform a PWE3 Trace test to check the connectivity of the multi-
hop PW.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data.
Procedure
Step 1 Configure a multi-hop PW.
For details about configuring a dynamic multi-hop PW on the MPLS backbone network, refer
to the chapter "PWE3 Configuration" in the HUAWEI NetEngine80E/40E Router Configuration
Guide - VPN.
Step 2 Configure the PWE3 Trace test on a multi-hop PW.
# Configure U-PE1.
<U-PE1> system-view
[U-PE1] nqa test-instance test pwe3trace
[U-PE1-nqa-test-pwe3trace] test-type pwe3trace
[U-PE1-nqa-test-pwe3trace] local-pw-id 100
[U-PE1-nqa-test-pwe3trace] local-pw-type ppp
[U-PE1-nqa-test-pwe3trace] label-type control-word
[U-PE1-nqa-test-pwe3trace] remote-pw-id 200
Run the display nqa results command on the PEs. The output shows that the test succeeds.
[U-PE1-nqa-test-pwe3trace] display nqa results
NQA entry(test, pwe3trace) :testflag is inactive ,testtype is pwe3trace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Drop operation number:0
Last good path Time:2006-9-24 11:22:21.2
1 . Hop 1
Send operation times: 3 Receive response times: 3
Min/Max/Average Completion Time: 1050/1090/1053
Sum/Square-Sum Completion Time: 3160/3331000
OverThresholds number: 0
----End
Configuration Files
l Configuration file of CE-A
#
sysname CE-A
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 100.1.1.1 255.255.255.0
#
return
#
mpls ldp
#
mpls ldp remote-peer 3.3.3.9
remote-ip 3.3.3.9
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 40.1.1.2 255.255.255.0
mpls
mpls ldp
#
pw-template wwt
peer-address 3.3.3.9
control-word
interface Pos2/0/0
undo shutdown
link-protocol ppp
mpls l2vc 3.3.3.9 pw-template wwt 100
#
interface LoopBack0
ip address 5.5.5.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 40.1.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 7-31, OSPF runs on the routers in the MPLS backbone network to implement
communication within an Autonomous System (AS). Configure a Kompella VLL by using the
inter-AS multi-hop VCs.
In such a scenario, you can perform a VC Trace test to check the connectivity of the multi-hop
Kompella VLL.
Figure 7-31 Networking diagram of configuring the VC Trace test on an inter-AS multi-hop
Kompella VLL
Figure 7-31 topology: In AS 100 (BGP/MPLS backbone), CE1 (POS1/0/0, 10.1.1.1/24) attaches to PE1 (Loopback1 1.1.1.1/32; POS2/0/0 20.1.1.1/30), which connects to ASBR-PE1 (Loopback1 2.2.2.2/32; POS1/0/0 20.1.1.2/30; POS2/0/0 30.1.1.1/30). In AS 200 (BGP/MPLS backbone), ASBR-PE2 (Loopback1 3.3.3.3/32; POS1/0/0 30.1.1.2/30; POS2/0/0 40.1.1.1/30) connects to PE2 (Loopback1 4.4.4.4/32; POS1/0/0 40.1.1.2/30), to which CE2 (10.1.1.2/24) attaches.
Configuration Roadmap
The configuration roadmap is as follows:
1. Run IGP on the backbone network to ensure the connectivity between routers in the same
AS.
2. Enable MPLS on the backbone network and establish dynamic LSPs between the PEs and
the ASBR-PEs.
3. Establish an IBGP peer relationship between the PEs and the ASBR-PEs in the same AS
and an EBGP peer relationship between the ASBR-PEs.
4. Configure the routing policies on the ASBR-PEs and enable label-based routing.
5. Establish MP EBGP peer relationship between PE1 and PE2.
6. Create a Kompella VLL between PE1 and PE2. Create L2VPN instances on ASBRs.
7. Configure a VC Trace test on an inter-AS multi-hop Kompella VLL.
Data Preparation
To complete the configuration, you need the following data.
Procedure
Step 1 Configure an inter-AS multi-hop Kompella VLL.
For details about configuring an inter-AS multi-hop Kompella VLL, refer to the chapter "MPLS
L2VPN Configuration" in the HUAWEI NetEngine80E/40E Router Configuration Guide -
VPN.
Run the display nqa results command on the PEs. The output shows that the test succeeds.
[U-PE1-nqa-test-pwe3trace] display nqa results
NQA entry(test, pwe3trace) :testflag is inactive ,testtype is pwe3trace
1 . Test 1 result The test is finished
Completion:success Attempts number:1
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Drop operation number:0
Last good path Time:2009-3-2 9:26:57.0
1 . Hop 1
Send operation times: 3 Receive response times: 3
Min/Max/Average Completion Time: 4/12/8
Sum/Square-Sum Completion Time: 25/241
RTD OverThresholds number: 0
Last Good Probe Time: 2009-3-2 9:26:56.9
Destination ip address:20.1.1.2
Lost packet ratio: 0 %
2 . Hop 2
Send operation times: 3 Receive response times: 3
Min/Max/Average Completion Time: 10/37/20
Sum/Square-Sum Completion Time: 60/1638
RTD OverThresholds number: 0
Last Good Probe Time: 2009-3-2 9:26:57.0
Destination ip address:30.1.1.2
Lost packet ratio: 0 %
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
return
#
mpls ldp
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 30.1.1.2 255.255.255.252
mpls
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 40.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 200
peer 4.4.4.4 as-number 200
peer 4.4.4.4 connect-interface LoopBack1
peer 30.1.1.1 as-number 100
peer 30.1.1.1 connect-interface Pos1/0/0
#
ipv4-family unicast
undo synchronization
network 4.4.4.4 255.255.255.255
peer 4.4.4.4 enable
peer 4.4.4.4 route-policy policy1 export
peer 4.4.4.4 label-route-capability
peer 30.1.1.1 enable
peer 30.1.1.1 route-policy policy1 export
peer 30.1.1.1 label-route-capability
#
l2vpn-family
policy vpn-target
peer 4.4.4.4 enable
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 40.1.1.0 0.0.0.255
#
route-policy policy1 permit node 1
apply mpls-label
#
return
l Configuration file of PE2
#
sysname PE2
#
mpls lsr-id 4.4.4.4
mpls
mpls l2vpn
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 40.1.1.2 255.255.255.252
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
#
interface LoopBack1
undo shutdown
ip address 4.4.4.4 255.255.255.255
#
bgp 200
peer 1.1.1.1 as-number 100
peer 1.1.1.1 ebgp-max-hop 255
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 200
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
peer 3.3.3.3 label-route-capability
#
l2vpn-family
policy vpn-target
peer 1.1.1.1 enable
#
ospf 1
area 0.0.0.0
network 40.1.1.0 0.0.0.255
network 4.4.4.4 0.0.0.0
#
mpls l2vpn vpn1 encapsulation ppp
route-distinguisher 200:1
vpn-target 1:1 import-extcommunity
vpn-target 1:1 export-extcommunity
ce ce2 id 2 range 10 default-offset 0
connection ce-offset 1 interface Pos2/0/0
#
#
return
Networking Requirements
Create a Jitter test based on the networking diagram shown in Figure 7-32. Configure transmission delay thresholds and enable the trap function. After the Jitter test is completed, if the test result shows that the delay of some test packets from Router A to Router C (or from Router C to Router A) exceeds the uni-directional transmission delay threshold, or that the round-trip delay exceeds the round-trip transmission delay threshold, Router A sends a trap message to the NM station. Based on the received trap message, the NM station can quickly locate the cause of the fault.
Figure 7-32 Networking diagram of enabling the trap function when the transmission delay
exceeds the threshold
Figure 7-32 topology: Router A (GE1/0/0 10.1.1.1/24; GE2/0/0 20.1.1.1/24) connects to the NM station (20.1.1.2/24) and to Router B (GE1/0/0 10.1.1.2/24; GE2/0/0 30.1.1.1/24). Router B connects to Router C (GE1/0/0 30.1.1.2/24), which acts as the NQA server.
NOTE
For clock synchronization, refer to the chapter "NTP" in the HUAWEI NetEngine80E/40E Router Feature
Description - System Management.
Configuration Roadmap
The configuration roadmap is as follows:
1. Set a transmission delay threshold.
2. Enable the trap function.
3. Enable sending trap messages to the NM station.
Data Preparation
To complete the configuration, you need the following data:
l IP address and port number of the NQA server
l Monitoring service type and the port number to be monitored
l Uni-directional transmission delay and round-trip transmission delay
l IP address of the NM station
Procedure
Step 1 Configure routes between Router A, Router B, and Router C. (The detailed procedure is not
mentioned here.)
Step 2 Create a Jitter test.
# Configure Router C as an NQA server and set the IP address and UDP port number monitored
by the NQA server.
<RouterC> system-view
[RouterC] nqa-server udpecho 30.1.1.2 9000
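The threshold commands that follow are entered in the view of a Jitter test instance on Router A. Creating that instance is not shown in this step; the following sketch mirrors the test-instance lines that appear in the configuration file of Router A later in this example:
<RouterA> system-view
[RouterA] nqa test-instance test jitter
[RouterA-nqa-test-jitter] test-type jitter
[RouterA-nqa-test-jitter] destination-address ipv4 30.1.1.2
[RouterA-nqa-test-jitter] destination-port 9000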
# Configure the uni-directional transmission (from the destination to the source) delay threshold
on Router A.
[RouterA-nqa-test-jitter] threshold owd-ds 100
# Configure the uni-directional transmission (from the source to the destination) delay threshold
on Router A.
[RouterA-nqa-test-jitter] threshold owd-sd 100
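The commands for enabling the trap function and for sending trap messages to the NM station (roadmap steps 2 and 3) are not shown as commands in this procedure; the following sketch mirrors the corresponding lines in the configuration file of Router A later in this example:
[RouterA-nqa-test-jitter] send-trap rtd
[RouterA-nqa-test-jitter] send-trap owd-sd
[RouterA-nqa-test-jitter] send-trap owd-ds
[RouterA-nqa-test-jitter] quit
[RouterA] snmp-agent target-host trap address udp-domain 20.1.1.2 params securityname public v2c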
----End
Configuration Files
l Configuration files of Router A
#
sysname RouterA
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 20.1.1.1 255.255.255.0
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
#
nqa test-instance test jitter
test-type jitter
destination-address ipv4 30.1.1.2
destination-port 9000
threshold rtd 20
threshold owd-sd 100
threshold owd-ds 100
send-trap rtd
send-trap owd-sd
send-trap owd-ds
#
snmp-agent
snmp-agent local-engineid 000007DB7F00000100007B29
snmp-agent sys-info version v2c
snmp-agent target-host trap address udp-domain 20.1.1.2 params securityname
public v2c
#
return
link-protocol ppp
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 30.1.1.1 255.255.255.0
#
ospf 1
area 0.0.0.1
network 10.1.1.0 0.0.0.255
network 30.1.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 7-33, Router A serves as the client to perform the ICMP test and send test
results to the FTP server through FTP.
Figure 7-33 Networking diagram of sending test results to the FTP server
Figure 7-33 topology: Router A (GE1/0/0 11.1.1.11/24; GE2/0/0 11.1.2.1/24) connects to Router B (GE1/0/0 11.1.1.10/24) and to the FTP server (11.1.2.8/24).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure parameters for connecting the FTP server.
2. Enable the FTP server to save test results through FTP.
3. Configure the number of test results saved through FTP.
4. Configure the duration of saving test results through FTP.
5. Configure test results to be sent.
6. Start the test instance.
7. Verify the configuration.
Data Preparation
To complete the configuration, you need the following data:
l User name and password used for logging into the FTP server
l Number of test results saved through FTP
l Duration of saving test results through FTP
Procedure
Step 1 Configure parameters for connecting the FTP server.
# Configure the IP address of the client that is connected to the FTP server.
<RouterA> system-view
[RouterA] nqa-ftp-record ip-address 11.1.2.8
# Configure the user name for logging into the FTP server.
[RouterA] nqa-ftp-record username ftp
[RouterA] nqa-ftp-record password ftp
Step 2 Configure the number of test results to be saved in a file through FTP.
[RouterA] nqa-ftp-record item-num 10010
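The roadmap also calls for configuring the duration of saving test results through FTP, and the configuration file of Router A later in this example additionally sets the record file name. Assuming the same values as that file, the corresponding commands would be:
[RouterA] nqa-ftp-record filename icmp
[RouterA] nqa-ftp-record time 2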
Step 3 Send an alarm to the NM station after the FTP transmission succeeds.
[RouterA] nqa-ftp-record trap-enable
Step 4 Enable the FTP server to save NQA test results through FTP on Router A.
<RouterA> system-view
[RouterA] nqa-ftp-record enable
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/0
ip address 11.1.1.11 255.255.255.0
#
interface GigabitEthernet2/0/0
ip address 11.1.2.1 255.255.255.0
#
interface NULL0
#
aaa
authentication-scheme default
#
authorization-scheme default
#
accounting-scheme default
#
domain default
#
nqa-ftp-record enable
nqa-ftp-record trap-enable
nqa-ftp-record ip-address 11.1.2.8
nqa-ftp-record username ftp
nqa-ftp-record password %$%$gw1.QU~4M1I@ESF>b/VP,@7.%$%$
nqa-ftp-record filename icmp
nqa-ftp-record item-num 10010
nqa-ftp-record time 2
nqa test-instance admin icmp
test-type icmp
destination-address ipv4 11.1.1.10
frequency 5
#
snmp-agent
snmp-agent local-engineid 000007DB7F000001000021D7
snmp-agent community read public
snmp-agent community write private
snmp-agent sys-info version all
snmp-agent target-host trap address udp-domain 11.1.1.8 params securityname
wan
snmp-agent trap enable feature-name nqa trap-name nqaftpsaverecordnotification
#
user-interface con 0
user-interface vty 0 4
user-interface vty 16 20
#
return
#
interface GigabitEthernet1/0/0
ip address 11.1.1.10 255.255.255.0
#
return
Networking Requirements
As shown in Figure 7-34, Router A serves as the client to perform the ICMP Jitter test and
monitor the packet loss ratio of the test result. If the ratio exceeds the threshold, an alarm is sent
to the NM station.
Figure 7-34 Networking diagram of configuring a threshold for the NQA alarm
Figure 7-34 topology: Router A (GE1/0/0 11.1.1.1/24; GE2/0/0 11.1.2.1/24) connects to Router B (GE1/0/0 11.1.1.20/24) and to the NM station (11.1.2.8/24).
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure Router A as the client of the ICMP Jitter test. The configuration details are not
mentioned here.
Step 2 Configure the event corresponding to the alarm on Router A.
<RouterA> system-view
[RouterA] nqa event 10 log-trap
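The configuration file of Router A later in this example also binds alarm entry 10, with rising and falling thresholds for the packet loss ratio, to this event in the view of the ICMP Jitter test instance; a sketch assuming the same threshold values:
[RouterA] nqa test-instance admin icmpjitter
[RouterA-nqa-admin-icmpjitter] alarm 10 lost-packet-ratio absolute rising-threshold 100 10 falling-threshold 10 10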
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/0
ip address 11.1.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
ip address 11.1.2.1 255.255.255.0
#
interface NULL0
#
aaa
authentication-scheme default
#
authorization-scheme default
#
accounting-scheme default
#
domain default
#
#
nqa-jitter tag-version 2
nqa event 10 log-trap
nqa test-instance admin icmpjitter
test-type icmpjitter
destination-address ipv4 11.1.1.20
frequency 5
alarm 10 lost-packet-ratio absolute rising-threshold 100 10 falling-threshold 10 10
#
snmp-agent
snmp-agent local-engineid 000007DB7F00000100000B31
snmp-agent sys-info version v2c v3
snmp-agent target-host trap address udp-domain 11.1.2.8 params securityname
alarm v2c
snmp-agent trap enable feature-name NQA trap-name nqaRisingAlarmNotification
snmp-agent trap enable feature-name NQA trap-name nqaFaillingAlarmNotification
#
user-interface con 0
user-interface vty 0 4
user-interface vty 16 20
#
aps fast-interval 0
#
return
Networking Requirements
As shown in Figure 7-35:
PE1 and PE2 are enabled with VPLS. CE1 is attached to PE1; CE2 is attached to PE2. CE1 and
CE2 are on the same VPLS network. PWs are established by using LDP as the VPLS signaling,
and VPLS is configured to realize the interworking between CE1 and CE2. PEs are enabled with
IGMP snooping.
With the NQA VPLS MFIB ping, you can check the following performance indicators of the
network shown in Figure 7-35:
l Multicast connectivity of PEs belonging to a specified VSI in the VPLS domain
l IGMP snooping of the egress belonging to a specified VSI in the VPLS domain
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure a Martini VPLS network.
Configure a Martini VPLS network on the MPLS backbone network. For details, refer to the
chapter "VPLS Configuration" in the NE80E/40E Configuration Guide - VPN.
Enable IGMP snooping on PE1 and PE2. For details, refer to the chapter "Layer 2 Multicast
Configuration" in the NE80E/40E Configuration Guide - VPN.
Step 3 Configure an NQA VPLS MFIB test instance and specify a non-reserved multicast address as
the destination address.
# Do as follows on PE1:
<PE1> system-view
[PE1] nqa test-instance test vplsmping
[PE1-nqa-test-vplsmping] test-type vplsmping
[PE1-nqa-test-vplsmping] destination-address ipv4 225.0.0.1
[PE1-nqa-test-vplsmping] vsi a2
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
ip address 100.1.1.1 255.255.255.0
#
return
igmp-snooping enable
#
l Configuration file of P
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
ip address 168.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
ip address 169.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 168.1.1.0 0.0.0.255
Networking Requirements
As shown in Figure 7-36:
PE1 and PE2 are enabled with VPLS. CE1 is attached to PE1; CE2 is attached to PE2. CE1 and
CE2 are on the same VPLS network.
PWs are established by using BGP as the VPLS signaling. The automatic discovery of VPLS
PEs is implemented through VPN targets to realize the interworking between CE1 and CE2. The
NQA VPLS MFIB ping test is used to check whether the multicast forwarding between PE1 and
PE2 is normal.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a Kompella VPLS network.
2. Initiate an NQA VPLS MFIB ping on PE1 and specify a reserved multicast address as the
destination address to initiate the test.
Data Preparation
To complete the configuration, you need the following data:
l IP addresses of peers
l Names of the VSIs on PE1 and PE2
l BGP AS numbers on PE1 and PE2
l Signaling protocol of a VSI, that is, BGP
l RDs, VPN targets, site IDs of VSIs on PEs
l Interfaces to which VSIs are bound and VLAN IDs of the interfaces
l Destination multicast address of the ping
Procedure
Step 1 Configure a Kompella VPLS network.
Configure a Kompella VPLS network on the MPLS backbone network. For details, refer to the
chapter "VPLS Configuration" in the NE80E/40E Configuration Guide - VPN.
Step 2 Configure an NQA VPLS MFIB ping test instance and specify a reserved multicast address as
the destination address.
# Do as follows on PE1:
<PE1> system-view
[PE1] nqa test-instance test vplsmping
[PE1-nqa-test-vplsmping] test-type vplsmping
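The procedure stops after specifying the test type; the configuration file of PE1 later in this example also specifies the reserved multicast destination address and the VSI. Assuming those values, the remaining commands would be:
[PE1-nqa-test-vplsmping] destination-address ipv4 224.0.0.1
[PE1-nqa-test-vplsmping] vsi a2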
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
ip address 100.1.1.1 255.255.255.0
#
return
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 10
l2 binding vsi bgp1
#
interface Pos2/0/0
link-protocol ppp
ip address 168.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
vpls-family
policy vpn-target
peer 3.3.3.9 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 168.1.1.0 0.0.0.255
#
nqa test-instance test vplsmping
test-type vplsmping
destination-address ipv4 224.0.0.1
vsi a2
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
ip address 168.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
ip address 169.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 168.1.1.0 0.0.0.255
network 169.1.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
#
mpls lsr-id 3.3.3.9
mpls
mpls l2vpn
#
vsi bgp1 auto
pwsignal bgp
route-distinguisher 169.1.1.2:1
vpn-target 100:1 import-extcommunity
vpn-target 100:1 export-extcommunity
site 2 range 5 default-offset 0
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
ip address 169.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0.1
vlan-type dot1q 10
l2 binding vsi bgp1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
vpls-family
policy vpn-target
peer 1.1.1.9 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 169.1.1.0 0.0.0.255
return
Networking Requirements
As shown in Figure 7-37:
PE1 and PE2 are enabled with VPLS. CE1 is attached to PE1; CE2 is attached to PE2. CE1 and
CE2 are on the same VPLS network. PWs are established by using LDP as the VPLS signaling,
and VPLS is configured to realize the interworking between CE1 and CE2. PEs are enabled with
IGMP snooping.
With the NQA VPLS MFIB trace, you can check the IGMP snooping status of the egress and
troubleshoot PEs.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a Martini VPLS network.
2. Enable IGMP snooping on PE1 and PE2.
3. Configure an NQA VPLS MFIB trace test instance on PE1 and specify a non-reserved
multicast address as the destination address to initiate the test.
Data Preparation
To complete the configuration, you need the following data:
l VSI name and VSI ID
l IP addresses of peers and the tunnel policy used for setting up the peer relationship
l Name of an interface to which a VSI is bound
l Destination multicast address of the trace
Procedure
Step 1 Configure a Martini VPLS network.
Configure a Martini VPLS network on the MPLS backbone network. For details, refer to the
chapter "VPLS Configuration" in the NE80E/40E Configuration Guide - VPN.
Step 2 Enable IGMP snooping.
Enable IGMP snooping on PE1 and PE2. For details, refer to the chapter "Layer 2 Multicast
Configuration" in the NE80E/40E Configuration Guide - VPN.
Step 3 Configure an NQA VPLS MFIB trace test instance and specify a non-reserved multicast address
as the destination address.
# Do as follows on PE1:
<PE1> system-view
[PE1] nqa test-instance test vplsmtrace
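Only the creation of the test instance is shown here; the configuration file of PE1 later in this example contains the full instance. A sketch based on those lines:
[PE1-nqa-test-vplsmtrace] test-type vplsmtrace
[PE1-nqa-test-vplsmtrace] destination-address ipv4 225.0.0.1
[PE1-nqa-test-vplsmtrace] vsi a2
[PE1-nqa-test-vplsmtrace] remote-address ipv4 3.3.3.9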
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
ip address 100.1.1.1 255.255.255.0
#
return
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 3.3.3.9
igmp-snooping enable
#
mpls ldp
#
mpls ldp remote-peer pe2
remote-ip 3.3.3.9
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 10
l2 binding vsi a2
#
interface Pos2/0/0
link-protocol ppp
ip address 168.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 168.1.1.0 0.0.0.255
#
nqa test-instance test vplsmtrace
test-type vplsmtrace
destination-address ipv4 225.0.0.1
vsi a2
remote-address ipv4 3.3.3.9
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
ip address 168.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
ip address 169.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 168.1.1.0 0.0.0.255
network 169.1.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
Networking Requirements
As shown in Figure 7-38, VPLS is enabled on PE1 and PE2; CE1 is attached to PE1 and CE2 to PE2; CE1 and CE2 are on the same VPLS network. PWs are established by using LDP as the VPLS signaling, and VPLS is configured to realize the interworking between CE1 and CE2.
A VPLS MAC ping test is used to check the connectivity of the VPLS network.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure a Martini VPLS network.
For details, refer to the chapter "VPLS Configuration" in the HUAWEI NetEngine80E/40E
Router Configuration Guide - VPN.
Step 2 Configure a VPLS MAC ping test instance based on the Martini VPLS network.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test vplsping
[PE1-nqa-test-vplsping] test-type vplsping
[PE1-nqa-test-vplsping] vsi a2
[PE1-nqa-test-vplsping] mac 00e0-5952-6f01
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
mpls ldp
#
mpls ldp remote-peer 3.3.3.9
remote-ip 3.3.3.9
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 168.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
nqa test-instance test vplsping
test-type vplsping
vsi a2
mac 00e0-5952-6f01
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 168.1.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 168.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 169.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 168.1.1.0 0.0.0.255
network 169.1.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
l Configuration file of PE2
#
sysname PE2
#
Networking Requirements
As shown in Figure 7-39, a VPLS MAC trace test is used to check the connectivity of the VPLS
network and locate faults.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a Martini VPLS network.
2. Configure a VPLS MAC trace test instance on PE1.
3. Start the NQA test.
Data Preparation
To complete the configuration, you need the following data:
l VSI name and VSI ID
l IP addresses of peers and the tunnel policy used for setting up the peer relationship
l Interface to which the VSI is bound
l A specified peer MAC address
Procedure
Step 1 Configure a Martini VPLS network.
For details, refer to the chapter "VPLS Configuration" in the HUAWEI NetEngine80E/40E
Router Configuration Guide - VPN.
Step 2 Configure a VPLS MAC trace test instance based on the Martini VPLS network.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test vplstrace
[PE1-nqa-test-vplstrace] test-type vplstrace
[PE1-nqa-test-vplstrace] vsi a2
[PE1-nqa-test-vplstrace] mac 00e0-5952-6f01
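The trace output that follows implies that the test instance was started and its results displayed; a sketch of those steps, assuming an immediate start:
[PE1-nqa-test-vplstrace] start now
[PE1-nqa-test-vplstrace] display nqa results test-instance test vplstrace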
1 . Hop 1
Lost packet ratio :0 %
Last Good Probe Time :2009-2-1 13:33:21.5
2 . Hop 2
Lost packet ratio :0 %
Last Good Probe Time :2009-2-1 13:33:23.5
Send operation times :1
Destination ip address :
Receive response times :1
RTD OverThresholds number :0
Min/Max/Average Completion Time :0/0/0
Sum/Square-Sum Completion Time :0/0
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
link-protocol ppp
undo shutdown
ip address 168.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 168.1.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 168.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 169.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 168.1.1.0 0.0.0.255
network 169.1.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
l Configuration file of PE2
#
sysname PE2
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 1.1.1.9
#
mpls ldp
#
mpls ldp remote-peer 1.1.1.9
remote-ip 1.1.1.9
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 169.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 169.1.1.0 0.0.0.255
#
return
Networking Requirements
On a VPLS network, the performance of PWs affects the entire network performance. For
example, the connectivity of PWs determines whether traffic can be normally forwarded between
users, and the forwarding performance of PWs determines whether the forwarding capacity of
the network complies with the SLA signed with users. To monitor PWs on the VPLS network,
VPLS PW ping and VPLS PW trace are developed for detecting the connectivity of PWs,
collecting performance information about PWs, discovering packet forwarding paths along PWs,
and locating faults on PWs.
In principle, VPLS PW ping and VPLS PW trace operations initiated through NQA commands
are the same as ping and trace operations initiated through common command lines; in addition,
they provide a scheduling and result collection mechanism and a threshold-exceeding alarm
function. The trace operation, which locates faults and discovers packet forwarding paths, can
be combined with the ping operation: when the ping operation finds a fault, you can use the
trace operation to locate it.
As shown in Figure 7-40, VPLS PW ping and VPLS PW trace test instances can be used to detect
the connectivity of a VPLS network and locate faults on the PW.
Figure 7-40 Networking diagram of configuring VPLS PW ping and VPLS PW trace test
instances
The diagram shows the following interfaces and addresses:
l CE1: GE1/0/0.1, 10.1.1.1/24
l PE1: GE1/0/0.1 (to CE1); POS2/0/0, 168.1.1.1/24 (to P); Loopback1, 1.1.1.9/32
l P: POS1/0/0, 168.1.1.2/24 (to PE1); POS2/0/0, 169.1.1.1/24 (to PE2); Loopback1, 2.2.2.9/32
l PE2: POS1/0/0, 169.1.1.2/24 (to P); GE2/0/0.1 (to CE2); Loopback1, 3.3.3.9/32
l CE2: GE1/0/0.1, 10.1.1.2/24
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure the VPLS network and the service environment for starting NQA test instances.
In this example, a Martini VPLS network is configured.
2. Configure VPLS PW ping and VPLS PW trace test instances on PE1, and specify
mandatory configurations of test instances.
3. Start NQA test instances.
Data Preparation
To complete the configuration, you need the following data:
l Name and ID of the VSI
l IP addresses of peers and the tunnel policy used for setting up the peer relationship
l Interface to which the VSI is bound
Procedure
Step 1 Configure a Martini VPLS network.
For details, refer to the chapter "VPLS Configuration" in the HUAWEI NetEngine80E/40E
Router Configuration Guide - VPN.
Step 2 Configure VPLS PW ping and VPLS PW trace test instances.
1. Configure a VPLS PW ping test instance and start the test instance.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test vplspwping
[PE1-nqa-test-vplspwping] test-type vplspwping
[PE1-nqa-test-vplspwping] vsi a2
[PE1-nqa-test-vplspwping] destination-address ipv4 3.3.3.9
[PE1-nqa-test-vplspwping] start now
2. Configure a VPLS PW trace test instance and start the test instance.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test vplspwtrace
[PE1-nqa-test-vplspwtrace] test-type vplspwtrace
[PE1-nqa-test-vplspwtrace] vsi a2
[PE1-nqa-test-vplspwtrace] destination-address ipv4 3.3.3.9
[PE1-nqa-test-vplspwtrace] start now
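Once a test instance has been started and has finished probing, its results can be queried on the client. A short verification sketch, using the display nqa results syntax that appears in the Mtrace example later in this chapter:

```
<PE1> display nqa results test-instance test vplspwping
<PE1> display nqa results test-instance test vplspwtrace
```

The output includes the completion status and per-probe packet statistics collected by the result collection mechanism.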
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
l Configuration file of CE2
#
sysname CE2
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.2 255.255.255.0
#
return
l Configuration file of PE1
#
sysname PE1
#
mpls lsr-id 1.1.1.9
mpls
#
mpls l2vpn
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 3.3.3.9
#
mpls ldp
#
mpls ldp remote-peer 3.3.3.9
remote-ip 3.3.3.9
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 168.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
nqa test-instance test vplspwping
test-type vplspwping
vsi a2
destination-address ipv4 3.3.3.9
#
nqa test-instance test vplspwtrace
test-type vplspwtrace
vsi a2
destination-address ipv4 3.3.3.9
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 168.1.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 168.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 169.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 168.1.1.0 0.0.0.255
network 169.1.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
l Configuration file of PE2
#
sysname PE2
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 1.1.1.9
#
mpls ldp
#
mpls ldp remote-peer 1.1.1.9
remote-ip 1.1.1.9
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 169.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 169.1.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 7-41, all devices are on a VLAN network and are enabled with basic Ethernet
CFM functions. MAC ping and MAC trace test instances can be used to detect the connectivity
of the VLAN network and locate faults on it.
Figure 7-41 Networking diagram of configuring MAC ping and MAC trace for detecting the
connectivity of a VLAN network
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VLAN network and the service environment for starting the NQA test instance.
2. Configure Ethernet CFM and establish the mapping relationship between CFM and VLAN.
3. Configure an NQA MAC ping and MAC trace test instance on Router A, and specify
mandatory configurations for the test instance.
4. Start the NQA MAC ping and MAC trace test instance.
Data Preparation
To complete the configuration, you need the following data:
l VLAN ID
Procedure
Step 1 Configure the IP addresses. (The detailed procedure is not provided here.)
Step 2 Add Router A and Router B to VLAN 10.
# Configure Router A.
<HUAWEI> system-view
[HUAWEI] sysname RouterA
[RouterA] vlan 10
[RouterA-vlan10] quit
[RouterA] interface gigabitethernet 1/0/1
[RouterA-GigabitEthernet1/0/1] portswitch
[RouterA-GigabitEthernet1/0/1] port default vlan 10
# Configure Router B.
<HUAWEI> system-view
[HUAWEI] sysname RouterB
[RouterB] cfm version standard
[RouterB] cfm enable
[RouterB] cfm md md1
[RouterB-md-md1] ma ma1
[RouterB-md-md1-ma-ma1] map vlan 10
[RouterB-md-md1-ma-ma1] mep mep-id 2 interface gigabitethernet 1/0/1 outward
[RouterB-md-md1-ma-ma1] remote-mep mep-id 1
[RouterB-md-md1-ma-ma1] mep ccm-send enable
[RouterB-md-md1-ma-ma1] remote-mep ccm-receive enable
[RouterB-md-md1-ma-ma1] quit
[RouterB-md-md1] quit
NOTE
Each interface can be configured with only one MEP and the interface must be a Layer 2 interface.
Run the display cfm remote-mep command on Router A to view the status of Ethernet CFM.
The command output shows that the status of Ethernet CFM is Up.
[RouterA] display cfm remote-mep
The total number of RMEPs is : 1
The status of RMEPS : 1 up, 0 down, 0 disable
--------------------------------------------------
MD Name : md1
Level : 0
MA Name : ma1
RMEP ID : 2
Vlan ID : --
VSI Name : --
MAC : --
CCM Receive : enabled
Trigger-If-Down : disabled
CFM Status : up
Step 4 Configure MAC ping and MAC trace test instances to detect the connectivity of the VLAN
network.
1. Configure a VLAN MAC ping test instance and start the test instance.
# Configure Router A.
<RouterA> system-view
[RouterA] nqa test-instance test macping
[RouterA-nqa-test-macping] test-type macping
[RouterA-nqa-test-macping] destination-address mac 00e0-fca4-8ae7
[RouterA-nqa-test-macping] md md1 ma ma1
[RouterA-nqa-test-macping] mep mep-id 1
[RouterA-nqa-test-macping] start now
2. Configure a VLAN MAC trace test instance and start the test instance.
# Configure Router A.
<RouterA> system-view
[RouterA] nqa test-instance test mactrace
[RouterA-nqa-test-mactrace] test-type mactrace
[RouterA-nqa-test-mactrace] destination-address mac 00e0-fca4-8ae7
[RouterA-nqa-test-mactrace] md md1 ma ma1
[RouterA-nqa-test-mactrace] mep mep-id 1
[RouterA-nqa-test-mactrace] start now
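Because MAC ping and MAC trace depend on CFM, it helps to confirm that the remote MEP is up before interpreting the test results. A short verification sketch, reusing the display cfm remote-mep command from the CFM check above together with the display nqa results syntax used in the Mtrace example later in this chapter:

```
[RouterA] display cfm remote-mep
[RouterA] display nqa results test-instance test macping
[RouterA] display nqa results test-instance test mactrace
```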
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
cfm version standard
cfm enable
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
#
cfm md md1
ma ma1
map vlan 10
mep mep-id 1 interface GigabitEthernet1/0/1 outward
mep ccm-send mep-id 1 enable
remote-mep mep-id 2
remote-mep ccm-receive mep-id 2 enable
#
nqa test-instance test macping
test-type macping
destination-address mac 00e0-fca4-8ae7
md md1 ma ma1
mep mep-id 1
#
nqa test-instance test mactrace
test-type mactrace
destination-address mac 00e0-fca4-8ae7
md md1 ma ma1
mep mep-id 1
#
return
l Configuration file of Router B
#
sysname RouterB
#
cfm version standard
cfm enable
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
#
cfm md md1
ma ma1
map vlan 10
mep mep-id 2 interface GigabitEthernet1/0/1 outward
mep ccm-send mep-id 2 enable
remote-mep mep-id 1
remote-mep ccm-receive mep-id 1 enable
#
return
7.47.42 Example for Configuring a MAC Ping and MAC Trace Test
Instance to Detect the Connectivity of a VPLS Network
Networking Requirements
As shown in Figure 7-42, MAC ping and MAC trace are enabled to detect the connectivity of a
VPLS network and locate faults on it. Three PEs on the VPLS network are enabled with CFM functions.
An NQA MAC ping and trace test instance is configured on PE1, with the destination MAC
address of trace packets being the MAC address of the interface on PE2. The test instance is
initiated from PE1 to detect the connectivity between PE1 and PE2.
Figure 7-42 Networking diagram of configuring MAC ping and trace for detecting the
connectivity of a VPLS network
The diagram shows the following interfaces and addresses (the PE1-PE2 link addresses are taken from the configuration files):
l CE1: GE1/0/0.1, 10.1.1.1/24; CE2: GE1/0/0.1, 10.1.1.2/24; CE3: GE1/0/0.1, 10.1.1.3/24
l PE1: GE1/0/0.1 (to CE1); GE2/0/0, 100.1.1.1/30 (to PE2); GE3/0/0, 100.2.1.1/30 (to PE3); Loopback1, 1.1.1.1/32
l PE2: GE1/0/0.1 (to CE2); GE2/0/0, 100.1.1.2/30 (to PE1); GE3/0/0, 100.3.1.1/30 (to PE3); Loopback1, 2.2.2.2/32
l PE3: GE1/0/0.1 (to CE3); GE2/0/0, 100.2.1.2/30 (to PE1); GE3/0/0, 100.3.1.2/30 (to PE2); Loopback1, 3.3.3.3/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a Martini VPLS network and the service environment for starting the NQA test
instance.
2. Configure VPLS-based Ethernet CFM on PEs.
3. Configure an NQA MAC ping and trace test instance on PE1 to detect the connectivity of
the VPLS network.
Data Preparation
To complete the configuration, you need the following data:
l IP address of an interface
l MPLS LSR ID of each PE
l VSI names and VSI IDs on PE1, PE2, and PE3
l Interface to which the VSI is bound
l Name and level of the MD, name of the MA, MEP ID, name of the interface where the
MEP resides, and type of the MEP
l Destination MAC address
Procedure
Step 1 Configure routes between the PEs and the CEs.
Step 2 Configure the Martini VPLS on the MPLS backbone network.
For configuration details, refer to the chapter "VPLS Configuration" in the Configuration Guide
- VPN.
Step 3 Configure Ethernet CFM on PEs.
# Configure PE1.
[PE1] cfm version standard
[PE1] cfm enable
[PE1] cfm md md1
[PE1-md-md1] ma ma1
[PE1-md-md1-ma-ma1] ccm-interval 30
[PE1-md-md1-ma-ma1] map vsi ldp1
[PE1-md-md1-ma-ma1] mep mep-id 1 interface gigabitethernet 1/0/0.1 inward
[PE1-md-md1-ma-ma1] remote-mep mep-id 2
[PE1-md-md1-ma-ma1] remote-mep mep-id 3
[PE1-md-md1-ma-ma1] quit
# Configure PE2.
[PE2] cfm version standard
[PE2] cfm enable
[PE2] cfm md md1
[PE2-md-md1] ma ma1
[PE2-md-md1-ma-ma1] ccm-interval 30
[PE2-md-md1-ma-ma1] map vsi ldp1
[PE2-md-md1-ma-ma1] mep mep-id 2 interface gigabitethernet 1/0/0.1 inward
[PE2-md-md1-ma-ma1] remote-mep mep-id 1
[PE2-md-md1-ma-ma1] remote-mep mep-id 3
[PE2-md-md1-ma-ma1] quit
# Configure PE3.
[PE3] cfm version standard
[PE3] cfm enable
[PE3] cfm md md1
[PE3-md-md1] ma ma1
[PE3-md-md1-ma-ma1] ccm-interval 30
[PE3-md-md1-ma-ma1] map vsi ldp1
[PE3-md-md1-ma-ma1] mep mep-id 3 interface gigabitethernet 1/0/0.1 inward
[PE3-md-md1-ma-ma1] remote-mep mep-id 1
[PE3-md-md1-ma-ma1] remote-mep mep-id 2
[PE3-md-md1-ma-ma1] quit
Step 4 Configure MAC ping and MAC trace test instances.
1. Configure a VPLS MAC ping test instance and start the test instance.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test macping
[PE1-nqa-test-macping] test-type macping
[PE1-nqa-test-macping] destination-address mac 00e0-fca4-8ae7
[PE1-nqa-test-macping] md md1 ma ma1
[PE1-nqa-test-macping] mep mep-id 1
2. Configure a VPLS MAC trace test instance and start the test instance.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test mactrace
[PE1-nqa-test-mactrace] test-type mactrace
[PE1-nqa-test-mactrace] destination-address mac 00e0-fca4-8ae7
[PE1-nqa-test-mactrace] md md1 ma ma1
[PE1-nqa-test-mactrace] mep mep-id 1
[PE1-nqa-test-mactrace] start now
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
cfm version standard
cfm enable
#
mpls lsr-id 1.1.1.1
mpls
mpls l2vpn
#
vsi ldp1 static
pwsignal ldp
vsi-id 2
peer 2.2.2.2
peer 3.3.3.3
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi ldp1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 100.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 100.2.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
cfm md md1
ma ma1
ccm-interval 30
map vsi ldp1
mep mep-id 1 interface gigabitethernet 1/0/0.1 inward
remote-mep mep-id 2
remote-mep mep-id 3
#
nqa test-instance test macping
test-type macping
destination-address mac 00e0-fca4-8ae7
md md1 ma ma1
mep mep-id 1
#
nqa test-instance test mactrace
test-type mactrace
destination-address mac 00e0-fca4-8ae7
md md1 ma ma1
mep mep-id 1
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 100.1.1.0 0.0.0.3
network 100.2.1.0 0.0.0.3
#
return
l Configuration file of PE2
#
sysname PE2
#
cfm version standard
cfm enable
#
mpls lsr-id 2.2.2.2
mpls
mpls l2vpn
#
vsi ldp1 static
pwsignal ldp
vsi-id 2
peer 1.1.1.1
peer 3.3.3.3
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi ldp1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 100.1.1.2 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 100.3.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
cfm md md1
ma ma1
ccm-interval 30
map vsi ldp1
mep mep-id 2 interface gigabitethernet 1/0/0.1 inward
mep ccm-send mep-id 2 enable
remote-mep mep-id 1
remote-mep ccm-receive mep-id 1 enable
remote-mep mep-id 3
remote-mep ccm-receive mep-id 3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 100.1.1.0 0.0.0.3
network 100.3.1.0 0.0.0.3
#
return
l Configuration file of PE3
#
sysname PE3
#
cfm version standard
cfm enable
#
mpls lsr-id 3.3.3.3
mpls
mpls l2vpn
#
vsi ldp1 static
pwsignal ldp
vsi-id 2
peer 1.1.1.1
peer 2.2.2.2
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi ldp1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 100.2.1.2 255.255.255.252
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 100.3.1.2 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
cfm md md1
ma ma1
ccm-interval 30
map vsi ldp1
mep mep-id 3 interface gigabitethernet 1/0/0.1 inward
mep ccm-send mep-id 3 enable
remote-mep mep-id 1
remote-mep ccm-receive mep-id 1 enable
remote-mep mep-id 2
remote-mep ccm-receive mep-id 2 enable
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 100.2.1.0 0.0.0.3
network 100.3.1.0 0.0.0.3
#
return
Networking Requirements
As shown in Figure 7-43, all devices are on VLAN 10, and Router A and Router B are enabled with
GMAC ping and GMAC trace. NQA GMAC ping and NQA GMAC trace test instances are
configured on Router A, with the destination MAC address of ping and trace packets set to that of
Router B. The NQA test instances are initiated to detect the connectivity and delay between
Router A and Router B.
Figure 7-43 Networking diagram of configuring GMAC ping and GMAC trace for detecting
the connectivity of a VLAN network
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VLAN network and the service environment for starting NQA test instances.
2. Configure NQA test instances on the NQA client where GMAC ping and GMAC trace are
to be performed. Specify mandatory configurations of NQA test instances.
3. Start NQA test instances.
Data Preparation
To complete the configuration, you need the following data:
l VLAN ID
l MAC address of the remote device
Procedure
Step 1 Add Router A and Router B to VLAN 10.
# Configure Router A.
<HUAWEI> system-view
<HUAWEI> sysname RouterA
[RouterA] vlan 10
[RouterA-vlan10] quit
[RouterA] interface gigabitethernet 1/0/1
[RouterA-GigabitEthernet1/0/1] portswitch
[RouterA-GigabitEthernet1/0/1] port default vlan 10
Step 2 Configure the standard version of CFM, and enable GMAC ping and GMAC trace.
# Configure Router A.
[RouterA] cfm version standard
[RouterA] ping mac enable
[RouterA] trace mac enable
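GMAC ping and GMAC trace require the responding device to reply, so Router B needs the same global configuration. A sketch for Router B, assumed to mirror Router A's commands in the same way that PE2 mirrors PE1 in the VPLS example later in this section:

```
[RouterB] cfm version standard
[RouterB] ping mac enable
[RouterB] trace mac enable
```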
Step 3 Configure VLAN GMAC ping and VLAN GMAC trace test instances.
1. Configure a VLAN GMAC ping test instance and start the test instance.
# Configure Router A.
<RouterA> system-view
[RouterA] nqa test-instance test gmacping
[RouterA-nqa-test-gmacping] test-type gmacping
[RouterA-nqa-test-gmacping] vlan 10
[RouterA-nqa-test-gmacping] destination-address mac 00e0-fca4-8ae7
[RouterA-nqa-test-gmacping] start now
2. Configure a VLAN GMAC trace test instance and start the test instance.
# Configure Router A.
<RouterA> system-view
[RouterA] nqa test-instance test gmactrace
[RouterA-nqa-test-gmactrace] test-type gmactrace
[RouterA-nqa-test-gmactrace] vlan 10
[RouterA-nqa-test-gmactrace] destination-address mac 00e0-fca4-8ae7
[RouterA-nqa-test-gmactrace] start now
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port default vlan 10
#
cfm version standard
ping mac enable
trace mac enable
#
nqa test-instance test gmacping
test-type gmacping
vlan 10
destination-address mac 00e0-fca4-8ae7
#
nqa test-instance test gmactrace
test-type gmactrace
vlan 10
destination-address mac 00e0-fca4-8ae7
#
return
Networking Requirements
GMAC ping and GMAC trace tests can be initiated on a PE or a CE to detect the connectivity
of the VPLS network between PEs, between CEs, and between PEs and CEs.
As shown in Figure 7-44, GMAC ping and GMAC trace are enabled to detect the connectivity
of a VPLS network and locate faults on it. NQA GMAC ping and GMAC trace test instances are
configured on PE1, with the bridge MAC address of PE2 as the destination MAC address and
with the VSI name specified. Then, the test instances are started. In principle, VPLS GMAC ping
and VPLS GMAC trace operations initiated through NQA commands are the same as ping and
trace operations initiated through common command lines; in addition, they provide a scheduling
and result collection mechanism and a threshold-exceeding alarm function.
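The threshold-exceeding alarm mentioned above can be sketched with the alarm command that appears in the test-instance linkage example later in this chapter. The threshold values below are illustrative assumptions, and whether every alarm option applies to a gmacping instance depends on the device version:

```
[PE1] nqa test-instance test gmacping
[PE1-nqa-test-gmacping] alarm 10 lost-packet-ratio absolute rising-threshold 80 10 falling-threshold 10 10
```

With this setting, an alarm is raised when the packet loss ratio crosses the rising threshold and cleared when it falls back below the falling threshold.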
Figure 7-44 Networking diagram of configuring GMAC ping and GMAC trace for detecting the
connectivity of a VPLS network
The diagram shows the following interfaces and addresses:
l CE1: GE1/0/0.1, 10.1.1.1/24
l PE1: GE1/0/0.1 (to CE1); POS2/0/0, 168.1.1.1/24 (to P); Loopback1, 1.1.1.9/32
l P: POS1/0/0, 168.1.1.2/24 (to PE1); POS2/0/0, 169.1.1.1/24 (to PE2); Loopback1, 2.2.2.9/32
l PE2: POS1/0/0, 169.1.1.2/24 (to P); GE2/0/0.1 (to CE2); Loopback1, 3.3.3.9/32
l CE2: GE1/0/0.1, 10.1.1.2/24
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VPLS network and the service environment for starting NQA test instances.
Examples in this document use the Martini VPLS network.
2. Configure VPLS GMAC ping and GMAC trace test instances on PE1, and specify
mandatory configurations for test instances.
3. Start NQA test instances.
Data Preparation
To complete the configuration, you need the following data:
l Name and ID of the VSI
l Peer IP address and the tunnel policy used for setting up the peer relationship
l Interface to which the VSI is bound
l MAC address of the remote device
Procedure
Step 1 Configure a Martini VPLS network.
For configuration procedures, see the configuration files in this example. For configuration
details, refer to the chapter "VPLS Configuration" in the HUAWEI NetEngine80E/40E Router
Configuration Guide - VPN.
Step 2 Configure the standard version of CFM, and enable GMAC ping and GMAC trace.
# Configure PE1.
[PE1] cfm version standard
[PE1] ping mac enable
[PE1] trace mac enable
# Configure PE2. The configuration of PE2 is similar to that of PE1, and therefore is not provided
here.
Step 3 Configure VPLS GMAC ping and VPLS GMAC trace test instances.
1. Configure a VPLS GMAC ping test instance and start the test instance.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test gmacping
[PE1-nqa-test-gmacping] test-type gmacping
[PE1-nqa-test-gmacping] vsi a2
[PE1-nqa-test-gmacping] destination-address mac 00e0-fca4-8ae7
[PE1-nqa-test-gmacping] start now
2. Configure a VPLS GMAC trace test instance and start the test instance.
# Configure PE1.
<PE1> system-view
[PE1] nqa test-instance test gmactrace
[PE1-nqa-test-gmactrace] test-type gmactrace
[PE1-nqa-test-gmactrace] vsi a2
[PE1-nqa-test-gmactrace] destination-address mac 00e0-fca4-8ae7
[PE1-nqa-test-gmactrace] start now
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
l Configuration file of PE1
#
sysname PE1
#
mpls lsr-id 1.1.1.9
mpls
#
mpls l2vpn
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 3.3.3.9
#
mpls ldp
#
mpls ldp remote-peer 3.3.3.9
remote-ip 3.3.3.9
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 168.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
cfm version standard
ping mac enable
trace mac enable
#
nqa test-instance test gmacping
test-type gmacping
vsi a2
destination-address mac 00e0-fca4-8ae7
#
nqa test-instance test gmactrace
test-type gmactrace
vsi a2
destination-address mac 00e0-fca4-8ae7
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 168.1.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 168.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 169.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 168.1.1.0 0.0.0.255
network 169.1.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
l Configuration file of PE2
#
sysname PE2
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 1.1.1.9
#
mpls ldp
#
mpls ldp remote-peer 1.1.1.9
remote-ip 1.1.1.9
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 169.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
cfm version standard
ping mac enable
trace mac enable
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 169.1.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 7-45, a multicast domain (MD) is used on a single-AS MPLS/BGP VPN
network to deploy multicast services.
The receiver is added to the multicast group whose IP address is 225.1.1.1 to receive multicast
data sent from the source. Router A is the last hop.
The mtrace command is run on Router B to check the RPF path from the multicast source to
the destination host on a specified multicast VPN network. The command output shows
information about the nodes on the RPF path, which provides a reference for locating faulty
nodes.
The diagram shows the following addresses (the loopback-to-router mapping is inferred from the configuration files):
l Multicast source: 10.0.40.90/24 (attached to Router E); destination host: 10.0.2.10/24 (attached to Router A)
l Router E: 10.0.4.10/24
l Router A: 10.0.1.10/24
l Router B: 10.0.1.30/24 and 20.0.1.30/24; LoopBack0: 2.2.2.2
l Router C: 20.0.1.20/24 and 20.0.5.40/24
l Router D: 10.0.4.50/24 and 20.0.5.50/24; LoopBack0: 4.4.4.4
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure BGP/MPLS IP VPN to ensure that the VPN network works normally and unicast
routes are reachable.
2. Enable multicast and Protocol Independent Multicast (PIM) globally. Multicast packet
transmission between the PEs and the P occurs on the public network, whereas multicast
packet transmission between a PE and a CE occurs within VPN instances.
3. Configure identical share-group address, MTI, and switch-address-pool range of Switch-
MDT for the same VPN instance on each PE.
4. Configure the MTI address of each PE as the IBGP peer interface address on the public
network, and enable PIM on the MTI.
5. Configure Router B as an NQA client and create a Mtrace test instance on it.
6. Start the NQA Mtrace test instance and verify the test result.
Data Preparation
To complete the configuration, you need the following data:
l Source and destination IP addresses
l Name of the VPN instance
l IP address of the last-hop router
Procedure
Step 1 Configure BGP/MPLS IP VPN to ensure that the VPN network works normally and unicast
routes are reachable.
For configuration details, refer to the chapter "BGP/MPLS IP VPN Configuration" in the NE80E/
40E Configuration Guide - VPN.
Step 2 Enable multicast and PIM globally. Multicast packet transmission between the PEs and the P
occurs on the public network, whereas multicast packet transmission between a PE and a CE
occurs within VPN instances.
For configuration details, refer to the chapter "IPv4 Multicast VPN Configuration" in the NE80E/
40E Configuration Guide - IP Multicast.
Step 3 Configure identical share-group address, MTI, and switch-address-pool range of Switch-MDT
for the same VPN instance on each PE.
For configuration details, refer to the chapter "IPv4 Multicast VPN Configuration" in the NE80E/
40E Configuration Guide - IP Multicast.
Step 4 Configure the MTI address of each PE as the IBGP peer interface address on the public
network, and enable PIM on the MTI.
For configuration details, refer to the chapter "IPv4 Multicast VPN Configuration" in the NE80E/
40E Configuration Guide - IP Multicast.
Step 5 Configure an NQA Mtrace test instance on Router B.
<RouterB> system-view
[RouterB] nqa test-instance admin mtrace
[RouterB-nqa-admin-mtrace] test-type mtrace
[RouterB-nqa-admin-mtrace] mtrace-source-address ipv4 10.0.40.90
[RouterB-nqa-admin-mtrace] destination-address ipv4 10.0.2.10
[RouterB-nqa-admin-mtrace] mtrace-query-type last-hop
[RouterB-nqa-admin-mtrace] vpn-instance red
[RouterB-nqa-admin-mtrace] mtrace-last-hop-address ipv4 10.0.1.10
# Run the display nqa results command on Router B. The command output shows that the RPF
path from the multicast source to the destination host is Router E -> Router D -> Router B -> Router A.
<RouterB> display nqa results test-instance admin mtrace
NQA entry(admin, mtrace) :testflag is inactive ,testtype is mtrace
1 . Test 1 result The test is finished
Completions: success Query Mode: max-hop
Current Hop:4 Current Probe:1
SendProbe:1 ResponseProb:1
Timeout Count:0 Busy Count:0
Drop Count:0 Max Path Ttl:5
Responser:10.0.4.10 Response Rtt: 50
mtrace start time: 2008-12-31 14:16:46.2
Last Good Probe Time: 2007-2-7 17:36:18.3
Last Good Path Time: 2007-2-7 17:36:18.3
1 . Hop 1
Outgoing Interface Address: 10.0.2.20
Incoming Interface Address: 10.0.1.10
Prehop Router Address: 10.0.1.30
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:5
SG Packet Count:0xffffffff Hop Time Delay(ms):1
Input Packet Count:0 Output Packet Count:0
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
2 . Hop 2
Outgoing Interface Address: 10.0.1.30
Incoming Interface Address: 0.0.0.0
Prehop Router Address: 0.0.0.0
Protocol : Unknown Forward Code:ADMIN_PROHIB
Forward Ttl:1 Current Path Ttl:4
SG Packet Count:0xffffffff Hop Time Delay(ms):1
Input Packet Count:0 Output Packet Count:0
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
3 . Hop 3
Outgoing Interface Address: 0.0.0.0
Incoming Interface Address: 10.0.4.50
Prehop Router Address: 10.0.4.10
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:3
SG Packet Count:0xffffffff Hop Time Delay(ms):1
Input Packet Count:0 Output Packet Count:313
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
4 . Hop 4
Outgoing Interface Address: 10.0.4.10
Incoming Interface Address: 10.0.40.80
Prehop Router Address: 0.0.0.0
Protocol : PIM Forward Code:NO_ERROR
Forward Ttl:1 Current Path Ttl:2
SG Packet Count:0xffffffff Hop Time Delay(ms):0xffffffff
Input Packet Count:0 Output Packet Count:0
Input Rate(pps): 0xffffffff Output Rate(pps): 0xffffffff
Input Loss Rate: 0xffffffff SG Loss Rate: 0xffffffff
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
router id 1.1.1.1
#
multicast routing-enable
#
interface Ethernet0/0/0
undo shutdown
ip address 10.0.1.10 255.255.255.0
pim sm
igmp static-group 225.1.1.1
#
interface Ethernet0/0/2
undo shutdown
ipv4-family unicast
undo synchronization
peer vpn-g enable
peer 4.4.4.4 enable
peer 4.4.4.4 group vpn-g
#
ipv4-family vpnv4
policy vpn-target
peer vpn-g enable
peer 4.4.4.4 enable
peer 4.4.4.4 group vpn-g
#
ipv4-family vpn-instance red
import-route direct
import-route rip 2
#
ospf 1
area 0.0.0.0
network 10.0.1.0 0.0.0.255
network 20.0.1.0 0.0.0.255
network 10.0.3.0 0.0.0.255
network 2.2.2.2 0.0.0.0
#
rip 2 vpn-instance red
network 10.0.0.0
network 20.0.0.0
import-route direct
import-route bgp cost 3
#
igmp
#
nqa test-instance admin mtrace
test-type mtrace
destination-address ipv4 10.0.2.10
mtrace-last-hop-address ipv4 10.0.2.20
mtrace-source-address ipv4 10.0.40.90
mtrace-query-type last-hop
vpn-instance red
#
return
l Configuration file of Router C
#
sysname RouterC
#
router id 3.3.3.3
#
multicast routing-enable
#
mpls lsr-id 3.3.3.3
mpls
#
mpls ldp
#
interface Ethernet0/0/0
undo shutdown
ip address 20.0.1.20 255.255.255.0
pim sm
mpls
mpls ldp
#
interface Ethernet0/0/1
undo shutdown
ip address 20.0.5.40 255.255.255.0
pim sm
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 20.0.1.0 0.0.0.255
network 20.0.5.0 0.0.0.255
network 3.3.3.3 0.0.0.0
#
igmp
#
pim
c-bsr LoopBack1
c-rp LoopBack1
#
return
l Configuration file of Router D
#
sysname RouterD
#
router id 4.4.4.4
#
multicast routing-enable
#
ip vpn-instance red
ipv4-family
route-distinguisher 100:1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
multicast routing-enable
multicast-domain share-group 239.1.1.1 binding mtunnel 0
multicast-domain switch-group-pool 225.2.2.0 255.255.255.240
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
#
interface Ethernet0/0/1
undo shutdown
ip address 20.0.5.50 255.255.255.0
pim sm
mpls
mpls ldp
#
interface Ethernet0/0/2
undo shutdown
ip binding vpn-instance red
ip address 10.0.4.50 255.255.255.0
pim sm
igmp enable
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
pim sm
#
interface MTunnel0
ip binding vpn-instance red
ip address 4.4.4.4 255.255.255.255
pim sm
#
bgp 100
group vpn-g internal
peer vpn-g connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 group vpn-g
#
ipv4-family unicast
undo synchronization
peer vpn-g enable
Networking Requirements
As shown in Figure 7-46, an NQA jitter test instance is required to monitor the packet loss ratio
between Router A and Router B. If the packet loss ratio in the test result exceeds the threshold,
the linked test instance is triggered to test whether Router B is reachable.
Figure 7-46 Networking diagram of configuring the NQA alarm threshold and test instance
linkage
Router A (Ethernet1/0/0, 11.1.1.1/24) connects directly to Router B (Ethernet1/0/0, 11.1.1.2/24).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client and Router B as the NQA server.
2. On the NQA client, set the type of the linked test instance to ICMP.
3. On the NQA client, specify the event that triggers test instance linkage.
4. On the NQA client, create an ICMP jitter test instance as the primary test instance.
5. On the NQA client, configure the alarm threshold.
6. On the NQA client, start the primary test instance.
Data Preparation
To complete the configuration, you need the following data:
l Index of the linking test instance
l Number of the event associated with the threshold
l Number of the alarm threshold
l Upper threshold and lower threshold
Procedure
Step 1 Enable the NQA client and create an NQA ICMP test instance.
<RouterA> system-view
[RouterA] nqa test-instance admin icmp
[RouterA-nqa-admin-icmp] test-type icmp
[RouterA-nqa-admin-icmp] destination-address ipv4 11.1.1.2
[RouterA-nqa-admin-icmp] quit
Step 2 On Router A, specify the ICMP test instance admin icmp created in Step 1 as the linked test
instance, and configure the event that triggers test instance linkage.
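The remaining roadmap steps are not shown in this procedure. Based on the commands that appear in the configuration file below, they might look like the following sketch; the start now command and the exact prompt names are assumptions:

```
[RouterA] nqa event 10 linkage admin icmp
[RouterA] nqa test-instance admin icmpjitter
[RouterA-nqa-admin-icmpjitter] test-type icmpjitter
[RouterA-nqa-admin-icmpjitter] destination-address ipv4 11.1.1.2
[RouterA-nqa-admin-icmpjitter] alarm 10 lost-packet-ratio absolute rising-threshold 80 10 falling-threshold 10 10
[RouterA-nqa-admin-icmpjitter] start now
```

Here event 10 ties the alarm to the linked instance: when the packet loss ratio of the jitter test crosses the rising threshold of 80, event 10 fires and the linked ICMP test checks whether Router B is still reachable.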
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
interface Ethernet1/0/0
ip address 11.1.1.1 255.255.255.0
#
nqa test-instance admin icmp
test-type icmp
destination-address ipv4 11.1.1.2
nqa test-instance admin icmpjitter
test-type icmpjitter
destination-address ipv4 11.1.1.2
nqa event 10 linkage admin icmp
nqa test-instance admin icmpjitter
alarm 10 lost-packet-ratio absolute rising-threshold 80 10 falling-threshold 10 10
#
return
7.47.47 Example for Configuring an LSP Trace Test to Check a
CR-LSP Hot Standby Tunnel
This part provides an example of configuring an LSP trace test to detect faults on CR-LSP
hot standby tunnels.
Networking Requirements
In the MPLS VPN shown in Figure 7-47, a TE tunnel with Router C as the egress is set
up on Router A, and CR-LSP hot standby is configured on the TE tunnel.
l OSPF is configured on Router A, Router B, Router C, and Router D to enable them to learn
the 32-bit host addresses of the loopback interfaces from each other.
l MPLS, MPLS TE, and MPLS RSVP-TE are enabled on Router A, Router B, Router C, and
Router D.
l MPLS, MPLS TE, and MPLS RSVP-TE are enabled on the POS interfaces connected to
Router A, Router B, and Router C. Then, a TE tunnel is set up from Router A to Router
C.
If the hot standby CR-LSP is faulty, it cannot carry the traffic switched from the primary
CR-LSP. Therefore, the hot standby CR-LSP needs to be monitored. An NQA LSP Trace test can
detect the connectivity and performance of the hot standby CR-LSP in real time, which helps
detect and locate faults on the hot standby CR-LSP.
(The figure shows Router D with GE0/0/1 at 30.1.1.2/24 and GE0/0/2 at 40.1.1.1/24.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Router A as the NQA client and create an LSP Trace test instance on Router A.
2. Configure Router C as the NQA server.
Data Preparation
To complete the configuration, you need the following data:
l TE tunnel interface number
Procedure
Step 1 Configure routes among Router A, Router B, and Router C.
For detailed configuration, see the configuration files in this example.
Step 2 Configure MPLS RSVP-TE on Router A, Router B, Router C, and Router D.
For detailed configuration, see the configuration files in this example.
Step 3 On Router A, set up a TE tunnel to Router C.
For detailed configuration, see the configuration files in this example.
Step 4 Configure an NQA test instance on Router A.
# Enable the NQA client and create an LSP Trace test instance for checking the TE tunnel.
<RouterA> system-view
[RouterA] nqa test-instance admin lsptrace
[RouterA-nqa-admin-lsptrace] test-type lsptrace
[RouterA-nqa-admin-lsptrace] lsp-type te
[RouterA-nqa-admin-lsptrace] lsp-tetunnel tunnel 1/0/0 hot-standby
After the test instance is started, the test results include the following (partial output):
Destination ip address:3.3.3.3
Lost packet ratio: 0 %
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
mpls lsr-id 1.1.1.1
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup
next hop 30.1.1.2
next hop 40.1.1.2
next hop 3.3.3.3
#
explicit-path main
next hop 10.1.1.2
next hop 20.1.1.2
next hop 3.3.3.3
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 50000
mpls rsvp-te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 30.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1/0/0
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
mpls te record-route
mpls te path explicit-path main
mpls te path explicit-path backup secondary
mpls te backup hot-standby wtr 15
mpls te backup ordinary best-effort
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
network 30.1.1.0 0.0.0.255
mpls-te enable
#
nqa test-instance admin lsptrace
test-type lsptrace
lsp-type te
area 0.0.0.0
network 20.1.1.0 0.0.0.255
network 40.1.1.0 0.0.0.255
network 3.3.3.3 0.0.0.0
mpls-te enable
#
return
8 NetStream Configuration
NetStream is a technology that samples network traffic, collects statistics on it, and exports the
statistics. It collects statistics about different types of communication traffic and resource
usage, which helps carriers manage various types of services.
Context
NOTE
flows to the NDA for analysis. This can reduce the volume of network traffic generated for
sending statistics information and avoid unnecessary information.
8.7 Configuring NetStream Multi-Address Output
You can enable specific traffic statistics to be output to a specified server through the NetStream
multi-address output function.
8.8 Maintaining NetStream
This section describes how to maintain NetStream, including clearing the NetStream statistics.
8.9 Configuration Examples
You can understand the configuration procedures through the configuration flowchart. This
section describes the networking requirements, configuration roadmap, and configuration notes.
The NetStream function conforms to IETF RFC3954. For security risks, see IETF RFC3954.
This function involves analyzing the communications information of terminal customers. Before
enabling the function, ensure that it is performed within the boundaries permitted by applicable
laws and regulations. Effective measures must be taken to ensure that information is securely
protected.
The rapid development of the Internet requires more bandwidth resources and finer-grained
network monitoring and management. A technology that satisfies these demands is therefore
urgently needed.
NetStream is a technology based on network traffic statistics. It collects statistics on traffic
flows and resource usage in the network, and monitors and manages the network based on types
of services and resources. NetStream provides the following functions:
l Accounting
NetStream provides detailed statistics for resource-based accounting (for example, based on
links, bandwidth, or time periods). Statistics such as IP addresses, numbers of packets
and bytes, transmission time, the Type of Service (ToS) field, and application types are
collected. Based on these statistics, the ISP can charge users flexibly by time
period, bandwidth, application, or Quality of Service (QoS), and enterprises can calculate their
expenses or allocate costs to make better use of resources.
l Network planning and analyzing
NetStream provides key information for advanced network management tools to optimize
network design and planning, achieving the best network performance and reliability at the
minimum network operation cost.
l Network monitoring
NetStream enables real-time network monitoring. Together with Remote Monitoring (RMON),
RMON-2, and flow-based analysis technologies, it visually displays the traffic pattern on a single
router or on routers across the network. This provides a basis for fault pre-detection and
effective fault rectification.
l Application monitoring and analyzing
NetStream provides detailed application statistics on the network. For example, the network
administrator can view the proportions of Web, File Transfer Protocol (FTP), Telnet,
and other TCP/IP applications in network traffic. Based on these application statistics, the ISP
can properly plan and allocate network application resources to meet users'
requirements.
l Abnormal traffic detecting
NetStream detects abnormal traffic, such as network attack traffic of various types, in
real time. It helps ensure network security by reporting alarms to the NMS
and cooperating with other devices.
NetStream devices involve the following:
l NDE
l NSC
l NDA
Figure 8-1 shows the relationships between the preceding NetStream devices.
(The figure shows Router A and Router B each exporting to an NSC, with the NSCs feeding the NDA.)
The NetStream Data Exporter (NDE) samples packets and exports the statistics data to the
NetStream Collector (NSC). The NSC is responsible for analyzing and collecting the statistics
data from the NDE. The NetStream Data Analyzer (NDA) analyzes the statistics data and then
provides the basis for various services, such as network accounting, network planning, network
monitoring, application monitoring, and analysis.
The router can serve as an NDE to sample packets, aggregate flows, and export flows.
Sampling Mode
The router supports the following sampling modes: interval sampling of fixed packets, interval
sampling of random packets, interval sampling at a fixed time, and interval sampling at random
times. You can configure sampling mode and sampling ratio in the system view, interface view,
and Access Control List (ACL) view.
l Interval sampling of random packets
Indicates that the sampling interval is the configured number of packets (random-packets),
and the sampled packet is chosen at random within each interval. That is, on an interface
configured with NetStream, one packet is sampled at random from each group of packets
defined by the sampling ratio.
l Interval sampling of fixed packets
Indicates that the sampling interval is the configured number of packets (fix-packets). That
is, every Nth packet within the interval is sampled.
l Interval sampling at random times
Indicates that one packet is sampled within each configured time interval (random-time), at
a random point in the interval.
l Interval sampling at a fixed time
Indicates that one packet is sampled every fixed time interval (fix-time).
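As a minimal sketch, the fixed-packet sampling mode can be configured in the interface view; the fix-packets form below also appears in the configuration output later in this chapter, while the interface name here is illustrative:

```
[HUAWEI] interface GigabitEthernet1/0/0
[HUAWEI-GigabitEthernet1/0/0] ip netstream sampler fix-packets 100 inbound
```

With a sampling ratio of 100, one packet is sampled from every 100 incoming packets on the interface.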
Aggregation of Statistics
The router supports the following types of aggregation mode:
l as
l as-tos
l bgp-nexthop-tos
l destination-prefix
l destination-prefix-tos
l index-tos
l mpls-label
l prefix
l prefix-tos
l protocol-port
l protocol-port-tos
l source-index-tos
l source-prefix
l source-prefix-tos
l vlan-id
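A minimal sketch of enabling one aggregation mode, using vlan-id as in the configuration output later in this chapter (the aggregation view prompt shown here is an assumption):

```
[HUAWEI] ip netstream aggregation vlan-id
[HUAWEI-aggregation-vlanid] enable
```

Each aggregation mode is enabled in its own aggregation view; statistics are then aggregated and exported according to that mode's key fields.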
Processing Mode
The router supports the following processing modes for NetStream services:
l Distributed mode
A single LPU supports the complete NetStream function: it samples packets, aggregates
flows, and exports flows independently.
l Integrated mode
Some LPUs cannot process NetStream services. They only sample packets and send the
sampled packets to the NetStream Service Processing Unit (SPU), which performs traffic
aggregation and traffic export.
NOTE
Aging Mode
The router supports the following aging modes of NetStream: aging for timeout of the active
time, aging for timeout of the inactive time, aging for disconnection of the TCP connection,
aging for count overflow, and forced aging.
l Aging for count overflow
Each flow in the cache records the number of bytes that have passed. When the system
detects that the number of bytes in a flow exceeds the specified upper limit, the counter
overflows, and the system immediately ages the flow.
l Forced aging
The flows in the cache are aged by running commands.
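A sketch of adjusting the aging timers is shown below; the command names ip netstream timeout active and ip netstream timeout inactive and their units (minutes and seconds, respectively) are assumptions not confirmed by this section:

```
<HUAWEI> system-view
[HUAWEI] ip netstream timeout active 5
[HUAWEI] ip netstream timeout inactive 30
```

Shorter timers export statistics sooner at the cost of more export packets; longer timers reduce export traffic but delay the statistics.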
Applicable Environment
As shown in Figure 8-2, carriers can obtain statistics by deploying NetStream to analyze service
modes on a network and implement flexible charging policies for users. By analyzing the
bidirectional traffic statistics at the user side, carriers can learn about users' traffic in
detail and then carry out effective network management. This helps ensure network security and
monitor the network.
(The figure shows the headquarters attached to PE3; statistics of inbound and outbound LAN traffic are exported to the NSC&NDA.)
Pre-configuration Tasks
Before configuring traffic statistics on an IPv4 network, complete the following tasks:
Data Preparation
To configure the traffic statistics, you need the following data.
No. Data
Context
l AS domain mode: According to the protocol, the AS field in IP packets is 16 bits long, but
the AS domain mode on some networks is 32 bits. Thus, you need to switch the AS domain mode
when configuring NetStream. Otherwise, NetStream cannot sample the traffic information
between AS domains.
CAUTION
On the network where the 32-bit AS domain mode is applied, the NMS must identify the
value of the 32-bit AS domain. Otherwise, the NMS cannot identify the traffic information
about the domains sent from the device.
l Interface index: The NDA obtains the interface name from the interface index carried in the
received NetStream data. The interface index can be 16 or 32 bits long, and NDAs from
different manufacturers may adopt different interface index types. The interface
index type of the NDE must be set in accordance with that of the NDA. For example, if the NMS
supports the 32-bit interface index, you can switch the default 16-bit interface index to the
32-bit interface index.
Before switching AS domain mode or interface index value of the NDE, complete the following
tasks:
l Setting the output version of the NetStream original traffic as v9
Procedure
Step 1 Run:
system-view
Step 2 Run:
ip netstream as-mode { 16 | 32 }
The switchover of the AS domain mode between 16 bits and 32 bits is enabled. By default,
NetStream supports the 16-bit AS domain mode.
Step 3 Run:
ip netstream export index-switch { 16 | 32 }
The switchover of the interface index between 16 bits and 32 bits is enabled. By default,
NetStream supports the 16-bit interface index mode.
----End
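Putting the steps above together, a minimal sketch follows; the ip netstream export version 9 command is an assumption for the v9 output prerequisite stated earlier:

```
<HUAWEI> system-view
[HUAWEI] ip netstream export version 9
[HUAWEI] ip netstream as-mode 32
[HUAWEI] ip netstream export index-switch 32
```

After this, exported flow records carry 32-bit AS numbers and 32-bit interface indexes, so the NMS must be able to parse both.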
Context
The router provides the following processing modes for NetStream services:
l Distributed mode
l Integrated mode
NOTE
l If the LPU supports the distributed mode, you can choose the distributed mode to process NetStream
services.
l If the LPU cannot process NetStream services, you can configure the processing mode as integrated
mode for NetStream services. In this case, after sampling the packets, the LPU sends the sampled
packets to the NetStream SPU to perform the traffic aggregation and traffic export in integrated mode.
Procedure
l Configuring the processing mode for NetStream services as distributed.
1. Run:
system-view
2. Run:
slot slot-id
The slot view of the LPU on which NetStream sampling is to be performed is
displayed.
3. Run:
ip netstream sampler to slot self
The processing mode for NetStream sampling is configured as distributed.
l Configuring the processing mode for NetStream services as integrated.
1. (Optional) Run:
set board-type slot slot-id netstream netstream
NOTE
You can run the display device slot-id command to check whether the current SPUC works
in NetStream mode.
2. Run:
system-view
3. Run:
slot slot-id
The slot view of the LPU on which NetStream sampling is to be performed is
displayed.
4. Run the following commands as required.
– Run:
ip netstream sampler to slot slot-id1
The processing mode for NetStream sampling is configured as integrated and the
NetStream SPUC used to process traffic sampling is specified.
– Run:
ip netstream sampler to slot slot-id2 backup
The processing mode for NetStream sampling is configured as integrated and the
backup SPUC used to process traffic sampling is specified.
----End
Context
After NetStream is enabled on an interface, NetStream statistics are collected by default for
the following types of packets:
l Unicast packets
l Multicast packets
l Fragmented packets
NOTE
After NetStream is enabled on interfaces bound to a Virtual Private Network (VPN) instance,
NetStream statistics can be collected for all the packets in the VPN instance.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
NOTE
QinQ sub-interfaces do not support NetStream.
Step 3 Run:
ip netstream { inbound | outbound }
NOTE
l To collect BGP next hop statistics, enable NetStream on the interfaces and BGP next hop
statistics collection at the same time.
l Version 5 does not support the BGP next hop.
By default, incoming and outgoing NetStream statistics collection for unicast packets is disabled.
NOTE
The router supports statistics of sampled packets on a maximum of 128 interfaces.
----End
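The steps above can be sketched as follows; the interface name is illustrative:

```
<HUAWEI> system-view
[HUAWEI] interface GigabitEthernet1/0/0
[HUAWEI-GigabitEthernet1/0/0] ip netstream inbound
[HUAWEI-GigabitEthernet1/0/0] ip netstream outbound
```

Enable the inbound direction, the outbound direction, or both, depending on which traffic you need statistics for.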
Context
Do as follows on the router where the NetStream traffic statistics is to be performed:
To detect TCP flood attacks, you can configure TCP-flag statistics for original flows to obtain
TCP-flag information for analysis.
Procedure
Step 1 Run:
system-view
----End
Context
To enable the NSC to receive and process the statistics accurately, configure an appropriate
template for the NSC. After you configure the refresh parameters of the template, the NSC can
stay synchronized with the system.
The option template includes NetStream configuration information. export-stats and
sampler represent the system option and the interface option, respectively. Once the refresh
parameters of the option template are configured, the collection of system option statistics or
interface option statistics is enabled.
If export-stats is specified, system option statistics are collected. If sampler is specified,
interface option statistics are collected.
Do as follows on the router where the NetStream traffic statistics is to be performed:
Procedure
Step 1 Run:
system-view
The interval for refreshing the template that is used to export original flow statistics in Version
9 format is configured.
----End
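A sketch of the template refresh configuration is shown below; the command name ip netstream export template timeout-rate and the example value are assumptions based on the description above, as the command line itself is not reproduced in this section:

```
<HUAWEI> system-view
[HUAWEI] ip netstream export template timeout-rate 30
```

A smaller refresh interval lets the NSC resynchronize with the template sooner after a restart, at the cost of slightly more export traffic.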
Context
Do as follows on the router where the NetStream traffic statistics is to be performed:
Procedure
Step 1 Run:
system-view
Step 2 Run:
ip netstream export host ip-address port-number
The destination address and port number of the exported NetStream packets are specified.
Two destination addresses can be configured in the system view.
Step 3 Run:
ip netstream export source ip-address
NOTE
l After the processing mode for NetStream sampling is configured, the router can send the NetStream
packets to the destination address of the configured statistics export.
l If the router is configured with two destination addresses, you need to delete the original destination
addresses before modifying the destination addresses to which NetStream packets are exported.
----End
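The export configuration can be sketched with the addresses used in the verification example later in this chapter:

```
<HUAWEI> system-view
[HUAWEI] ip netstream export host 1.1.1.1 6000
[HUAWEI] ip netstream export host 2.2.2.2 6000
[HUAWEI] ip netstream export source 3.3.3.3
```

The two host commands configure the two permitted destination addresses, and the source command sets the source address carried in exported NetStream packets.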
Context
The router provides the following NetStream sampling methods:
l Configuring NetStream common sampling
l Configuring NetStream ACL sampling
Procedure
l Configuring NetStream common sampling
Do as follows on the routers:
1. Run:
system-view
NOTE
During the configuration of the NetStream ACL sampling, for the procedure for configuring class-based
QoS, refer to the Configuration Guide - QoS. For related commands, refer to the Command Reference.
1. Run:
system-view
The classifier view is exited.
5. Run:
traffic behavior behavior-name
----End
Context
Run the following commands to check the previous configuration.
Procedure
l Run the display device slot-id command to check whether the service mode of the SPU is
NetStream.
l Run the display ip netstream cache origin slot slot-id [ summary ] command to check
information about NetStream flows in the cache.
l Run the display ip netstream statistics slot slot-id command to check the statistics of exported
NetStream packets.
l Run the display ip netstream statistics interface interface-type interface-number
command to view the statistics of sampled packets on an interface.
l Run the display netstream global command to check the NetStream configurations in the
system view and the aggregation view.
l Run the display netstream all command to check the NetStream configurations in all the
views.
----End
Example
If the service mode of the SPU is NetStream, run the display device 3 command; the command
output shows that the type of the SPU on the router is NetStream.
<HUAWEI> display device 3
SPU3's detail information:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Description: Service Processing Unit - Netstream
Board status: Normal
Register: Registered
Uptime: 2009/01/19 12:03:21
CPU Utilization(%): 5%
Mem Usage(%): 39%
Clock information:
State item State
Current syn-clock: 18
Current line-clock: 23
Syn-clock state: Locked VCXO_OK REF_OK
Syn-clock 17 state: Actived
Syn-clock 18 state: Actived
Line-clock 23 state: Inactived
Line-clock 24 state: Inactived
Statistic information:
Statistic item Statistic number
SERDES interface link lost: 0
Mpu switchs: 0
Syn-clock switchs: 0
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If the configuration is successful, run the display ip netstream cache origin slot 3 command,
and you can view various statistics of the IP packets collected in the router's NetStream cache.
<HUAWEI> display ip netstream cache origin slot 3
Start to show information of IP and MPLS from cache of slot 3.
Getting user data from cache success.
(Flow records follow, listing the inbound and outbound interfaces, IPv6 source and destination
addresses, protocol, ToS, and packet and byte counts of each flow.)
Application Environment
By applying NetStream, you can collect statistics on IPv6 unicast traffic. This provides
references for carriers to monitor the operating status of the IPv6 unicast network.
Pre-configuration Tasks
Before collecting the statistics of the IPv6 unicast traffic, complete the following tasks:
Data Preparation
To collect the statistics of the IPv6 unicast traffic, you need the following data.
No. Data
1 Name and number of the interface on which the traffic statistics need to be collected
2 Number of the version in which the traffic collected through NetStream is output
Context
Do as follows on the router on which the statistics of the IPv6 unicast traffic need to be collected.
Procedure
Step 1 Run:
system-view
NOTE
The router supports statistics of sampled packets on a maximum of 128 interfaces.
----End
Context
Do as follows on the router on which the statistics of the IPv6 unicast traffic need to be collected.
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router on which the statistics of the IPv6 unicast traffic need to be collected.
Procedure
Step 1 Run:
system-view
NOTE
----End
Prerequisites
The configurations of the Collecting the Statistics of IPv6 Unicast Traffic function are complete.
Procedure
l Run the display ip netstream cache origin slot slot-id [ summary ] command to check
information about the traffic in the cache.
----End
Example
If the data stream is collected, run the display ip netstream cache origin slot slot-id
[ summary ] command, and you can view information about the traffic in the cache.
<HUAWEI> display ip netstream cache origin slot 3
Start to show information of IP and MPLS from cache of slot 3.
Getting user data from cache success.
(Flow records follow, listing the inbound and outbound interfaces, IPv6 source and destination
addresses, protocol, ToS, and packet and byte counts of each flow.)
NetStream can be deployed on a VPN network to collect the statistics about traffic of different
types and resource consumption on the VPN network.
Applicable Environment
If NetStream is deployed on a VPN network, accurate statistics about VPN traffic can be
collected.
l As shown in Figure 8-3, you can enable NetStream on the user-side interfaces of PEs to
charge users for VPN traffic, and enable NetStream on the network-side interfaces of the
PEs and the P router to collect statistics about MPLS traffic and analyze the operation of
MPLS services.
(The figure shows CEs attached to PEs across an MPLS core, with statistics exported to the NSC&NDA.)
l To collect more accurate statistics about VPN traffic, you can enable NetStream to collect
the flow and TAL information in MPLS packets. On the VPN network as shown in Figure
8-4:
– NetStream is enabled on PE2 to collect and export the statistics about MPLS TAL
information to the NSC&NDA.
– NetStream is enabled on P to collect and export the statistics about incoming and
outgoing MPLS packets to the NSC&NDA.
– Traffic statistics are analyzed on the NSC&NDA to measure the user traffic between
PEs.
Figure 8-4 Networking diagram of collecting accurate statistics about VPN traffic
NSC&NDA
Stream:FEC+Label+IP
Stream:TAL
PE1 P PE2
CE1 CE2
VPN networks that can be enabled with NetStream to collect the statistics about traffic
between PEs are as follows:
– BGP/MPLS IP VPN network
– MVPN network
– Martini VLL network
– CCC VLL network
– SVC VLL network
– Kompella VLL network
– Dynamic SH PWE3 network
– Martini VPLS network
– Kompella VPLS network
Pre-configuration Tasks
Before enabling NetStream on a VPN network, complete the following tasks:
l Configuring parameters of the link layer protocol and IP addresses for interfaces to ensure
that the link layer protocol on the interfaces is Up
l Configuring the static route or enabling IGP to ensure that IP routes between nodes are
reachable
l Enabling basic VPN capabilities
Data Preparation
To enable NetStream on a VPN network, you need the following data.
No. Data
Context
l AS domain mode: According to the protocol, the AS field in IP packets is 16 bits long, but
the AS domain mode on some networks is 32 bits. Thus, you need to switch the AS domain mode
when configuring NetStream. Otherwise, NetStream cannot sample the traffic information
between AS domains.
CAUTION
On the network where the 32-bit AS domain mode is applied, the NMS must identify the
value of the 32-bit AS domain. Otherwise, the NMS cannot identify the traffic information
about the domains sent from the device.
l Interface index: The NDA obtains the interface name from the interface index carried in the
received NetStream data. The interface index can be 16 or 32 bits long, and NDAs from
different manufacturers may adopt different interface index types. The interface
index type of the NDE must be set in accordance with that of the NDA. For example, if the NMS
supports the 32-bit interface index, you can switch the default 16-bit interface index to the
32-bit interface index.
Before switching AS domain mode or interface index value of the NDE, complete the following
tasks:
l Setting the output version of the NetStream original traffic as v9
l Setting the output version of the NetStream aggregation traffic as v9
Procedure
Step 1 Run:
system-view
Step 3 Run:
ip netstream export index-switch { 16 | 32 }
The switchover of the interface index between 16 bits and 32 bits is enabled. By default,
NetStream supports the 16-bit interface index mode.
----End
Context
The router provides the following processing modes for NetStream services:
l Distributed mode
l Integrated mode
NOTE
l If the LPU supports the distributed mode, you can choose the distributed mode to process NetStream
services.
l If the LPU cannot process NetStream services, you can configure the processing mode as integrated
mode for NetStream services. In this case, after sampling the packets, the LPU sends the sampled
packets to the NetStream SPU to perform the traffic aggregation and traffic export in integrated mode.
Procedure
l Configuring the processing mode for NetStream services as distributed.
1. Run:
system-view
2. Run:
slot slot-id
The slot view of the LPU on which NetStream sampling is to be performed is
displayed.
3. Run:
ip netstream sampler to slot self
The processing mode for NetStream sampling is configured as distributed.
l Configuring the processing mode for NetStream services as integrated.
1. (Optional) Run:
set board-type slot slot-id netstream netstream
NOTE
You can run the display device slot-id command to check whether the current SPUC works
in NetStream mode.
2. Run:
system-view
3. Run:
slot slot-id
The slot view of the LPU on which NetStream sampling is to be performed is
displayed.
4. Run the following commands as required.
– Run:
ip netstream sampler to slot slot-id1
The processing mode for NetStream sampling is configured as integrated and the
NetStream SPUC used to process traffic sampling is specified.
– Run:
ip netstream sampler to slot slot-id2 backup
The processing mode for NetStream sampling is configured as integrated and the
backup SPUC used to process traffic sampling is specified.
----End
Context
Do as follows on the router where the traffic statistics needs to be exported.
Procedure
Step 1 Run:
system-view
NOTE
You can configure two destination addresses in the system view or the aggregation view.
----End
Context
Do as follows on the router where the statistics about MPLS TAL information needs to be
collected.
Procedure
Step 1 Run:
system-view
Configure the router to export the statistics about MPLS TAL information to the NSC&NDA.
NOTE
Only NetStream packets exported to the NSC&NDA in V9 format carry MPLS TAL information.
----End
Context
Do as follows on the router where the export format needs to be configured for NetStream
packets:
Procedure
Step 1 Run:
system-view
The exported NetStream packets are all based on the User Datagram Protocol (UDP). Each
exported NetStream packet consists of a packet header and the records of one or more flows.
The original flow can be output in Version 5 or Version 9, and the aggregated flow can be output
in Version 8 or Version 9.
Different from earlier versions, Version 9 is template-based. It exports statistics more
flexibly, supports new elements of newly defined flows, and generates new records easily.
Version 9 is not compatible with Version 5 or Version 8.
----End
Context
Do as follows on the router where the statistics about MPLS traffic needs to be collected:
Procedure
Step 1 Run:
system-view
NOTE
The router supports statistics of sampled packets on a maximum of 128 interfaces.
----End
l Run the display ip netstream cache vlan-id slot slot-id command to display information
about the NetStream traffic aggregated based on VLAN ID in the cache.
l Run the display ip netstream statistics slot slot-id command to display the NetStream
statistics.
----End
Example
Run the display ip netstream all command to display NetStream configurations in all views.
<HUAWEI> display ip netstream all
system
ip netstream aggregation vlan-id
enable
export version 9
ip netstream export source 3.3.3.3
ip netstream export host 1.1.1.1 6000
ip netstream export host 2.2.2.2 6000
slot 3
GigabitEthernet3/0/0.1
ip netstream inbound
ip netstream sampler fix-packets 100 inbound
slot
slot 3:ip netstream sampler to slot self
Run the display ip netstream cache vlan-id slot slot-id command to display information about
the NetStream traffic aggregated based on VLAN ID in the cache.
<HUAWEI> display ip netstream cache vlan-id slot 3
Start to show information of IP and MPLS from cache of slot 3.
Getting user data from cache success.
Run the display ip netstream statistic slot slot-id command to display the NetStream statistics.
[HUAWEI] display ip netstream statistic slot 3
Netstream statistic information on slot 3:
--------------------------------------------------------------------------------
length of packets Number Protocol Number
--------------------------------------------------------------------------------
1 ~ 64 : 0 IPV4 : 3052043
65 ~ 128 : 30000000 IPV6 : 0
129 ~ 256 : 1495697 MPLS : 0
257 ~ 512 : 0 L2 : 28443654
513 ~ 1024 : 0 Total : 31495697
1025 ~ 1500 : 0
longer than 1500 : 0
--------------------------------------------------------------------------------
Aggregation  Current Streams  Aged Streams  Created Streams  Exported Packets  Exported Streams
origin       1                487           488              116               ---
as           0                0             0                0                 0
as-tos       0                0             0                0                 0
protport     0                0             0                0                 0
protporttos  0                0             0                0                 0
srcprefix    0                0             0                0                 0
srcpretos    0                0             0                0                 0
dstprefix    0                0             0                0                 0
dstpretos    0                0             0                0                 0
prefix       0                0             0                0                 0
prefix-tos   0                0             0                0                 0
mpls-label   0                0             0                0                 0
vlan-id      0                0             0                0                 0
all-aggre    3276649          210           23               0                 0
--------------------------------------------------------------------------------
Velocity Of Creating Streams (streams/second): 102
--------------------------------------------------------------------------------
srcprefix = source-prefix, srcpretos = source-prefix-tos,
dstprefix = destination-prefix, dstpretos = destination-prefix-tos,
protport = protocol-port, protporttos = protocol-port-tos,
all-aggre = all aggregation streams,
"---" means that the current board is not supported.
Applicable Environment
As shown in Figure 8-5, on the MPLS backbone network, carriers can implement traffic
charging on the MPLS network by deploying NetStream on the interface at the user side of the
PEs. After NetStream is deployed at the network side of the PEs and on the P router, users can
accurately analyze MPLS service modes by implementing MPLS traffic statistics.
Figure 8-5 Networking diagram of configuring traffic statistics on the MPLS network
(The figure shows CEs attached to PEs across an MPLS core, with statistics exported to the NSC&NDA.)
Pre-configuration Tasks
Before configuring the traffic statistics, complete the following tasks:
Data Preparation
To configure the traffic statistics, you need the following data.
No. Data
Context
l AS domain mode: According to the protocol, the AS field in IP packets is 16 bits long, but
the AS domain mode on some networks is 32 bits. Thus, you need to switch the AS domain mode
when configuring NetStream. Otherwise, NetStream cannot sample the traffic information
between AS domains.
CAUTION
On the network where the 32-bit AS domain mode is applied, the NMS must identify the
value of the 32-bit AS domain. Otherwise, the NMS cannot identify the traffic information
about the domains sent from the device.
l Interface index: The NDA obtains the interface name from the interface index carried in the
received NetStream data. The interface index can be 16 or 32 bits long, and NDAs from
different manufacturers may adopt different interface index types. The interface
index type of the NDE must be set in accordance with that of the NDA. For example, if the NMS
supports the 32-bit interface index, you can switch the default 16-bit interface index to the
32-bit interface index.
Before switching AS domain mode or interface index value of the NDE, complete the following
tasks:
l Setting the output version of the NetStream original traffic as v9
l Setting the output version of the NetStream aggregation traffic as v9
Procedure
Step 1 Run:
system-view
Step 2 Run:
ip netstream as-mode { 16 | 32 }
The AS domain mode is switched between 16 bits and 32 bits. By default, NetStream uses
the 16-bit AS domain mode.
Step 3 Run:
ip netstream export index-switch { 16 | 32 }
The interface index is switched between 16 bits and 32 bits. By default, NetStream uses the
16-bit interface index mode.
----End
Context
For detailed configuration procedure, see "Configuring Processing Mode for NetStream
Services."
Context
For detailed configuration procedure, see "Enabling NetStream on an Interface."
Context
Do as follows on the router where the NetStream traffic statistics is to be performed:
Procedure
Step 1 Run:
system-view
----End
Context
To enable the NSC to receive and process the statistics correctly, the router needs to send the
templates that describe the statistics to the NSC. After you configure the refresh parameters of
the templates, the NSC stays synchronized with the templates used by the system.
The option template carries NetStream configuration information. The export-stats keyword
and the sampler keyword represent the system option and the interface option respectively:
specifying export-stats enables statistics for the system option, and specifying sampler enables
statistics for the interface option. Configuring the refresh parameters of the option template
therefore enables statistics for either the system option or the interface option.
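The need for template refreshment can be illustrated with a minimal sketch of a V9-style collector (the class and field names are hypothetical): data records reference a template by ID, so the collector can decode nothing until the matching template has arrived.

```python
class V9Collector:
    """Sketch of a V9-style collector: data records carry only a template
    ID, so they are undecodable until the template itself has arrived."""

    def __init__(self):
        self.templates = {}  # template ID -> ordered field names

    def receive_template(self, tid, fields):
        # Templates are resent at the configured refresh interval, so a
        # collector that starts (or restarts) late eventually learns them.
        self.templates[tid] = fields

    def receive_data(self, tid, values):
        fields = self.templates.get(tid)
        if fields is None:
            return None  # template unknown: the record cannot be decoded
        return dict(zip(fields, values))
```

This is why the refresh interval matters: until a template is (re)received, every data record referencing it is lost to the collector.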
NOTE
Do as follows on the router where the statistics on the NetStream traffic is to be performed:
Procedure
Step 1 Run:
system-view
The interval for refreshing the template that is used to export statistics of original traffic in version
9 is configured.
----End
Context
For detailed configuration procedure, see "Configuring the Export of NetStream Packets."
Context
For detailed configuration procedure, see "Configuring NetStream Sampling."
Context
Run the following commands to check the previous configuration.
Procedure
l Run the display device slot-id command to check whether the service mode of the SPU is
NetStream.
l Run the display ip netstream cache origin slot slot-id [ summary ] command to check
information about NetStream flows in the cache.
l Run the display ip netstream statistics slot slot-id command to check the statistics of the
exported NetStream packets.
l Run the display ip netstream statistics interface interface-type interface-number
command to view the statistics of sampled packets on an interface.
l Run the display netstream global command to check the NetStream configurations in the
system view and the aggregation view.
l Run the display netstream all command to check the NetStream configurations in all the
views.
----End
Example
If the service mode of the SPU is NetStream, run the display device 3 command. The type of
the SPU on the router is displayed as NetStream.
<HUAWEI> display device 3
SPU3's detail information:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Description: Service Processing Unit - Netstream
Board status: Normal
Register: Registered
Uptime: 2009/01/19 12:03:21
CPU Utilization(%): 5%
Mem Usage(%): 39%
Clock information:
State item State
Current syn-clock: 18
Current line-clock: 23
Syn-clock state: Locked VCXO_OK REF_OK
Syn-clock 17 state: Actived
Syn-clock 18 state: Actived
Line-clock 23 state: Inactived
Line-clock 24 state: Inactived
Statistic information:
Statistic item Statistic number
SERDES interface link lost: 0
Mpu switchs: 0
Syn-clock switchs: 0
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If the configuration is successful, run the display ip netstream cache origin slot 3 command.
You can view statistics about the IP packets collected in the router's NetStream cache.
<HUAWEI> display ip netstream cache origin slot 3
Start to show information of IP and MPLS from cache of slot 3.
Getting user data from cache
success.
Null 0 0 6 0 0 19054
ET1 0 0 0
3::200:0:300:2 0
1::200:0:100:1 0
0.0.0.0 0 in
0 0 0
0 0 0
0 0 0
0.0.0.0 0
Null 0 0 6 0 2 23569
ET1 3 0 0
3::200:0:300:2 0
1::200:0:100:1 0
0.0.0.0 0 in
0 0 0
0 0 0
0 0 0
0.0.0.0 0
Applicable Environment
If NetStream is enabled on devices, especially on high-speed interfaces, exporting NetStream
packets in version 5 with a high sampling density greatly affects the performance of the routers
and servers. Exporting the packets in aggregation traffic mode reduces the system resources
occupied while still exporting the data that users need.
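Why aggregation reduces export load can be sketched as follows: many per-flow records that share an aggregation key (here a destination prefix, with IPv4 addresses as integers) collapse into one exported record. This is an illustrative sketch, not the device's implementation:

```python
from collections import defaultdict

def aggregate_by_dst_prefix(flows, prefix_len=24):
    """Merge per-flow records that share a destination prefix.

    `flows` is a list of (dst_ip, packets, bytes) tuples with IPv4
    addresses as integers; the result maps each /prefix_len network
    to its summed counters.
    """
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    buckets = defaultdict(lambda: [0, 0])  # network -> [packets, bytes]
    for dst_ip, packets, nbytes in flows:
        buckets[dst_ip & mask][0] += packets
        buckets[dst_ip & mask][1] += nbytes
    return dict(buckets)
```

Three flows to hosts in 10.1.1.0/24, for example, become a single destination-prefix record.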
Pre-configuration Tasks
Before configuring the traffic statistics, complete the following tasks:
l Ensuring that the interfaces work normally
l Configuring the link layer attributes of the interfaces
l Assigning IP addresses to the interfaces
l Configuring NetStream statistics on the router
NOTE
For the configuration of NetStream statistics on the router, see "Configuring Traffic Statistics on an IPv4
Network" and "Configuring Traffic Statistics on an MPLS Network."
Data Preparation
To configure the traffic statistics, you need the following data.
No. Data
Context
Do as follows on the router where the NetStream traffic statistics is to be performed:
Procedure
Step 1 Run:
system-view
Enable
NOTE
The mask of aggregation can be configured in the following aggregation modes: destination-prefix,
destination-prefix-tos, prefix, prefix-tos, source-prefix, and source-prefix-tos.
----End
Context
Do as follows on the router where the NetStream traffic statistics is to be performed:
Procedure
Step 1 Run:
system-view
Step 2 Run:
ip netstream aggregation { as | as-tos | bgp-nexthop-tos | destination-prefix |
destination-prefix-tos | index-tos | mpls-label | prefix | prefix-tos | protocol-
port | protocol-port-tos | source-prefix | source-prefix-tos | vlan-id }
Step 3 Run:
export version { 8 | 9 }
The format of the packets that are exported in aggregation traffic mode is configured.
----End
Context
NOTE
The interval for refreshing the aggregation traffic template can be set only when the NetStream packet is
exported in version 9.
Procedure
Step 1 Run:
system-view
The interval for refreshing the corresponding template when the aggregation traffic is exported
in version 9 is configured.
By default, the aggregation traffic is exported in version 8.
----End
Context
Do as follows on the router where the NetStream traffic is to be sampled:
Procedure
Step 1 Run:
system-view
By default, the active aging time of the NetStream aggregation traffic is 30 minutes; the inactive
aging time is 30 seconds.
----End
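The two aging timers can be sketched as follows, using the defaults stated above (30-minute active timer, 30-second inactive timer); the record layout is illustrative only:

```python
def flows_to_age(flows, now, active_timeout=1800, inactive_timeout=30):
    """Return the flows to export from the cache at time `now`.

    `flows` holds (flow_id, created, last_packet) timestamps in seconds.
    A long-lived flow is exported once `active_timeout` has elapsed since
    its creation; an idle flow is exported `inactive_timeout` seconds
    after its last packet.
    """
    aged = []
    for fid, created, last_packet in flows:
        if now - created >= active_timeout:
            aged.append((fid, "active"))
        elif now - last_packet >= inactive_timeout:
            aged.append((fid, "inactive"))
    return aged
```

The active timer keeps statistics of long-lived flows flowing to the NSC; the inactive timer frees cache entries for flows that have stopped.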
Context
Do as follows on the router where the NetStream traffic statistics is to be performed:
Procedure
Step 1 Run:
system-view
The destination address and the port number of the exported NetStream packets are specified.
Step 4 Run:
ip netstream export source ip-address
----End
Context
Run the following commands to check the previous configuration.
Procedure
l Run the display device slot-id command to check whether the service mode of the SPU is
NetStream.
l Run display ip netstream cache { as | as-tos | bgp-nexthop-tos | destination-prefix |
destination-prefix-tos | index-tos | mpls-label | prefix | prefix-tos | protocol-port |
Example
If the service mode of the SPU is NetStream, run the display device 3 command. The type of
the SPU on the router is displayed as NetStream.
<HUAWEI> display device 3
SPU3's detail information:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Description: Service Processing Unit - Netstream
Board status: Normal
Register: Registered
Uptime: 2009/01/19 12:03:21
CPU Utilization(%): 5%
Mem Usage(%): 39%
Clock information:
State item State
Current syn-clock: 18
Current line-clock: 23
Syn-clock state: Locked VCXO_OK REF_OK
Syn-clock 17 state: Actived
Syn-clock 18 state: Actived
Line-clock 23 state: Inactived
Line-clock 24 state: Inactived
Statistic information:
Statistic item Statistic number
SERDES interface link lost: 0
Mpu switchs: 0
Syn-clock switchs: 0
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If the configurations are successful, run the display ip netstream cache destination-prefix
slot 3 command. You can view the detailed statistics about the AS domain, mask, and prefix of
the destination address in the NetStream cache.
<HUAWEI> display ip netstream cache destination-prefix slot 3
Start to show information of IP and MPLS from cache of slot 1.
Getting user data from cache
success.
GI3/0/9 0 32 192.168.111.2
Local 0 32 192.168.111.1 1 8 out
Applicable Environment
The interfaces on the router are connected to different networks. After NetStream is deployed
on the interfaces, the NetStream statistics collected on each interface can be output to a specific
server for analysis, implementing more efficient and classified monitoring of the networks.
Pre-configuration Tasks
None.
Data Preparation
To configure NetStream multi-address output, you need the following data.
No. Data
Context
Do as follows on the router where traffic statistics need to be collected:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
NOTE
The router supports statistics of sampled packets on a maximum of 128 interfaces.
----End
Context
Do as follows on the router where traffic statistics need to be collected:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
The sampling ratio of the inbound or outbound traffic is set for IPv4 flow.
l ipv6 netstream sampler { fix-packet fix-packet-number | random-packet random-
packet-number | fix-time fix-time-value | random-time random-time-value }
{ inbound | outbound }
The sampling ratio of the inbound or outbound traffic is set for IPv6 flow.
----End
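The difference between the fix-packet and random-packet sampling modes can be sketched as follows (an illustrative approximation, not the device's sampler):

```python
import random

def fix_packet_sample(packets, ratio):
    # fix-packet mode: deterministically keep every `ratio`-th packet.
    return [p for i, p in enumerate(packets, start=1) if i % ratio == 0]

def random_packet_sample(packets, ratio, rng=None):
    # random-packet mode: keep each packet with probability 1/`ratio`,
    # so on average one packet out of every `ratio` is sampled.
    rng = rng or random.Random(0)
    return [p for p in packets if rng.randrange(ratio) == 0]
```

Both modes export roughly 1/ratio of the traffic; the random mode avoids aliasing with periodic traffic patterns at the cost of a slightly variable sample count.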
Context
Do as follows on the router where traffic statistics need to be collected:
Procedure
Step 1 Run:
system-view
Step 2 Run:
slot slot-id
----End
Context
Do as follows on the router where traffic statistics need to be collected:
Procedure
Step 1 Run:
system-view
The version in which statistics about original traffic are output is configured for IPv4 flow.
l ipv6 netstream export version version
The version in which statistics about original traffic are output is configured for IPv6 flow.
----End
Context
Do as follows on the router where specific NetStream statistics need to be output to a specified
server:
Procedure
Step 1 Run:
system-view
Step 3 Run:
ip netstream export host ip-address port
The NetStream monitoring service is deployed in the inbound direction or outbound direction
of the interface for IPv4 flow.
----End
Prerequisites
All configurations about the NetStream multi-address output function are complete.
Procedure
l
– Run the display ip netstream monitor { all | monitor-name } [ slot slot-id ] command
to check information about the monitoring services run on the main control board or
the interface board.
l Run the display ip netstream statistics interface interface-type interface-number
command to view the statistics of sampled packets on an interface.
----End
Example
Run the display ip netstream monitor command, and you can view information about the
monitoring services running on the main control board.
[HUAWEI] display ip netstream monitor all
Monitor test
ID : 1
AppCount : 2
First address : 10.2.1.2
First port : 5000
Second address : 0.0.0.0
Second port : 0
Monitor test1
ID : 1
AppCount : 2
First address : 10.1.1.1
First port : 6000
Context
Run the following command to forcibly age the original traffic in the cache of the specified slot
and to export the original traffic or the aggregated traffic.
CAUTION
Before running the reset ip netstream cache slot slot-id command to forcibly age the original
traffic in the cache, you need to run the undo ip netstream command in the interface view to
temporarily disable the sampling function. Otherwise, within 30 seconds after the reset ip
netstream cache slot slot-id command is run, the system forcibly outputs the sampled original
traffic without aggregation.
You can restore the sampling function of the interface 30 seconds after the reset ip netstream
cache slot slot-id command is run.
Procedure
l In the system view
Run:
reset ip netstream cache slot slot-id
----End
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
Networking Requirements
As shown in Figure 8-6, NetStream statistics collection is configured to gather information about
the source IP address, destination IP address, interface, and protocol of the packets at the user
side of the network. In this manner, users can analyze user behaviors and quickly detect terminals
infected by worms, the source and destination addresses of Denial of Service (DoS) or
Distributed Denial of Service (DDoS) attacks, spam sources, and illegal underground network
sites. Based on the features of the traffic, users can quickly identify the virus types and the IP
addresses generating the abnormal traffic. In addition, based on other NetStream features, users
can take corresponding restriction and filtering measures to suppress the spread of the virus
traffic.
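One flow-based detection heuristic of the kind described above, spotting worm-infected or scanning hosts, can be sketched from NetStream records as a fan-out count (the threshold and record layout are illustrative):

```python
from collections import defaultdict

def suspected_scanners(records, fanout_threshold=100):
    """Flag source hosts contacting unusually many distinct destinations.

    `records` is a list of (src, dst) address pairs taken from flow
    statistics; high fan-out is a classic symptom of worm propagation,
    scanning, or DDoS participation.
    """
    fanout = defaultdict(set)
    for src, dst in records:
        fanout[src].add(dst)
    return {src for src, dsts in fanout.items() if len(dsts) >= fanout_threshold}
```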
Figure 8-6 Networking diagram of configuring the statistics of abnormal traffic at the user side
on the IPv4 network
192.168.1.2/24 192.168.1.1/24
POS1/0/0 POS1/0/0
LAN IP backbone
PE
CE GE2/0/0
192.168.2.1/24
192.168.2.2/24
NSC&NDA
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure PE and CE to connect with each other through the network.
2. Enable the incoming and outgoing NetStream statistics function on the user side interfaces
of PE.
Data Preparation
To complete the configuration, you need the following data:
l Interface at the user side of PE
l Version number of the export NetStream packets
l Destination address, destination port, and source address of the export NetStream packets
l Slot number of the NetStream SPU, slot 4 in this example
Procedure
Step 1 Configure PE and CE to interwork with each other.
# Assign IP addresses and masks to the interfaces, as shown in Figure 8-6. The configuration
details are not mentioned here.
Step 2 Configure NetStream statistics on POS 1/0/0 that connects PE and CE.
# Configure the processing mode for NetStream services of the SPU as integrated.
<PE> set board-type slot 4 netstream
<PE> system-view
[PE] slot 1
[PE-slot-1] ip netstream sampler to slot 4
[PE-slot-1] return
# Configure the incoming and outgoing NetStream statistics on POS 1/0/0 of PE.
[PE] interface pos 1/0/0
[PE-Pos1/0/0] undo shutdown
[PE-Pos1/0/0] ip netstream inbound
[PE-Pos1/0/0] ip netstream outbound
[PE-Pos1/0/0] quit
# Configure the destination address, destination port, and source address of the packets to be
exported in version 5.
[PE] ip netstream export host 192.168.2.2 9001
[PE] ip netstream export source 192.168.2.1
PO1/0/0 0 24 6 0 0 19054
PO2/0/0 0 0 17.0.0.2
10.0.0.2 0
10.2.10.1 0
0.0.0.0 0 in
0 0 0
0 0 0
0 0 0
0.0.0.0 0
PO1/0/0 0 24 6 0 2 23569
PO2/0/0 3 0 17.0.0.2
25.10.0.2 0
10.2.10.1 0
0.0.0.0 0 in
0 0 0
0 0 0
0 0 0
0.0.0.0 0
PO1/0/0 0 24 6 0 2 23539
PO2/0/0 3 0 10.10.10.2
11.0.10.2 0
10.2.10.1 0
0.0.0.0 0 in
0 0 0
0 0 0
0 0 0
0.0.0.0 0
----End
Configuration Files
l Configuration file of CE.
#
sysname CE
#
interface Pos 1/0/0
ip address 192.168.1.2 255.255.255.0
#
return
Networking Requirements
As shown in Figure 8-7, GE 2/0/2 and GE 2/0/3 are added to VLAN 100. Configure NetStream
statistics on the incoming VLANIF traffic.
Figure 8-7 Networking diagram of configuring the statistics on VLANIF traffic on the IPv4
network
Router
GE3/0/1 GE3/0/2
192.168.2.1/24 172.16.8.1/24
GE2/0/2 GE2/0/3
172.16.8.145/24
NSC&NDA
192.168.2.2/24
VLANIF100
192.168.1.1/24
Switch1 Switch2
192.168.1.2/24
VLAN 100
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l IP address and subnet mask of each interface
l Address and port number of the export destination host
l Slot number of the NetStream SPU, slot 4 in this example
Procedure
Step 1 Create a VLAN and a VLANIF interface on the router and assign an IP address to each interface.
# Create VLAN 100.
<HUAWEI> system-view
[HUAWEI] vlan 100
[HUAWEI-vlan100] quit
# Configure the destination address, destination port, and source address of the packets exported
in version 5.
[HUAWEI] ip netstream export host 192.168.2.2 9001
[HUAWEI] ip netstream export source 192.168.2.1
[HUAWEI] quit
----End
Configuration Files
Configuration file of HUAWEI.
#
sysname HUAWEI
#
vlan batch 100
#
interface Vlanif100
ip address 192.168.1.1 255.255.255.0
ip netstream inbound
#
interface GigabitEthernet2/0/2
undo shutdown
portswitch
port default vlan 100
#
interface GigabitEthernet2/0/3
undo shutdown
portswitch
port default vlan 100
#
interface GigabitEthernet3/0/1
undo shutdown
ip address 192.168.2.1 255.255.255.0
#
interface GigabitEthernet3/0/2
undo shutdown
ip address 172.16.8.1 255.255.255.0
#
slot 2
ip netstream sampler to slot 4
#
ip netstream sampler fix-packets 10000 inbound
Networking Requirements
As shown in Figure 8-8, in the Generic Routing Encapsulation (GRE) tunnel of the IPv4
network, the incoming and outgoing NetStream statistics is implemented on Router C to monitor
users' traffic.
Figure 8-8 Networking diagram of configuring the statistics on GRE traffic on the IPv4 network
RouterB NSC&NDA
POS1/0/0 20.1.1.2/24
POS2/0/0 30.1.1.1/24
192.168.2.2/24
Loopback1 1.1.1.9/32 Loopback1 2.2.2.9/32
POS1/0/0 20.1.1.1/24 POS1/0/0 30.1.1.2/24 GE2/0/1 192.168.2.1/24
RouterA Tunnel RouterC
GE2/0/0 10.1.1.2/24 Tunnel3/0/0 40.1.1.1/24 Tunnel3/0/0 40.1.1.2/24 GE2/0/0 10.2.1.2/24
PC1 10.1.1.1/24 PC2 10.2.1.1/24
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l Source addresses, destination addresses on the two ends of the GRE tunnel, and IP address
of the tunnel interface
Procedure
Step 1 Assign an IP address to each interface.
Assign an IP address to each physical interface and the loopback interface, as shown in Figure
8-8. Run the undo shutdown command to make the physical interfaces Up. The configuration
details are not mentioned here.
Step 2 Configure basic functions of the GRE static routes.
For the configuration of the GRE static routes, refer to the "GRE Protocol Configuration" in the
HUAWEI NetEngine80E/40E Router Configuration Guide - VPN.
Step 3 Configure NetStream statistics on Tunnel 3/0/0 of Router C.
# Configure the processing mode for NetStream sampling as integrated mode.
<Router C> set board-type slot 4 netstream
<Router C> system-view
[Router C] slot 5
[Router C-slot-5] ip netstream sampler to slot 4
[Router C-slot-5] return
# Configure the incoming and outgoing NetStream statistics on Tunnel 3/0/0 of Router C.
[Router C] interface tunnel3/0/0
[Router C-Tunnel3/0/0] ip netstream inbound
[Router C-Tunnel3/0/0] ip netstream outbound
[Router C-Tunnel3/0/0] quit
# Configure the destination address, destination port, and source address of the packets to be
exported in version 5.
[Router C] ip netstream export host 192.168.2.2 9001
[Router C] ip netstream export source 192.168.2.1
GI2/0/0 0 24 6 0 0 19054
TL3/0/0 0 0 17.0.0.2
10.0.0.2 0
10.2.10.1 0
0.0.0.0 0 in
0 0 0
0 0 0
0 0 0
0.0.0.0 0
TL3/0/0 0 24 6 0 0 19054
GI2/0/0 0 0 17.0.0.2
10.2.10.1 0
10.0.0.2 0
0.0.0.0 0 in
0 0 0
0 0 0
0 0 0
0.0.0.0 0
----End
Configuration Files
l Configuration file of Router A.
#
sysname Router A
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 20.1.1.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
target-board 5
binding tunnel gre
#
interface Tunnel3/0/0
undo shutdown
tunnel-protocol gre
ip address 40.1.1.1 255.255.255.0
source LoopBack1
destination 2.2.2.9
#
ospf 1
area 0.0.0.0
network 20.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
ip route-static 10.2.1.0 255.255.255.0 Tunnel3/0/0
#
return
link-protocol ppp
ip address 30.1.1.1 255.255.255.0
#
ospf 1
area 0.0.0.0
network 20.1.1.0 0.0.0.255
network 30.1.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 8-9, Router A, Router B, and Router C support MPLS. Open Shortest
Path First (OSPF) is used as the IGP on the MPLS backbone network.
The local Label Distribution Protocol (LDP) sessions are set up between Router A and Router
B, and between Router B and Router C. The remote LDP session is set up between Router A
and Router C. The NetStream statistics on the MPLS traffic is configured on Router A.
CE1 CE2
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each interface.
# Assign an IP address and a mask to each interface including the loopback interface, as shown
in Figure 8-9. The configuration details are not mentioned here.
Step 2 Configure LDP sessions between the routers.
# Configure OSPF to advertise the network segments connecting the interfaces of the routers
and the host routes, enable basic MPLS functions on the routers and interfaces, and enable LDP.
For the configuration of a static MPLS TE tunnel, refer to the chapter "Basic MPLS
Configuration" in the HUAWEI NetEngine80E/40E Router Configuration Guide - MPLS.
Step 3 Enable NetStream statistics on POS 1/0/0 at the user side of Router B.
# Configure the processing mode of the NetStream sampling on the SPUC.
<Router B> set board-type slot 4 netstream
<Router B> system-view
[Router B] slot 1
[Router B-slot-1] ip netstream sampler to slot 4
[Router B-slot-1] return
# Configure the incoming and outgoing NetStream statistics on POS 1/0/0 of Router B.
[Router B] interface pos 1/0/0
[Router B-Pos1/0/0] ip netstream inbound
[Router B-Pos1/0/0] ip netstream outbound
[Router B-Pos1/0/0] quit
# Configure Router B to collect statistics about the inner IP packet headers and labels of
MPLS packets.
[Router B] ip netstream mpls-aware label-and-ip
# Configure the destination address, destination port, and source address of the packets output
in version 5.
[Router B] ip netstream export host 192.168.1.2 2100
[Router B] ip netstream export source 10.1.2.1
PO1/0/0 0 24 6 0 0 19054
PO2/0/0 0 0 17.0.0.2
10.0.0.2 0
10.2.10.1 0
0.0.0.0 0 in
1024 3 1
0 0 0
0 0 0
1.1.1.9 0
----End
Configuration Files
l Configuration file of Router A.
#
sysname Router A
#
mpls lsr-id 1.1.1.9
mpls
lsp-trigger all
#
mpls ldp
#
mpls ldp remote-peer routerc
remote-ip 3.3.3.9
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
return
Networking Requirements
As shown in Figure 8-10, networks A and B access the Wide Area Network (WAN) through
Router D. Router D collects and aggregates the traffic and then sends the aggregation traffic
statistics to the NSC.
RouterC
GE 1/0/0
RouterA 3.3.3.2/24
3.3.3.1/24
POS 1/0/0 GE 2/0/0
172.168.0.1/24
RouterD
A WAN
GE 1/0/0
POS 1/0/0 1.1.1.1/24
172.168.0.2/24 POS 2/0/0
172.1.1.2/24
172.1.1.1/24
POS 1/0/0
RouterB
B
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure the egress router in the Local Area Network (LAN) and the WAN to interwork
with each other.
2. Configure the access router and the NSC to interwork with each other.
3. Configure the access router to send traffic statistics to the specified NSC.
4. Configure the access router to send traffic statistics to the incoming interface of the NSC.
5. Configure traffic sampling to reduce the traffic on the NMS.
6. Enable NetStream on the incoming interface of the access router.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each router. The configuration details are not mentioned here.
Step 2 Configure Router A and Router B to interwork with the WAN.
# Configure Router A and Router D to interwork with each other.
<Router A> system-view
[Router A] ip route-static 1.1.1.1 24 pos 1/0/0
Step 3 Configure Router D and the NSC to interwork with each other.
# Configure Router D and Router C to interwork with each other.
<Router D> system-view
[Router D] ip route-static 2.2.2.1 24 3.3.3.2
GI1/0/0 0 24 24.0.0.0
PO2/0/0 0 32 0 1 650 in
----End
Configuration Files
l Configuration file of Router A.
#
sysname Router A
#
interface Pos1/0/0
link-protocol ppp
ip address 172.168.0.1 255.255.0.0
#
ip route-static 1.1.1.1 255.255.255.0 Pos1/0/0
#
return
interface GigabitEthernet2/0/1
ip address 3.3.3.1 255.255.255.0
#
return
Networking Requirements
As shown in Figure 8-11, the sampled traffic is load-balanced between two servers to ensure the
integrity of the NetStream statistics on the device. In this way, the two NSCs back up each other.
NSC&NDA
Backup
192.168.3.1/24
192.168.3.2/24
GE2/0/1
192.168.1.2/24 192.168.1.1/24
POS1/0/0 POS1/0/0
LAN
PE IP backbone
CE GE2/0/0
192.168.2.1/24
192.168.2.2/24
NSC&NDA
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l Destination address and destination port of the backup host that receives the NetStream
packets
l Slot number of the NetStream SPU, slot 4 in this example
Procedure
Step 1 Configure PE and CE to interwork with each other.
For detailed configuration procedure, see Step 1 in "Example for Configuring the Statistics
on Abnormal Traffic at the User Side on an IPv4 Network."
Step 2 Enable NetStream statistics on GE 1/0/0 at the user side of PE.
For detailed configuration procedure, see Step 2 in "Example for Configuring the Statistics
on Abnormal Traffic at the User Side on an IPv4 Network."
Step 3 Configure the backup of statistics export.
# Configure the destination address and destination port of the packets exported in version 5.
[PE] ip netstream export host 192.168.3.1 9002
----End
Configuration Files
l Configuration file of CE.
#
sysname CE
#
interface Pos 1/0/0
link-protocol ppp
ip address 192.168.1.2 255.255.255.0
#
return
Networking Requirements
As shown in Figure 8-12, the user network is connected to the Martini VLL through the DSLAM
device, and sub-interfaces of the PE on the VPN are configured with statistics on the NetStream
traffic aggregated based on VLAN ID. These configurations help carriers monitor the service
traffic of their users and provide a reference for network accounting.
Internet
NM station1 NM station2
user 192.168.0.2/24 172.16.0.2/24
network
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To configure statistics on the NetStream traffic aggregated based on VLAN ID, you need the
following data:
l Sampling ratio
l Source IP address of the exported NetStream statistics
l Destination IP address and port number of the exported NetStream statistics
Procedure
Step 1 Configure a Martini VLL.
For the configuration of the Martini VLL, refer to the chapter "Configuring a Martini VLL" in
the HUAWEI NetEngine80E/40E Configuration Guide - VPN.
Step 2 Configure NetStream on PE1.
# Enable the statistics on the incoming and outgoing traffic on the interface.
<PE1> system-view
[PE1] interface gigabitethernet 3/0/0.1
[PE1-GigabitEthernet3/0/0.1] ip netstream inbound
Step 6 Configure the source IP address of the exported NetStream statistics on PE1.
[PE1-aggregation-vlanid] ip netstream export source 1.1.1.1
Step 7 Configure the destination IP address of the exported NetStream statistics on PE1.
[PE1-aggregation-vlanid] ip netstream export host 192.168.0.2 6000
[PE1-aggregation-vlanid] ip netstream export host 172.16.0.2 6000
[PE1-aggregation-vlanid] quit
ip netstream inbound
ip netstream sampler fix-packets 100 inbound
slot
slot 3:ip netstream sampler to slot self
# Run the display ip netstream cache vlan-id slot slot-id command. You can view information
about the VLAN-based NetStream aggregation in the cache.
[PE1] display ip netstream cache vlan-id slot 3
Show information of aggregation-vlanid cache is starting.
get show cache user data success.
SrcIf VlanId Packets Streams Direction
--------------------------------------------------------------------------------
# Run the display ip netstream statistic slot slot-id command. You can view the NetStream
statistics.
[PE1] display ip netstream statistic slot 3
Netstream statistic information on slot 3:
--------------------------------------------------------------------------------
length of packets Number Protocol Number
--------------------------------------------------------------------------------
1 ~ 64 : 0 IPV4 : 3052043
65 ~ 128 : 30000000 IPV6 : 0
129 ~ 256 : 1495697 MPLS : 0
257 ~ 512 : 0 L2 : 28443654
513 ~ 1024 : 0 Total : 31495697
1025 ~ 1500 : 0
longer than 1500 : 0
--------------------------------------------------------------------------------
Aggregation Current Streams Aged Streams
Created Streams Exported Packets Exported Streams
--------------------------------------------------------------------------------
origin 1 487
488 116 0
as 0 0
0 0 0
as-tos 0 0
0 0 0
protport 0 0
0 0 0
protporttos 0 0
0 0 0
srcprefix 0 0
0 0 0
srcpretos 0 0
0 0 0
dstprefix 0 0
0 0 0
dstpretos 0 0
0 0 0
prefix 0 0
0 0 0
prefix-tos 0 0
0 0 0
mpls-label 0 0
0 0 0
vlan-id 0 0
0 0 0
bgp-nhp-tos 0 0
0 0 0
index-tos 0 0
0 0 0
src-index-tos 0 0
0 0 0
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
mpls lsr-id 1.1.1.1
mpls
#
mpls l2vpn
mpls l2vpn default martini
#
mpls ldp
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.0.1 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
#
interface GigabitEthernet3/0/0.1
vlan-type dot1q 10
mpls l2vc 2.2.2.2 10
ip netstream inbound
ip netstream sampler fix-packets 100 inbound
#
interface Pos6/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ip netstream aggregation vlan-id
enable
ip netstream export source 1.1.1.1
ip netstream export host 192.168.0.2 6000
ip netstream export host 172.16.0.2 6000
#
ip route-static 172.16.0.0 255.255.255.0 10.1.1.2
#
slot 3
ip netstream sampler to slot self
#
return
Networking Requirements
As shown in Figure 8-13, when the NetStream service is deployed and sampled packets are
output in v9 format, the user can set the interface index carried in NetStream to 32 bits. The NM
station can then directly obtain the interface name from the output 32-bit interface index.
10.10.10.1/24
RouterA POS6/0/0 NM station
10.10.10.2/24
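The effect of index-switch on the collector side can be sketched as follows: with the 32-bit mode the interface-index field in a record is 4 bytes instead of 2, and the NM station maps the decoded index to an interface name. The mapping table below is hypothetical, as are the function names:

```python
import struct

# Hypothetical ifIndex-to-name table the NM station would already hold
# (for example, learned via SNMP).
IF_TABLE = {7: "POS6/0/0"}

def decode_if_index(raw: bytes) -> int:
    """Decode the interface-index field of an exported record.

    By default the field is 16 bits (2 bytes); after
    ip netstream export index-switch 32 it is 32 bits (4 bytes).
    """
    fmt = "!I" if len(raw) == 4 else "!H"
    return struct.unpack(fmt, raw)[0]

def interface_name(raw: bytes) -> str:
    # Map the decoded index straight to an interface name.
    return IF_TABLE.get(decode_if_index(raw), "unknown")
```

A 16-bit NDE talking to a 32-bit NDA (or vice versa) would misparse this field, which is why both ends must use the same index width.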
Configuration Roadmap
The configuration roadmap is as follows:
1. Ensure that the link between the router and the NM station is normal.
2. Switch the NetStream index from 16 bits to 32 bits.
Data Preparation
To complete the configuration, you need the following data:
l Management address of Router A being 10.10.10.1
l Value of the NetStream index being 32 bits
Procedure
Step 1 Set the version of the output packet as v9.
<HUAWEI> system-view
[HUAWEI] ip netstream export version 9
----End
Configuration Files
l Configuration file of Router A
#
sysname HUAWEI
#
interface POS6/0/0
undo shutdown
ip address 10.10.10.1 255.255.255.0
#
ip netstream export version 9
#
ip netstream aggregation as
enable
export version 9
#
ip netstream aggregation as-tos
enable
export version 9
#
ip netstream aggregation destination-prefix
enable
export version 9
#
ip netstream aggregation destination-prefix-tos
enable
export version 9
#
ip netstream aggregation mpls-label
enable
#
ip netstream aggregation prefix
enable
export version 9
#
ip netstream aggregation prefix-tos
enable
export version 9
#
ip netstream aggregation protocol-port
enable
export version 9
#
ip netstream aggregation protocol-port-tos
enable
export version 9
#
ip netstream aggregation source-prefix
enable
export version 9
#
ip netstream aggregation source-prefix-tos
enable
export version 9
#
ip netstream export index-switch 32
#
return
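Each `ip netstream aggregation` stanza in the configuration file above merges origin flows that share one key before export. A minimal sketch of what destination-prefix aggregation does to flow records (the record fields here are hypothetical, for illustration only):

```python
import ipaddress
from collections import defaultdict

def aggregate_by_destination_prefix(flows, prefix_len=24):
    """Merge flow records sharing a destination prefix, summing counters."""
    buckets = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for flow in flows:
        # All destinations inside the same prefix fall into one bucket.
        key = str(ipaddress.ip_network("%s/%d" % (flow["dst"], prefix_len),
                                       strict=False))
        buckets[key]["packets"] += flow["packets"]
        buckets[key]["bytes"] += flow["bytes"]
    return dict(buckets)

# Two origin flows to the same /24 collapse into one aggregated record.
flows = [{"dst": "172.16.0.7", "packets": 10, "bytes": 5000},
         {"dst": "172.16.0.9", "packets": 5, "bytes": 2500}]
```

Aggregation trades per-flow detail for far fewer exported records, which is why each mode can carry its own `export version` setting.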
Networking Requirements
As shown in Figure 8-14, GE 1/0/0 and GE 1/0/1 on Router A are connected to the
backbone network and the Internet Data Center (IDC) respectively. NetStream is deployed
in the inbound direction on both GE 1/0/0 and GE 1/0/1 on Router A. The NetStream
statistics collected on GE 1/0/0 are exported to the IPv4 NMS1, and those collected on
GE 1/0/1 are exported to the IPv6 NMS2.
Figure 8-14 Networking diagram (Router A connects to the IP core through GE 1/0/0 and to the IDC through GE 1/0/1, 10.2.0.1/24; NMS2 is at FA12::1/64)
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable NetStream on each interface.
2. Set a NetStream sampling ratio.
3. Configure a NetStream sampling mode.
4. Configure NetStream multi-address output.
Data Preparation
To complete the configuration, you need the following data:
l Sampling ratio
l Destination IP address and port number of the output NetStream statistics
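With `ip netstream sampler fix-packets 100`, only one packet in every 100 is examined, so the collector must scale the sampled counters back up to estimate the real traffic. A minimal sketch of that scale-up (the function name is illustrative):

```python
def estimate_totals(sampled_packets, sampled_bytes, interval):
    """Scale counters taken from 1-in-N packet sampling back to an
    estimate of the real traffic volume on the interface."""
    return sampled_packets * interval, sampled_bytes * interval

# 1234 sampled packets at a fix-packets interval of 100 suggest
# roughly 123400 packets actually crossed the interface.
pkts, octets = estimate_totals(1234, 987000, 100)
```

The estimate is statistical: a higher sampling ratio lowers CPU load on the NetStream board at the cost of accuracy on short-lived flows.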
Procedure
Step 1 Enable NetStream on Router A.
# Enable NetStream for incoming traffic on each interface.
<Router A> system-view
[Router A] interface gigabitethernet 1/0/0
[Router A-GigabitEthernet1/0/0] ip netstream inbound
[Router A-GigabitEthernet1/0/0] quit
[Router A] interface gigabitethernet 1/0/1
[Router A-GigabitEthernet1/0/1] ip netstream inbound
# Run the display ip netstream statistics command. You can view NetStream statistics.
[Router A] display ip netstream statistics slot 3
Netstream statistic information on slot 3:
--------------------------------------------------------------------------------
length of packets Number Protocol Number
--------------------------------------------------------------------------------
1 ~ 64 : 0 IPV4 : 3052043
65 ~ 128 : 30000000 IPV6 : 0
129 ~ 256 : 1495697 MPLS : 0
257 ~ 512 : 0 L2 : 28443654
513 ~ 1024 : 0 Total : 31495697
1025 ~ 1500 : 0
longer than 1500 : 0
--------------------------------------------------------------------------------
Aggregation Current Streams Aged Streams
Created Streams Exported Packets Exported Streams
--------------------------------------------------------------------------------
origin 1 487
488 116 0
as 0 0
0 0 0
as-tos 0 0
0 0 0
protport 0 0
0 0 0
protporttos 0 0
0 0 0
srcprefix 0 0
0 0 0
srcpretos 0 0
0 0 0
dstprefix 0 0
0 0 0
dstpretos 0 0
0 0 0
prefix 0 0
0 0 0
prefix-tos 0 0
0 0 0
mpls-label 0 0
0 0 0
vlan-id 0 0
0 0 0
bgp-nhp-tos 0 0
0 0 0
index-tos 0 0
0 0 0
src-index-tos 0 0
0 0 0
all-aggre 3276649 210
23 0 0
--------------------------------------------------------------------------------
Velocity Of Creating Streams (streams/second): 102
--------------------------------------------------------------------------------
srcprefix = source-prefix, srcpretos = source-prefix-tos,
dstprefix = destination-prefix, dstpretos = destination-prefix-tos,
protport = protocol-port, protporttos = protocol-port-tos,
all-aggre = all aggregation streams,
"---" means that the current board is not supported.
----End
Configuration Files
l Configuration file of Router A
#
sysname Router A
#
ip netstream monitor monitor1
ip netstream export host 192.168.0.1 6000
#
ip netstream monitor monitor2
ip netstream export host ipv6 FA12::1 6000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.0.1 255.255.255.0
ip netstream inbound
ip netstream sampler fix-packets 100 inbound
ip netstream monitor monitor1 inbound
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.2.0.1 255.255.255.0
ip netstream inbound
ip netstream sampler fix-packets 100 inbound
ip netstream monitor monitor2 inbound
#
ip route-static 192.168.0.0 255.255.255.0 10.1.0.2
#
slot 3
ip netstream sampler to slot self
#
return
Networking Requirements
With the development of L3VPN services, carriers demand increasingly high Quality of
Service (QoS) on VPNs. After voice over IP and video over IP services are deployed, both
carriers and users tend to sign Service Level Agreements (SLAs). With NetStream deployed
on a BGP/MPLS IP VPN, traffic statistics on the LSPs between PEs can be collected. Carriers
can then adjust their networks based on the collected statistics to meet service requirements
effectively.
In the IPv4 BGP/MPLS IP VPN shown in Figure 8-15:
l NetStream is enabled on PE2 to collect and export the statistics about MPLS TAL
information to the NSC&NDA.
l NetStream is enabled on P to collect and export the statistics about incoming and outgoing
MPLS packets to the NSC&NDA.
l Traffic statistics are analyzed on the NSC&NDA to measure the user traffic between PEs.
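On the NSC&NDA side, every export datagram received on the configured UDP port begins with the 20-byte NetFlow v9 export header defined in RFC 3954. A minimal sketch of parsing that header (the collector's socket setup is only indicated in a comment):

```python
import struct

# version, count, sysUpTime, UNIX secs, sequence, source ID (RFC 3954)
V9_HEADER = struct.Struct(">HHIIII")

def parse_v9_header(datagram):
    """Parse the fixed 20-byte NetFlow v9 export packet header."""
    version, count, uptime, secs, seq, source_id = V9_HEADER.unpack_from(datagram)
    if version != 9:
        raise ValueError("not a v9 export packet")
    return {"count": count, "sysuptime_ms": uptime,
            "unix_secs": secs, "sequence": seq, "source_id": source_id}

# A collector would bind the configured port and parse each datagram:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("192.168.2.2", 9000))
```

The template FlowSets that follow the header describe the per-flow fields, which is why the router and collector must agree on the V9 format.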
Figure 8-15 Networking diagram (PE1, P, and PE2 with Loopback1 addresses 1.1.1.9/32, 2.2.2.9/32, and 3.3.3.9/32 form the MPLS backbone in AS 100 over POS links, 172.16.1.0/24 and 172.17.1.0/24; CE2 and CE4 in VPN-A, AS 65420 and AS 65440, attach to PE1 and PE2 over GE 1/0/0, 10.2.1.0/24 and 10.4.1.0/24; the NSC&NDA is reachable at 172.18.1.2/24 and 192.168.2.2/24)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address for each interface.
2. Configure a BGP/MPLS IP VPN.
3. Enable NetStream on PE2 to collect and export the statistics about MPLS TAL information
to the NSC&NDA.
4. Enable NetStream on P to collect and export the statistics about incoming and outgoing
MPLS packets to the NSC&NDA.
Data Preparation
To complete the configuration, you need the following data:
l Version of the format in which NetStream packets are exported
l Destination addresses, destination ports, and source addresses of NetStream packets
l Number of the slot where the NetStream board is inserted (In this example, the NetStream
board is inserted into slot 4.)
Procedure
Step 1 Configure an IP address for each interface.
Configure the IP address and mask for each interface, including each Loopback interface as
shown in Figure 8-15. The detailed configuration is not mentioned here.
Step 2 Configure a BGP/MPLS IP VPN.
For details, see HUAWEI NetEngine80E/40E Router Configuration Guide - VPN.
Step 3 Enable NetStream on PE2 to collect and export the statistics about MPLS TAL information to
the NSC&NDA.
# Configure the NetStream board SPUC on PE2 to work in integrated mode.
<PE2> set board-type slot 4 netstream
<PE2> system-view
[PE2] slot 2
[PE2-slot-2] ip netstream sampler to slot 4
[PE2-slot-2] quit
# Configure the destination addresses, destination ports, and source addresses of NetStream
packets to be exported in V9 format.
[PE2] ip netstream export version 9
[PE2] ip netstream export host 192.168.2.2 9000
[PE2] ip netstream export source 192.168.2.1
Step 4 Enable NetStream on P to collect and export the statistics about incoming and outgoing MPLS
packets to the NSC&NDA.
# Configure the NetStream board SPUC on P to work in integrated mode.
<P> set board-type slot 4 netstream
<P> system-view
[P] slot 2
[P-slot-2] ip netstream sampler to slot 4
[P-slot-2] quit
# Configure P to collect the statistics about incoming and outgoing MPLS packets on POS 2/0/0.
[P] interface Pos 2/0/0
[P-Pos2/0/0] ip netstream inbound
[P-Pos2/0/0] ip netstream outbound
[P-Pos2/0/0] quit
# Configure P to collect the statistics about inner IP packets and label information in the sampled
MPLS packets.
[P] ip netstream mpls-aware label-and-ip
# Configure the destination addresses, destination ports, and source addresses of NetStream
packets to be exported in V9 format.
[P] ip netstream export version 9
[P] ip netstream export host 192.168.2.2 9001
[P] ip netstream export source 172.17.1.1
# Enable NetStream sampling, with the sampling mode being regular packet sampling.
[P] ip netstream sampler fix-packets 10000 inbound
[P] ip netstream sampler fix-packets 10000 outbound
[P] quit
----End
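The `ip netstream mpls-aware label-and-ip` command makes P record the MPLS label stack along with the inner IP header. Each label stack entry is a 4-byte word (RFC 3032: 20-bit label, 3-bit EXP, bottom-of-stack bit, 8-bit TTL), which a collector can decode as sketched here:

```python
import struct

def parse_label_entry(entry):
    """Decode one 4-byte MPLS label stack entry (RFC 3032)."""
    (word,) = struct.unpack(">I", entry)
    return {"label": word >> 12,          # top 20 bits
            "exp": (word >> 9) & 0x7,     # 3 experimental/CoS bits
            "bottom": bool((word >> 8) & 0x1),  # bottom-of-stack flag
            "ttl": word & 0xFF}           # 8-bit time to live

# Build an entry for label 1025, EXP 5, bottom of stack, TTL 64.
sample = struct.pack(">I", (1025 << 12) | (5 << 9) | (1 << 8) | 64)
```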
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy gre1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
mpls
#
interface GigabitEthernet1/0/0
Networking Requirements
In the single-AS MPLS/BGP VPN as shown in Figure 8-16, you can deploy the MD scheme to
implement multicast services. With NetStream on the MD VPN, you can monitor user traffic
between PEs.
l NetStream is enabled on PE-A to collect and export the statistics about MPLS TAL
information to the NSC&NDA.
l NetStream is enabled on P to collect and export the statistics about incoming and outgoing
MPLS packets to the NSC&NDA.
l Traffic statistics are analyzed on the NSC&NDA to measure the user traffic between PEs.
Figure 8-16 Networking diagram (PE-A, PE-B, and PE-C connect through P on the public network; VPN RED contains CE-Ra, CE-Rb, and CE-Rc with multicast source Source1 and receivers PC1, PC2, and PC3; VPN BLUE contains CE-Bb and CE-Bc with source Source2 and receiver PC4; the NSC&NDA is reachable at 192.168.9.2/24 and 192.168.2.2/24)
NOTE
GE1 indicates GigabitEthernet 1/0/0, GE2 indicates GigabitEthernet 2/0/0, and GE3 indicates
GigabitEthernet 3/0/0. Table 8-1 shows IP addresses of these interfaces.
The routers support two MVPN forwarding modes: distributed mode and integrated mode. In distributed
mode, you must run the multicast-vpn slot command to enable the MVPN service on a specified SPUC.
In integrated mode, you must first run the set board-type slot command to set the service mode of a
specified SPUC to tunnel and then the multicast-vpn slot command to enable the MVPN service on the
SPUC.
P GE1: 192.168.6.2/24 -
GE2: 192.168.7.2/24 -
GE3: 192.168.8.2/24 -
GE4: 192.168.2.1/24 -
GE4: 192.168.9.1/24 -
GE2: 10.110.2.2/24 -
GE2: 10.110.3.2/24 -
GE2: 10.110.4.2/24 -
GE3: 10.110.12.1/24 -
GE2: 10.110.5.2/24 -
GE3: 10.110.12.2/24 -
GE2: 10.110.6.2/24 -
Multicast source/multicast receiver: In VPN RED, the multicast source is Source1 and the
multicast receivers are PC1, PC2, and PC3; in VPN BLUE, the multicast source is Source2
and the multicast receiver is PC4. In VPN RED, the share-group address is 239.1.1.1 and
the switch-group address pool ranges from 225.2.2.1 to 225.2.2.16; in VPN BLUE, the
share-group address is 239.2.2.2 and the switch-group address pool ranges from 225.4.4.1
to 225.4.4.16.
VPN instances to which interfaces on PEs belong: GE2 and GE3 on PE-A belong to the
VPN-RED instance; GE1 and Loopback1 on PE-A belong to the public network instance.
GE2 and GE3 on PE-B belong to the VPN-BLUE instance and the VPN-RED instance
respectively; GE1 and Loopback1 on PE-B belong to the public network instance. GE2 on
PE-C belongs to the VPN-RED instance; GE3 and Loopback2 on PE-C belong to the VPN-
BLUE instance; GE1 and Loopback1 on PE-C belong to the public network instance.
Routing protocol and MPLS: Configure OSPF as a unicast routing protocol on the public
network; configure RIP between each PE and CE. Set up BGP peer relationships between
the Loopback1 interfaces on PE-A, PE-B, and PE-C to advertise VPN routes. Enable MPLS
on the public network.
Multicast function: Enable multicast on P. Enable multicast on the public-network-side
interfaces of PE-A, PE-B, and PE-C; enable multicast in the VPN-RED instances on PE-A,
PE-B, and PE-C; enable multicast in the VPN-BLUE instances on PE-B and PE-C; enable
multicast on CE-Ra, CE-Rb, CE-Rc, CE-Bb, and CE-Bc.
IGMP function: Enable IGMP on GE2 on PE-A; enable IGMP on the GE1 interfaces on
CE-Rb, CE-Rc, and CE-Bc separately.
PIM function: Enable PIM-SM on all the private network interfaces in VPN-RED and
VPN-BLUE separately; enable PIM-SM on all the interfaces on P and CEs and on the
public-network-side interfaces of the PEs. Configure Loopback1 on P as both a C-BSR and
a C-RP to serve all groups; configure Loopback1 on CE-Rb as both a C-BSR and a C-RP
in VPN-RED to serve all groups; configure Loopback2 on PE-C as both a C-BSR and a
C-RP in VPN-BLUE to serve all groups.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address for each interface.
2. Configure an MVPN.
3. Enable NetStream on PE-A to collect and export the statistics about MPLS TAL
information to the NSC&NDA.
4. Enable NetStream on P to collect and export the statistics about incoming and outgoing
MPLS packets to the NSC&NDA.
Data Preparation
See the networking requirements in Table 8-2.
Procedure
Step 1 Configure an IP address for each interface.
Configure the IP address and mask for each interface, including each Loopback interface as
shown in Figure 8-16. The detailed configuration is not mentioned here.
Step 2 Configure an MVPN.
For details, see the section "Example for Configuring a Single-AS MD VPN" in the chapter
"IPv4 Multicast VPN Configuration" in HUAWEI NetEngine80E/40E Router Configuration
Guide - IP Multicast.
Step 3 Enable NetStream on PE-A to collect and export the statistics about MPLS TAL information to
the NSC&NDA.
# Configure the NetStream board SPUC on PE-A to work in integrated mode.
<PE-A> set board-type slot 4 netstream
<PE-A> system-view
[PE-A] slot 2
[PE-A-slot-2] ip netstream sampler to slot 4
[PE-A-slot-2] quit
# Configure the destination addresses, destination ports, and source addresses of NetStream
packets to be exported in V9 format.
[PE-A] ip netstream export version 9
[PE-A] ip netstream export host 192.168.2.2 9000
[PE-A] ip netstream export source 1.1.1.1
Step 4 Enable NetStream on P to collect and export the statistics about incoming and outgoing MPLS
packets to the NSC&NDA.
# Configure the NetStream board SPUC on P to work in integrated mode.
<P> set board-type slot 4 netstream
<P> system-view
[P] slot 2
[P-slot-2] ip netstream sampler to slot 4
[P-slot-2] quit
# Configure P to collect the statistics about incoming and outgoing MPLS packets on GE 1/0/0.
[P] interface GigabitEthernet 1/0/0
[P-GigabitEthernet1/0/0] ip netstream inbound
[P-GigabitEthernet1/0/0] ip netstream outbound
[P-GigabitEthernet1/0/0] quit
# Configure P to collect the statistics about inner IP packets and label information in the sampled
MPLS packets.
[P] ip netstream mpls-aware label-and-ip
# Configure the destination addresses, destination ports, and source addresses of NetStream
packets to be exported in V9 format.
[P] ip netstream export version 9
[P] ip netstream export host 192.168.9.2 9001
[P] ip netstream export source 192.168.9.1
# Enable NetStream sampling, with the sampling mode being regular packet sampling.
[P] ip netstream sampler fix-packets 10000 inbound
[P] ip netstream sampler fix-packets 10000 outbound
[P] quit
----End
Configuration Files
l Configuration file of PE-A
#
sysname PE-A
#
router id 1.1.1.1
#
multicast routing-enable
#
multicast-vpn slot 4
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
#
ip vpn-instance RED
ipv4-family
route-distinguisher 100:1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
multicast routing-enable
multicast-domain share-group 239.1.1.1 binding MTunnel 0
multicast-domain switch-group-pool 225.2.2.0 255.255.255.240
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.6.1 255.255.255.0
pim sm
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance RED
ip address 10.110.1.1 255.255.255.0
pim sm
igmp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip binding vpn-instance RED
ip address 10.110.2.1 255.255.255.0
pim sm
#
interface GigabitEthernet4/0/0
undo shutdown
ip address 192.168.9.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
pim sm
#
interface MTunnel0
ip binding vpn-instance RED
ip address 1.1.1.1 255.255.255.255
pim sm
#
bgp 100
group VPN-G internal
peer VPN-G connect-interface LoopBack1
peer 1.1.1.2 as-number 100
peer 1.1.1.2 group VPN-G
peer 1.1.1.3 as-number 100
peer 1.1.1.3 group VPN-G
#
ipv4-family unicast
undo synchronization
peer VPN-G enable
peer 1.1.1.2 enable
peer 1.1.1.2 group VPN-G
peer 1.1.1.3 enable
peer 1.1.1.3 group VPN-G
#
ipv4-family vpnv4
policy vpn-target
peer VPN-G enable
peer 1.1.1.2 enable
#
interface MTunnel1
ip binding vpn-instance BLUE
ip address 1.1.1.3 255.255.255.255
pim sm
#
bgp 100
group VPN-G internal
peer VPN-G connect-interface LoopBack1
peer 1.1.1.1 as-number 100
peer 1.1.1.1 group VPN-G
peer 1.1.1.2 as-number 100
peer 1.1.1.2 group VPN-G
#
ipv4-family unicast
undo synchronization
peer VPN-G enable
peer 1.1.1.1 enable
peer 1.1.1.1 group VPN-G
peer 1.1.1.2 enable
peer 1.1.1.2 group VPN-G
#
ipv4-family vpnv4
policy vpn-target
peer VPN-G enable
peer 1.1.1.1 enable
peer 1.1.1.1 group VPN-G
peer 1.1.1.2 enable
peer 1.1.1.2 group VPN-G
#
ipv4-family vpn-instance RED
import-route rip 2
import-route direct
#
ipv4-family vpn-instance BLUE
import-route rip 3
import-route direct
#
ospf 1
area 0.0.0.0
network 1.1.1.3 0.0.0.0
network 192.168.0.0 0.0.255.255
#
rip 2 vpn-instance RED
network 10.0.0.0
import-route bgp cost 3
#
rip 3 vpn-instance BLUE
network 10.0.0.0
import-route bgp cost 3
#
return
l Configuration file of P
#
sysname P
#
ip netstream mpls-aware label-and-ip
#
multicast routing-enable
#
mpls lsr-id 2.2.2.2
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.6.2 255.255.255.0
pim sm
mpls
mpls ldp
ip netstream inbound
ip netstream outbound
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.7.2 255.255.255.0
pim sm
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 192.168.8.2 255.255.255.0
pim sm
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
pim sm
#
pim
c-bsr Loopback1
c-rp Loopback1
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.0.0 0.0.255.255
#
slot 2
ip netstream sampler to slot 4
#
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export version 9
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 9001
#
return
l Configuration file of CE-Ra
#
sysname CE-Ra
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.7.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.2.2 255.255.255.0
pim sm
#
rip 2
network 10.0.0.0
import-route direct
#
return
l Configuration file of CE-Bb
#
sysname CE-Bb
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.8.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.3.2 255.255.255.0
pim sm
#
rip 3
network 10.0.0.0
import-route direct
#
return
l Configuration file of CE-Rb
#
sysname CE-Rb
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.9.1 255.255.255.0
pim sm
igmp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.4.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.110.12.1 255.255.255.0
pim sm
#
interface LoopBack1
ip address 22.22.22.22 255.255.255.255
pim sm
#
pim
c-bsr Loopback1
c-rp Loopback1
#
rip 2
network 10.0.0.0
network 22.0.0.0
import-route direct
#
return
l Configuration file of CE-Rc
#
sysname CE-Rc
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.10.1 255.255.255.0
pim sm
igmp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.5.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.110.12.2 255.255.255.0
pim sm
#
rip 2
network 10.0.0.0
import-route direct
#
return
Networking Requirements
As shown in Figure 8-17, CE1 and CE2 are connected to PE1 and PE2 respectively through
VLANs.
A Martini VLL is set up between CE1 and CE2.
With NetStream deployed on the VLL, traffic statistics on the LSPs between PEs can be
collected.
Figure 8-17 Networking diagram (CE1 and CE2 attach to PE1 and PE2 through GE 1/0/0.1 subinterfaces in VLAN 10 and VLAN 20, 172.16.1.1/24 and 172.16.1.2/24; PE1, P, and PE2 with Loopback1 addresses 1.1.1.9/32, 2.2.2.9/32, and 3.3.3.9/32 are connected over POS links, 10.1.1.0/24 and 10.2.2.0/24; a Martini VLL runs between CE1 and CE2; the NSC&NDA is reachable at 192.168.9.2/24 and 192.168.2.2/24)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address for each interface.
Configure the IP address and mask for each interface, including each Loopback interface as
shown in Figure 8-17. The detailed configuration is not mentioned here.
Step 2 Configure a Martini VLL.
For details, see the section "Example for Configuring a Martini VLL" in the chapter "VLL
Configuration" in the HUAWEI NetEngine80E/40E Router Configuration Guide - VPN.
Step 3 Enable NetStream on PE2 to collect and export the statistics about MPLS TAL information to
the NSC&NDA.
# Configure the destination addresses, destination ports, and source addresses of NetStream
packets to be exported in V9 format.
[PE2] ip netstream export version 9
[PE2] ip netstream export host 192.168.2.2 9000
[PE2] ip netstream export source 192.168.2.1
Step 4 Enable NetStream on P to collect and export the statistics about incoming and outgoing MPLS
packets to the NSC&NDA.
# Configure P to collect the statistics about incoming and outgoing MPLS packets on POS 1/0/0.
[P] interface Pos 1/0/0
[P-Pos1/0/0] ip netstream inbound
[P-Pos1/0/0] ip netstream outbound
[P-Pos1/0/0] quit
# Configure P to collect the statistics about inner IP packets and label information in the sampled
MPLS packets.
[P] ip netstream mpls-aware label-and-ip
# Configure the destination addresses, destination ports, and source addresses of NetStream
packets to be exported in V9 format.
[P] ip netstream export version 9
[P] ip netstream export host 192.168.9.2 9001
[P] ip netstream export source 192.168.9.1
# Enable NetStream sampling, with the sampling mode being regular packet sampling.
[P] ip netstream sampler fix-packets 10000 inbound
[P] ip netstream sampler fix-packets 10000 outbound
[P] quit
# Run the display ip netstream cache origin slot 4 command on P. You can view information
about MPLS packets in the NetStream packet cache.
<P> display ip netstream cache origin slot 4
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 10
ip address 172.16.1.1 255.255.255.0
#
return
l Configuration file of P
#
sysname P
#
ip netstream mpls-aware label-and-ip
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.2.2.2 255.255.255.0
mpls
mpls ldp
ip netstream inbound
ip netstream outbound
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos3/0/0
link-protocol ppp
undo shutdown
ip address 192.168.9.1 255.255.255.0
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.2.0 0.0.0.255
#
slot 2
ip netstream sampler to slot 4
#
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export version 9
ip netstream export source 192.168.9.1
ip netstream export host 192.168.9.2 9001
#
return
l Configuration file of PE2
#
sysname PE2
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
mpls l2vpn default martini
#
mpls ldp
#
mpls ldp remote-peer 1.1.1.9
remote-ip 1.1.1.9
#
flow-wred test
color green low-limit 70 high-limit 100 discard-percentage 100
color yellow low-limit 60 high-limit 90 discard-percentage 100
color red low-limit 50 high-limit 80 discard-percentage 100
#
flow-mapping test
map flow-queue af1 to port-queue ef
#
flow-queue test
queue af1 lpq shaping 500 flow-wred test
queue ef pq shaping 1000 flow-wred test
#
service-template test
network-header-length 12 outbound
#
qos-profile test
mpls-hqos flow-queue test flow-mapping test service-template test
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 20
mpls l2vc 1.1.1.9 101
mpls l2vpn qos cir 2000 pir 3000 qos-profile test
mpls l2vpn pw traffic-statistic enable
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.2.2.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.2.2.0 0.0.0.255
#
slot 2
ip netstream sampler to slot 4
#
ip netstream export template option application-label l2vpn
ip netstream export version 9
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 9000
#
return
Networking Requirements
As shown in Figure 8-18, CE1 and CE2 are connected to PE1 and PE2 respectively through
VLANs; PE1 and PE2 are connected through an MPLS backbone.
It is required that a Label Switched Path (LSP) be used to set up a dynamic Pseudo-Wire
(PW) between PE1 and PE2.
With NetStream deployed on a dynamic SH-PWE3 network, statistics about traffic passing
through the LSP between PEs can be collected.
Figure 8-18 Networking diagram of a dynamic SH-PW with the LSP connecting two PEs
(PE1, P, and PE2 with Loopback0 addresses 2.2.2.2/32, 4.4.4.4/32, and 3.3.3.3/32 are connected over POS links, 10.1.1.0/24 and 10.2.2.0/24; CE1 and CE2 attach through GE 1/0/0.1 subinterfaces in VLAN 1 and VLAN 2, 172.16.1.1/24 and 172.16.1.2/24; a PW runs between PE1 and PE2; the NSC&NDA is reachable at 192.168.9.2/24 and 192.168.2.2/24)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l Number of the slot where the NetStream board is inserted (In this example, the NetStream
board is inserted into slot 4.)
Procedure
Step 1 Assign an IP address to each interface.
Assign the IP address and mask to each interface (including Loopback interfaces) as shown in
Figure 8-18. The configuration details are not mentioned here.
Step 2 Configure a dynamic SH-PW.
For details, refer to the chapter "Example for Configuring Dynamic SH-PW (Using the LSP
Tunnel)" in the HUAWEI NetEngine80E/40E Router Configuration Guide - VPN.
Step 3 Enable NetStream on PE2 to collect and export the statistics about MPLS TAL information to
the NSC&NDA.
# Configure the destination addresses, destination ports, and source addresses for NetStream
packets to be exported in V9 format.
[PE2] ip netstream export version 9
[PE2] ip netstream export host 192.168.2.2 9000
[PE2] ip netstream export source 192.168.2.1
Step 4 Enable NetStream on P to collect and export the statistics about incoming and outgoing MPLS
packets to the NSC&NDA.
# Configure P to collect the statistics about incoming and outgoing MPLS packets on POS 2/0/0.
[P] interface Pos 2/0/0
[P-Pos2/0/0] ip netstream inbound
[P-Pos2/0/0] ip netstream outbound
[P-Pos2/0/0] quit
# Configure P to collect the statistics about inner IP packets and label information in the sampled
MPLS packets.
[P] ip netstream mpls-aware label-and-ip
# Configure the destination addresses, destination ports, and source addresses for NetStream
packets to be exported in V9 format.
[P] ip netstream export version 9
[P] ip netstream export host 192.168.9.2 9001
[P] ip netstream export source 192.168.9.1
# Enable NetStream sampling, with the sampling mode being regular packet sampling.
[P] ip netstream sampler fix-packets 10000 inbound
[P] ip netstream sampler fix-packets 10000 outbound
[P] quit
# Run the display ip netstream cache origin slot 4 command on P. You can view information
about MPLS packets in the NetStream packet cache.
<P> display ip netstream cache origin slot 4
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 1
ip address 172.16.1.1 255.255.255.0
#
return
l Configuration file of P
#
sysname P
#
ip netstream mpls-aware label-and-ip
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.2.2.1 255.255.255.0
mpls
mpls ldp
ip netstream inbound
ip netstream outbound
#
interface Pos3/0/0
link-protocol ppp
undo shutdown
ip address 192.168.9.1 255.255.255.0
#
interface LoopBack0
ip address 4.4.4.4 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.2.0 0.0.0.255
#
slot 2
ip netstream sampler to slot 4
#
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export version 9
ip netstream export source 192.168.9.1
ip netstream export host 192.168.9.2 9001
#
return
Networking Requirements
With NetStream deployed on the Martini VPLS network, statistics about MPLS traffic
exchanged between PEs can be collected.
As shown in Figure 8-19, VPLS is enabled on PE1 and PE2; CE1 and CE2 are connected to
PE1 and PE2 respectively. CE1 and CE2 are on the same Martini VPLS network and
communicate with each other through a PW established by using Label Distribution Protocol
(LDP) as the VPLS signaling protocol.
l NetStream is enabled on PE2 to collect and export the statistics about MPLS TAL
information to the NSC&NDA.
l NetStream is enabled on P to collect and export the statistics about incoming and outgoing
MPLS packets to the NSC&NDA.
l Traffic statistics are analyzed on the NSC&NDA to measure the user traffic between PEs.
Figure 8-19 Networking diagram (CE1 and CE2, GE 1/0/0.1 at 10.1.1.1/24 and 10.1.1.2/24, attach to PE1 and PE2; PE1, P, and PE2 with Loopback1 addresses 1.1.1.9/32, 2.2.2.9/32, and 3.3.3.9/32 are connected over POS links, 172.16.1.0/24 and 172.17.1.0/24; the NSC&NDA is reachable at 192.168.9.2/24 and 192.168.2.2/24)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each interface.
Assign the IP address and mask to each interface (including Loopback interfaces) as shown in
Figure 8-19. The configuration details are not mentioned here.
Step 2 Configure Martini VPLS.
For details, refer to the chapter "Example for Configuring Martini VPLS" in the HUAWEI
NetEngine80E/40E Router Configuration Guide - VPN.
Step 3 Enable NetStream on PE2 to collect and export the statistics about MPLS TAL information to
the NSC&NDA.
# Configure the destination addresses, destination ports, and source addresses for NetStream
packets to be exported in V9 format.
[PE2] ip netstream export version 9
[PE2] ip netstream export host 192.168.2.2 9000
[PE2] ip netstream export source 192.168.2.1
Step 4 Enable NetStream on P to collect and export the statistics about incoming and outgoing MPLS
packets to the NSC&NDA.
# Enable P to collect the statistics about incoming and outgoing MPLS packets on POS 2/0/0.
[P] interface Pos 2/0/0
[P-Pos2/0/0] ip netstream inbound
[P-Pos2/0/0] ip netstream outbound
[P-Pos2/0/0] quit
# Configure P to collect the statistics about inner IP packets and label information in the sampled
MPLS packets.
[P] ip netstream mpls-aware label-and-ip
# Configure the destination addresses, destination ports, and source addresses for NetStream
packets to be exported in V9 format.
[P] ip netstream export version 9
[P] ip netstream export host 192.168.9.2 9001
[P] ip netstream export source 192.168.9.1
# Enable NetStream sampling, with the sampling mode being regular packet sampling.
[P] ip netstream sampler fix-packets 10000 inbound
[P] ip netstream sampler fix-packets 10000 outbound
[P] quit
# Run the display ip netstream cache origin slot 4 command on P. You can view information
about MPLS packets in the NetStream packet cache.
<P> display ip netstream cache origin slot 4
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
link-protocol ppp
undo shutdown
ip address 172.16.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.16.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
ip netstream mpls-aware label-and-ip
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 172.16.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 172.17.1.1 255.255.255.0
mpls
mpls ldp
ip netstream inbound
ip netstream outbound
#
interface Pos3/0/0
link-protocol ppp
undo shutdown
ip address 192.168.9.1 255.255.255.0
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.17.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
slot 2
ip netstream sampler to slot 4
#
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export version 9
ip netstream export source 192.168.9.1
ip netstream export host 192.168.9.2 9001
#
return
l Configuration file of PE2
#
sysname PE2
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
#
vsi a2 static
pwsignal ldp
vsi-id 2
peer 1.1.1.9
#
mpls ldp
#
mpls ldp remote-peer 1.1.1.9
remote-ip 1.1.1.9
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 172.17.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.17.1.0 0.0.0.255
#
slot 2
ip netstream sampler to slot 4
#
ip netstream export template option application-label l2vpn
ip netstream export version 9
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 9000
#
return
Networking Requirements
With NetStream deployed on the Kompella VPLS network, statistics about MPLS traffic
exchanged between PEs can be collected.
As shown in Figure 8-20, VPLS is enabled on PE1 and PE2; CE1 and CE2 are connected to
PE1 and PE2 respectively. CE1 and CE2 are on the same Kompella VPLS network and
communicate with each other through a PW established by using Border Gateway Protocol
(BGP) as the VPLS signaling protocol.
On the Kompella VPLS network shown in Figure 8-20:
l NetStream is enabled on PE2 to collect and export the statistics about MPLS TAL
information to the NSC&NDA.
l NetStream is enabled on P to collect and export the statistics about incoming and outgoing
MPLS packets to the NSC&NDA.
l Traffic statistics are analyzed on the NSC&NDA to measure the user traffic between PEs.
Figure 8-20 shows the networking: CE1 (GE1/0/0.1, 10.1.1.1/24) connects to PE1; PE1 (Loopback1 1.1.1.9/32) connects to P (Loopback1 2.2.2.9/32) over the 172.16.1.0/24 POS link; P connects to PE2 (Loopback1 3.3.3.9/32) over the 172.17.1.0/24 POS link; PE2 (GE2/0/0.1) connects to CE2 (10.1.1.2/24). P exports NetStream packets from POS3/0/0 (192.168.9.1/24) to the NSC&NDA at 192.168.9.2/24, and PE2 exports from POS2/0/0 (192.168.2.1/24) to the NSC&NDA at 192.168.2.2/24.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface.
2. Configure a Kompella VPLS network.
3. Enable NetStream on PE2 to collect and export the statistics about MPLS TAL information
to the NSC&NDA.
4. Enable NetStream on P to collect and export the statistics about incoming and outgoing
MPLS packets to the NSC&NDA.
Data Preparation
To complete the configuration, you need the following data:
l Version of the format in which NetStream packets are exported
l Destination addresses, destination ports, and source addresses of NetStream packets
l Number of the slot where the NetStream board is inserted (In this example, the NetStream
board is inserted into slot 4.)
Procedure
Step 1 Assign an IP address to each interface.
Assign the IP address and mask to each interface (including Loopback interfaces) as shown in
Figure 8-20. The configuration details are not mentioned here.
Step 2 Configure a Kompella VPLS network.
For details, refer to the Chapter "Example for Configuring Kompella VPLS" in the HUAWEI
NetEngine80E/40E Router Configuration Guide - VPN.
Step 3 Enable NetStream on PE2 to collect and export the statistics about MPLS TAL information to
the NSC&NDA.
# Configure the destination addresses, destination ports, and source addresses for NetStream
packets to be exported in V9 format.
[PE2] ip netstream export version 9
[PE2] ip netstream export host 192.168.2.2 9000
[PE2] ip netstream export source 192.168.2.1
Step 4 Enable NetStream on P to collect and export the statistics about incoming and outgoing MPLS
packets to the NSC&NDA.
# Configure P to collect the statistics about incoming and outgoing MPLS packets on POS 2/0/0.
[P] interface Pos 2/0/0
[P-Pos2/0/0] ip netstream inbound
[P-Pos2/0/0] ip netstream outbound
[P-Pos2/0/0] quit
# Configure P to collect the statistics about inner IP packets and label information in the sampled
MPLS packets.
[P] ip netstream mpls-aware label-and-ip
# Configure the destination addresses, destination ports, and source addresses for NetStream
packets to be exported in V9 format.
[P] ip netstream export version 9
[P] ip netstream export host 192.168.9.2 9001
[P] ip netstream export source 192.168.9.1
# Enable NetStream sampling, with the sampling mode being regular packet sampling.
[P] ip netstream sampler fix-packets 10000 inbound
[P] ip netstream sampler fix-packets 10000 outbound
[P] quit
----End
Configuration Files
l Configuration file of CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
l Configuration file of PE2
#
sysname PE2
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
#
vsi bgp1 auto
pwsignal bgp
route-distinguisher 172.17.1.2:1
vpn-target 100:1 import-extcommunity
vpn-target 100:1 export-extcommunity
site 2 range 5 default-offset 0
#
mpls ldp
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 172.17.1.2 255.255.255.0
mpls
mpls ldp
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
#
interface GigabitEthernet2/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi bgp1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
vpls-family
policy vpn-target
peer 1.1.1.9 enable
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.17.1.0 0.0.0.255
#
slot 2
ip netstream sampler to slot 4
#
ip netstream export template option application-label l2vpn
ip netstream export version 9
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 9000
#
return
This chapter describes how to check the network connectivity through ping and tracert
operations.
The ping command is used to check network connections and detect whether a host is reachable.
The tracert command is used to detect the gateways that packets traverse on the way from
the source host to the destination. It is mainly used to check network reachability and to
locate network faults.
Applicable Environment
When a user cannot access the network, you can use ping and tracert to test the network
connectivity.
Pre-configuration Task
Before configuring Ping or Tracert, complete the following tasks:
Data Preparation
To configure Ping and Tracert, you need the following data.
No. Data
Context
Do as follows on the user end in any view.
Procedure
Step 1 To test the network connection, run ping [ ip ] [ -a source-ip-address | -c count | -d | { -f | ignore-mtu } | -h ttl-value | -i interface-type interface-number [ source ] | -m time | -n | -name | -p pattern | -q | -r | -s packetsize | -t timeout | -tos tos-value | -v | -vpn-instance vpn-instance-name ] * host
The preceding command contains only a part of the parameters. For descriptions of the
parameters of this command, refer to the HUAWEI NetEngine80E/40E Router Command
Reference.
l Status of the responses to the Ping. If the system does not receive a response packet within
the timeout period, it outputs a "Request time out" message; if receiving a response packet,
the system outputs bytes of data, sequence number, TTL, and response time of each response
packet.
l Final statistics, including the number of sent packets, number of received packets, percentage
of unacknowledged packets to all transmitted packets, and the minimum, maximum, and
mean response time.
NOTE
If the destination address of the ping command is a broadcast address, the source address carried in the
Reply message is the broadcast address.
<HUAWEI> ping 202.20.36.25
PING 202.20.36.25: 56 data bytes, press CTRL_C to break
Reply from 202.20.36.25: bytes=56 Sequence=1 ttl=255 time=2 ms
Reply from 202.20.36.25: bytes=56 Sequence=2 ttl=255 time=1 ms
Reply from 202.20.36.25: bytes=56 Sequence=3 ttl=255 time=1 ms
Reply from 202.20.36.25: bytes=56 Sequence=4 ttl=255 time=1 ms
Reply from 202.20.36.25: bytes=56 Sequence=5 ttl=255 time=1 ms
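The final summary statistics can be derived from the per-reply results. A minimal Python sketch of that calculation (field names are illustrative, not taken from the device):

```python
def ping_statistics(rtts_ms, sent):
    """Summarize a ping run. `rtts_ms` lists the round-trip time (ms)
    of each received reply; `sent` is the number of Echo Requests sent."""
    received = len(rtts_ms)
    summary = {
        "transmitted": sent,
        "received": received,
        "loss_pct": round(100.0 * (sent - received) / sent, 2),
    }
    if received:
        # The device reports integer milliseconds, so use floor division.
        summary["min"] = min(rtts_ms)
        summary["avg"] = sum(rtts_ms) // received
        summary["max"] = max(rtts_ms)
    return summary

# The five replies above (2, 1, 1, 1, 1 ms) give 0.00% packet loss and
# round-trip min/avg/max = 1/1/2 ms.
result = ping_statistics([2, 1, 1, 1, 1], sent=5)
```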
----End
Context
Do as follows in any view on the user end. Before running the tracert command to check network
connectivity, you can run the icmp time-exceed command to specify the format of ICMP Time
Exceeded packets.
Procedure
Step 1 (Optional) Run:
icmp time-exceed { extension { compliant | non-compliant } | classic }
NOTE
Run this command in the system view.
Step 2 To locate the fault in the network, run tracert [ -a source-ip-address | -f first-ttl | -m max-ttl | -p port | -q nqueries | -v | -vpn-instance vpn-instance-name | -w timeout ] * host
The preceding command contains only a part of the parameters. For the description of the options
and parameters of this command, refer to the HUAWEI NetEngine80E/40E Router Command
Reference.
----End
Application Environment
You can use the ping lsp or tracert lsp command on the ingress to check connectivity of the
LDP LSP destined for the egress according to the specified FEC and mask. If load balancing is
configured on the ingress, you need to specify the next hop address when checking connectivity
of the specified LDP LSP.
Pre-configuration Tasks
Before detecting the LDP LSP through the ping or tracert operation, complete the following
task:
l Configuring an LDP LSP correctly
Data Preparation
To detect the LDP LSP through the ping or tracert operation, you need the following data.
No. Data
1 destination IPv4 address of an LDP LSP and the mask length of the destination
address
2 (Optional) Source IPv4 address, EXP value and TTL value of the sent Echo
Request packet, reply mode, number of bytes of the sent Echo Request packet,
total number of the sent Echo Request packets, and timeout period of the Echo
Reply packet
Context
Do as follows on each node along the LSP to check connectivity:
Procedure
Step 1 Run:
ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval |
-r reply-mode | -s packet-size | -t time-out | -v ] * ip destination-address mask-
length [ ip-address ] [ nexthop nexthop-address | draft6 ]
For detailed information about each parameter and its description in the ping command, refer
to the HUAWEI NetEngine80E/40E Router Command Reference.
l Information about responses to each Echo Request packet is displayed, including the number
of bytes, sequence number, sending time of the Echo Reply packet. If no Echo Reply packet
is received within a certain period, a message of "Request time out" is displayed.
l Statistics are displayed, including the number of the sent Echo Request packets, number of
the received Echo Reply packets, percentage of the Echo Request packets that are not replied,
and the minimum, maximum and average delay time of sending Echo Reply packets.
<HUAWEI> ping lsp -v ip 3.3.3.3 32
LSP PING FEC: IPV4 PREFIX 3.3.3.3/32 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.3: bytes=100 Sequence=1 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=2 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=3 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=4 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=5 time = 5 ms Return Code 3, Subcode 1
--- FEC: IPV4 PREFIX 3.3.3.3/32 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 4/4/5 ms
----End
Context
Do as follows on each node along the LSP to check connectivity:
Procedure
Step 1 Run:
tracert lsp [ -a source-ip | -exp exp-value | -h ttl-value | -r reply-mode | -t time-
out ] * ip destination-address mask-length [ ip-address ] [ nexthop nexthop-
address ] [ draft6 ]
For detailed information about each parameter and its description in the tracert lsp command,
refer to the HUAWEI NetEngine80E/40E Router Command Reference.
<HUAWEI> tracert lsp ip 3.3.3.3 32 nexthop 66.1.1.2
TTL Replier Time Type Downstream
0 Ingress 66.1.1.2/[17 ]
1 66.1.1.2 230 ms Transit 88.1.1.1/[3 ]
2 3.3.3.3 80 ms Egress
As shown in the preceding command output, you can view information about each node along
the specified LDP LSP and the response time of each hop.
----End
Application Environment
You can use the ping lsp or tracert lsp command on the ingress to check connectivity of the
TE tunnel destined for the egress. If a hot-standby CR-LSP is set up, you can check connectivity
of the hot-standby CR-LSP specified through a command line.
Pre-configuration Tasks
Before checking connectivity of the TE tunnel through the ping or tracert operation, complete
the following task:
Data Preparation
To check connectivity of the TE tunnel through the ping or tracert operation, you need the
following data.
No. Data
2 (Optional) Source IPv4 address, EXP value and TTL value of the sent Echo
Request packet, reply mode, number of bytes of the sent Echo Request packet,
total number of the sent Echo Request packets, and timeout period of the Echo
Reply packet
Context
Do as follows on each node along the TE tunnel to check connectivity:
Procedure
Step 1 Run:
ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval |
-r reply-mode | -s packet-size | -t time-out | -v ] * te tunnel interface-number
[ hot-standby ] [ draft6 ]
--- FEC: RSVP IPV4 SESSION QUERY Tunnel1/0/0 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 8/34/52 ms
----End
Context
Do as follows on each node along the TE tunnel to check connectivity:
Procedure
Step 1 Run:
tracert lsp [ -a source-ip | -exp exp-value | -h ttl-value | -r reply-mode | -t time-
out ] * te tunnel interface-number [ hot-standby ] [ draft6 ]
Gateways that the packets pass along the TE tunnel are displayed and the faulty node is located.
For detailed information about each parameter and its description in the tracert command, refer
to the HUAWEI NetEngine80E/40E Router Command Reference.
<HUAWEI> tracert lsp te tunnel 1/0/0
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1/0/0 , press CTRL_C to
break.
TTL Replier Time Type Downstream
0 Ingress 10.1.2.2/[13312 ]
1 10.1.2.2 63 ms Transit
2 6.6.6.6 93 ms Egress
As shown in the preceding command output, you can view information about each node along
the TE tunnel between the ingress and the egress and the response time of each hop.
----End
Application Environment
In the Kompella VLL networking, you can run the ping command to check connectivity of the
PW. Alternatively, you can run the tracert command to detect the PW to view information about
PEs and P devices along the PW. In addition, you can check connectivity of the Layer 2
forwarding link and locate the faulty node.
Pre-configuration Tasks
Before detecting the VLL network through the ping or tracert operation, complete the following
task:
l Configuring a VLL network correctly
Data Preparation
To detect the VLL network through the ping or tracert operation, you need the following data.
No. Data
2 (Optional) Remote PW ID, number of the sent Echo Request packets, interval for
sending Echo Request packets, number of bytes of the sent Echo Request packet,
and timeout period of sending the Echo Request packet
Context
Do as follows on the PE of a Kompella VLL network to check connectivity:
Procedure
Step 1 Run the following commands as required by the network:
l To check connectivity of the VLL network through the control word channel, run:
ping vc vpn-instance vpn-name local-ce-id remote-ce-id [ -c echo-number | -m
time-value | -s data-bytes | -t timeout-value | -v ] * control-word
l To check connectivity of the VLL network through the MPLS Router Alert channel, run:
ping vc vpn-instance vpn-name local-ce-id remote-ce-id [ -c echo-number | -m
time-value | -s data-bytes | -t timeout-value | -v ] * label-alert
Before using the ping vc vpn-instance command to check connectivity of a VLL network, you
must configure as follows:
l Configure the Kompella VLL network correctly.
l Configure the PW template. In addition, perform the following configurations in the PW
template view:
– Control word channel: Run the control-word command to enable the control word
function.
– MPLS Router Alert channel: Run the control-word command to enable the control word
function.
For detailed information about each parameter and its description in the ping vc vpn-instance
command, refer to the HUAWEI NetEngine80E/40E Router Command Reference.
The following information is displayed in the ping vc vpn-instance command output:
l Information about responses to each Echo Request packet is displayed, including the number
of bytes, sequence number, sending time of the Echo Reply packet. If no Echo Reply packet
is received within a certain period, a message of "Request time out" is displayed.
l Statistics are displayed, including the number of the sent Echo Request packets, number of
the received Echo Reply packets, percentage of the Echo Request packets that are not replied,
and the minimum, maximum and average delay time of sending Echo Reply packets.
<HUAWEI> ping vc ethernet 100 control-word remote 100
Reply: bytes=100 Sequence=1 time = 11 ms
Reply: bytes=100 Sequence=2 time = 4 ms
Reply: bytes=100 Sequence=3 time = 4 ms
Reply: bytes=100 Sequence=4 time = 4 ms
Reply: bytes=100 Sequence=5 time = 4 ms
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 ping statistics---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
----End
Context
Do as follows on the PE of the Kompella VLL network to check connectivity:
Procedure
Step 1 Run either of the following commands as required:
l To check connectivity of the VLL network through the control word channel, run:
tracert vc -vpn-instance vpn-name local-ce-id remote-ce-id [ -exp exp-value | -
f first-ttl | -m max-ttl | -r reply-mode | -t timeout-value ] * control-word
[ full-lsp-path ] [ draft6 ]
l To check connectivity of the VLL network through the label alert channel, run:
tracert vc -vpn-instance vpn-name local-ce-id remote-ce-id [ -exp exp-value | -
f first-ttl | -m max-ttl | -r reply-mode | -t timeout-value ] * label-alert
[ full-lsp-path ] [ draft6 ]
l To check connectivity of the VLL network through the ordinary channel, run:
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-mode | -t timeout-value ] * normal [ remote remote-ip-address ] [ full-lsp-path ] [ draft6 ]
Before using the tracert vc -vpn-instance command to check connectivity of a VLL network,
you must configure as follows:
l Configure the Kompella VLL network correctly.
l Configure the PW template and enable VCCV-PING. In addition, perform the following
configurations in the PW template view:
– Control word channel: Run the control-word command to enable the control word
function.
– MPLS Router Alert channel: Run the control-word command to enable the control word
function.
– Ordinary mode: Run the control-word command to enable the control word function.
The control word channel and the ordinary mode cannot be configured together.
For detailed information about each parameter and its description in the tracert vc -vpn-
instance command, refer to the HUAWEI NetEngine80E/40E Router Command Reference.
<HUAWEI> tracert vc ppp 100 control-word remote 200 full-lsp-path
TTL Replier Time Type Downstream
0 Ingress 20.1.1.2/[17409 3 ]
1 20.1.1.2 110 ms Transit 30.1.1.2/[17408 3 11264 ]
2 30.1.1.2 50 ms Transit 40.1.1.1/[3 ]
3 4.4.4.4 50 ms Egress
As shown in the preceding command output, you can view information about each node along
the PW and the response time of each hop.
----End
Application Environment
l In the PWE3 networking, you can run the ping command to check connectivity of the
PWE3 network. After the PE receives the Echo Request packet, the PE abstracts and sends
FEC information in the packet to the L2VPN plane to determine whether the PE is the
egress. If the PE is the egress, an Echo Reply packet is sent.
– VCCV-PING can be enabled and performed only when the PW template is configured
on the PW.
– Connectivity can be checked in control word mode or label alert mode.
– If the Echo Request packet is replied through the control channel of the application
plane, the label alert function must be enabled on the PW.
– If a multi-hop PW is detected in label alert mode, the Echo Request packet is sent to
the service provider end (SPE). Because the L2VPN plane determines that the SPE is not
the egress, the packet is forwarded and no Echo Reply packet is sent.
l In the PWE3 networking, you can run the tracert command to detect the PW. Then, SPEs
and P devices along the PW of the PWE3 network are displayed; connectivity of the PW
is checked; the faulty node is located.
The TTL value in each successive Echo Request packet is increased by 1. When the TTL
in a received Echo Request packet times out, the transit node sends an Echo Reply packet
containing information about its next hop. The tracert operation terminates when the
packet reaches the egress or when the TTL reaches the upper limit. Different from the
ping operation, the tracert operation can be performed in normal mode. The normal mode
and the control word mode cannot be configured together.
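The TTL-stepping behavior can be modelled in a few lines of Python. This is a toy simulation under stated assumptions (nodes are represented only by their addresses in path order, the last node is the egress, and every node answers); real tracert uses MPLS echo packets with downstream-mapping information:

```python
def trace_pw(path, max_ttl=30):
    """Simulate the tracert loop: probes are sent with TTL = 1, 2, ...;
    the node where each probe expires answers, and the trace stops at
    the egress or when TTL reaches `max_ttl`."""
    hops = []
    for ttl in range(1, max_ttl + 1):
        node = path[ttl - 1]                       # probe expires at the ttl-th node
        role = "Egress" if ttl == len(path) else "Transit"
        hops.append((ttl, node, role))
        if role == "Egress":
            break
    return hops

# For the three-hop reply path shown in the tracert vc sample output:
hops = trace_pw(["20.1.1.2", "30.1.1.2", "4.4.4.4"])
```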
Pre-configuration Tasks
Before detecting the PWE3 network through the ping or tracert operation, complete the following
task:
Data Preparation
To detect the PWE3 network through the ping or tracert operation, you need the following data.
No. Data
2 (Optional) Remote PW ID, number of the sent Echo Request packets, interval for
sending Echo Request packets, number of bytes of the sent Echo Request packet,
and timeout period of sending the Echo Request packet
Context
Do as follows on the PE of a PWE3 network:
Procedure
Step 1 To check connectivity of the PWE3 network, run either of the following commands as required:
l To check connectivity of the PWE3 network through the control word channel, run:
ping vc pw-type pw-id [ -c echo-number | -m time-value | -s data-bytes | -t
timeout-value | -exp exp-value | -r reply-mode | -v ] * control-word [ remote
remote-ip-address peer-pw-id [ draft6 ] ] [ ttl ttl-value ] [ pipe | uniform ]
l To check connectivity of the PWE3 network through the label alert channel, run:
ping vc pw-type pw-id [ -c echo-number | -m time-value | -s data-bytes | -t
timeout-value | -v ] * label-alert [ remote remote-ip-address ] [ draft6 ]
Before using the ping vc command to check connectivity of a PWE3 network, you must
configure as follows:
l Configure the PWE3 network correctly.
l Configure the PW template and enable VCCV-PING. In addition, perform the following
configurations in the PW template view:
– Control word channel: Run the control-word command to enable the control word
function.
– MPLS Router Alert channel: Run the control-word command to enable the control word
function.
For details about parameters in the ping vc command, refer to the HUAWEI NetEngine80E/
40E Router Command Reference.
l Information about responses to each Echo Request packet is displayed, including the number
of bytes, sequence number, sending time of the Echo Reply packet. If no Echo Reply packet
is received within a certain period, a message of "Request time out" is displayed.
l Statistics are displayed, including the number of the sent Echo Request packets, number of
the received Echo Reply packets, percentage of the Echo Request packets that are not replied,
and the minimum, maximum and average delay time of sending Echo Reply packets.
<HUAWEI> ping vc ethernet 100 control-word remote 100
Reply: bytes=100 Sequence=1 time = 11 ms
Reply: bytes=100 Sequence=2 time = 4 ms
Reply: bytes=100 Sequence=3 time = 4 ms
Reply: bytes=100 Sequence=4 time = 4 ms
Reply: bytes=100 Sequence=5 time = 4 ms
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 ping statistics---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 4/5/11 ms
----End
Context
Do as follows on the PE of a PWE3 network:
Procedure
Step 1 To locate the faulty node on a PWE3 network, run either of the following commands as required:
l To check connectivity of the PWE3 network through the control word channel, run:
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-
mode | -t timeout-value ] * control-word [ draft6 ] [ full-lsp-path ] [ pipe |
uniform ]
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-
mode | -t timeout-value ] * control-word remote remote-ip-address [ full-lsp-
path ] [ pipe | uniform ]
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-
mode | -t timeout-value ] * control-word remote remote-pw-id draft6 [ full-lsp-
path ] [ pipe | uniform ]
l To check connectivity of the PWE3 network through the label alert channel, run:
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-mode | -t timeout-value ] * label-alert [ remote remote-ip-address ] [ full-lsp-path ] [ draft6 ]
Before using the tracert vc command to check connectivity of a PWE3 network, you must
configure as follows:
As shown in the preceding command output, you can view information about each node along
the PW and the response time of each hop.
----End
Application Environment
You can run the ping or tracert command to check connectivity of a VPLS network. Either
command can be used to detect only the single-hop PW. On a Hierarchical Virtual Private LAN
Service (HVPLS) network, the ping or tracert operation terminates at the first hop. You can
detect a specified PW by setting a PW ID. If the PW ID is not set, the VSI ID is used.
You can use the ping operation but not the tracert operation to detect an inter-AS VPLS network.
Pre-configuration Tasks
Before detecting the VPLS network through the ping or tracert operation, complete the following
task:
Data Preparation
To detect the VPLS network through the ping or tracert operation, you need the following data.
No. Data
1 (Optional) In Martini mode: VSI name, IP address of the remote PW, and local
PW ID
3 (Optional) Number of the sent Echo Request packets, interval for sending Echo
Request packets, number of bytes of the sent Echo Request packet, timeout period
of sending the Echo Request packet, reply mode, and EXP value of the sent Echo
Request packet
Context
Do as follows on the PE of a VPLS network:
Procedure
Step 1 To check connectivity of the VPLS network, run either of the following commands as required:
l In Kompella mode, run:
ping vpls [ -c echo-number | -m time-value | -s data-bytes | -t timeout-value |
-r reply-mode | -exp exp-value | -v ] * vsi vsi-name local-site-id remote-site-
id
For detailed information about each parameter and its description in the ping vpls command,
refer to the HUAWEI NetEngine80E/40E Router Command Reference.
The following information is displayed in the ping vpls command output:
l Information about responses to each Echo Request packet is displayed, including the number
of bytes, sequence number, sending time of the Echo Reply packet. If no Echo Reply packet
is received within a certain period, a message of "Request time out" is displayed.
l Statistics are displayed, including the number of the sent Echo Request packets, number of
the received Echo Reply packets, percentage of the Echo Request packets that are not replied,
and the minimum, maximum and average delay time of sending Echo Reply packets.
<HUAWEI> ping vpls -c 10 -m 10 -s 65 -t 100 -v vsi test 10 10
Reply: bytes=65 Sequence=1 time = 31 ms Return Code 3, Subcode 1
Reply: bytes=65 Sequence=2 time = 15 ms Return Code 3, Subcode 1
Reply: bytes=65 Sequence=3 time = 32 ms Return Code 3, Subcode 1
Reply: bytes=65 Sequence=4 time = 15 ms Return Code 3, Subcode 1
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 ping statistics
10 packet(s) transmitted
10 packet(s) received
0.00% packet loss
round-trip min/avg/max = 15/21/32 ms
----End
Context
Do as follows on the PE of a VPLS network:
Procedure
Step 1 To locate the faulty node on the VPLS network, run either of the following commands as
required:
l In Kompella mode, run:
tracert vpls [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-mode | -t
timeout-value ] * vsi vsi-name local-site-id remote-site-id [ full-lsp-path ]
For detailed information about each parameter and its description in the tracert vpls command,
refer to the HUAWEI NetEngine80E/40E Router Command Reference.
<HUAWEI> tracert vpls vsi test 10 10 full-lsp-path
TTL Replier Time Type Downstream
0 Ingress 20.1.1.2/[17409 3 ]
1 20.1.1.2 110 ms Transit 30.1.1.2/[17408 3 11264 ]
2 30.1.1.2 50 ms Transit 40.1.1.1/[3 ]
3 4.4.4.4 50 ms Egress
As shown in the preceding command output, you can view information about each node along
the PW and the response time of each hop.
----End
Application Environment
After a VPN is correctly configured, you can run the ping lsp command on the PE to ping the
peer PE to check connectivity of the LSP of the BGP/MPLS IP VPN.
The public network tunnel can be:
l Equal-cost load balancing LDP LSPs
l TE tunnels
l Backup VPN FRR tunnels
The private network routes are generated through iteration of public network routes.
If you ping a CE address while the link between the CE and the PE is faulty, the ping operation
can still succeed, because what is actually detected is the end-to-end link between the PEs.
Pre-configuration Tasks
Before detecting the BGP/MPLS IP VPN through the ping operation, complete the following
task:
l Configuring a BGP/MPLS IP VPN correctly
Data Preparation
To detect the BGP/MPLS IP VPN through the ping operation, you need the following data.
No. Data
2 (Optional) Source IPv4 address, EXP value and TTL value of the sent Echo
Request packet, reply mode, number of bytes of the sent Echo Request packet,
total number of the sent Echo Request packets, and timeout period of the Echo
Reply packet
Context
Do as follows on the PE of a BGP/MPLS IP VPN:
Procedure
Step 1 Run:
ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval |
-r reply-mode | -s packet-size | -t time-out | -v ] * vpn-instance vpn-name remote
remote-address mask-length
----End
Applicable Environment
After the VPLS network is configured, an NQA VPLS MAC VSI ping test or an NQA VPLS
MAC VSI trace test can be initiated to check the connectivity of Layer 2 forwarding links on
the VPLS network.
VPLS MAC ping can be used to check whether a reachable VPLS path to the destination MAC
address exists on the VPLS. However, it cannot reflect the actual path along which packets are
forwarded. If the network has faults, VPLS MAC trace can be used to locate faults.
Pre-configuration Tasks
l Configuring a VPLS network
Data Preparation
To configure VPLS MAC ping and VPLS MAC trace to check the VPLS network, you need the
following data.
No. Data
2 (Optional) VLAN ID
3 (Optional) For VPLS MAC ping: Number of sent Request packets, size of the
Request packet, interval for sending Request packets, timeout period for waiting
for a Reply packet, priority of the packet, and reply mode
4 (Optional) For VPLS MAC trace: Size of the Request packet, timeout period for
waiting for a Reply packet, priority of the packet, initial TTL, maximum TTL, and
reply mode
Context
Do as follows on the PE of the VPLS network whose connectivity is to be checked.
Procedure
Step 1 Run:
ping vpls mac mac-address vsi vsi-name [ vlan vlan-id ] [ -c count | -m time-value | -s packsize | -t timeout | -exp exp | -r replymode | -h ttl ] *
or:
ping vpls mac mac-address vsi vsi-name rapid [ vlan vlan-id ] [ -c count_rapid | -s packsize | -t timeout | -exp exp | -r replymode | -h ttl ]
For details about parameters in the ping command, refer to the Command Reference.
l Response to each ping packet: If no Reply packet is received within a certain period, the
message saying "Request time out" is displayed. Otherwise, the bytes of the data, sequence
number of the packet, TTL value, and response time carried in the Reply packet are displayed.
l Final statistics, including the number of sent packets, number of received Reply packets,
percentage of non-response packets, and the minimum, maximum, and average values of the
response time.
l If rapid is configured in the ping command, only the following summary statistics are
displayed: the numbers of sent and received packets, the percentage of unanswered packets,
and the minimum, maximum, and average response times.
<HUAWEI> ping vpls mac 00e0-5952-6f01 vsi v123
Ping mac 00e0-5952-6f01 vsi v123 : 100 data bytes , press CTRL_C to break
Reply from 10.1.1.1 : bytes=100 sequence=1 time = 1ms
Reply from 10.1.1.1 : bytes=100 sequence=2 time = 1ms
Reply from 10.1.1.1 : bytes=100 sequence=3 time = 2ms
Reply from 10.1.1.1 : bytes=100 sequence=4 time = 3ms
Reply from 10.1.1.1 : bytes=100 sequence=5 time = 2ms
The IP address of the PE is 5.5.5.9 and the interface on the PE is
GigabitEthernet5/0/0.100.
--- vsi : v123 00e0-5952-6f01 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/2/3 ms
<HUAWEI> ping vpls mac 00e0-5952-6f01 vsi v123 rapid
Ping mac 00e0-5952-6f01 vsi v123 : 130 data bytes , press CTRL_C to break !!!!!
--- vsi : v123 00e0-5952-6f01 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/2/3 ms
----End
Context
Do as follows on the PE of the VPLS network whose connectivity is to be checked.
Procedure
Step 1 Run:
trace vpls mac mac-address vsi vsi-name [ vlan vlan-id ] [-t timeout | -f first-
ttl | -m max-ttl | -exp exp | -r replymode ] *
For details about parameters in the trace command, refer to the Command Reference.
Info: Succeeded in tracing the destination address 00e0-5952-6f01.
Based on the preceding result, you can view gateways through which the packet passes from the
source address to the MAC address of the specified VSI and the response time of each hop.
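Based on the syntax above, a representative invocation might look as follows (the MAC address
and VSI name are illustrative and match the earlier ping vpls example):

```
<HUAWEI> trace vpls mac 00e0-5952-6f01 vsi v123
```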
----End
Applicable Environment
The traditional ping command detects faults on an IP network; the ping lsp command detects
faults on an MPLS network. On an MPLS network, packets sent by the ping or ping lsp
command are forwarded based on tunnel IDs. Even if MPLS forwarding fails, the packets
are not forwarded through IP. Therefore, when an MPLS link becomes faulty, it is difficult to
determine whether the fault occurs on the IP network or on the MPLS network. To solve this
problem, you can specify the ip-forwarding parameter to forcibly forward ping packets
through IP, so that you can determine whether the fault occurs on the IP network.
Pre-configuration Tasks
Before detecting an MPLS network through a ping operation, complete the following tasks:
Data Preparation
To detect an MPLS network through a ping operation, you need the following data.
No. Data
1 Destination IP address
Procedure
Step 1 To check whether IP forwarding on an MPLS network is normal, run: ping [ ip ] [ -a source-
ip-address | -c count | -d | -f | -h ttl-value | -i interface-type interface-number [ source ] | -m
time | -n | -name | -p pattern | -q | -r | -s packetsize | -system-time | -t timeout | -tos tos-value |
-v | -vpn-instance vpn-instance-name ] * host ip-forwarding
l Response to each ping message: If the timer expires and no Echo Reply message is received,
the message "Request time out" is displayed; if an Echo Reply message is received, the number
of data bytes, the sequence number of the message, and the response time are displayed.
l Final statistics: including the number of sent packets, number of received Reply packets,
percentage of unanswered packets, and minimum, maximum, and average values of the
response time.
<HUAWEI> ping 10.1.1.2 ip-forwarding
PING 10.1.1.2: 56 data bytes, press CTRL_C to break
Reply from 10.1.1.2: bytes=56 Sequence=1 ttl=255 time=170 ms
Reply from 10.1.1.2: bytes=56 Sequence=2 ttl=255 time=30 ms
Reply from 10.1.1.2: bytes=56 Sequence=3 ttl=255 time=30 ms
Reply from 10.1.1.2: bytes=56 Sequence=4 ttl=255 time=50 ms
Reply from 10.1.1.2: bytes=56 Sequence=5 ttl=255 time=50 ms
--- 10.1.1.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 30/66/170 ms
----End
Applicable Environment
Each trunk member interface transmits services through a separate path. Therefore, the delay,
jitter, and packet loss percentage on each path is unique. When the quality of services on trunk
member links declines, you can run the trunk member-port-inspect command to enable the
detection of member interfaces and then run the ping command to detect whether the network
connectivity of each member interface is normal.
Pre-configuration Tasks
Before detecting trunk member links through a ping operation, complete the following tasks:
l Configuring IP addresses and IGP routes so that devices can communicate with each other
Data Preparation
To detect trunk member links through a ping operation, you need the following data.
No. Data
1 IP address of the peer end and the outbound interface of the local end
Context
Before performing the ping operation to detect trunk member links, you must run the trunk
member-port-inspect command on the local and peer devices to enable the detection of trunk
member interfaces.
NOTE
The trunk member-port-inspect command takes effect on all Layer 3 trunk member interfaces. Therefore,
disable the function immediately after the detection is complete to save system resources.
Procedure
Step 1 To detect the connectivity of Layer 3 trunk member interfaces on the MPLS network, run:
ping [ ip ] [ -a source-ip-address | -c count | -d | -f | -h ttl-value | -i interface-type interface-
number [ source ] | -m time | -n | -name | -p pattern | -q | -r | -s packetsize | -system-time | -t
timeout | -tos tos-value | -v | -vpn-instance vpn-instance-name ] * host [ ip-forwarding ]
NOTE
This command can detect only the connectivity of the link between directly-connected trunk member
interfaces.
The preceding command contains only a part of the parameters. For descriptions of the
parameters of this command, refer to the HUAWEI NetEngine80E/40E Router Command
Reference.
Information displayed in the ping command output is as follows:
l Response to each ping message: If the timer expires and no Echo Reply message is received,
the message "Request time out" is displayed; if an Echo Reply message is received, the
number of data bytes, the sequence number of the message, and the response time are
displayed.
l Final statistics: The number of sent packets, number of received response packets, percentage
of non-response packets, and minimum, maximum and average values of the response time
are displayed.
<HUAWEI> ping -i gigabitethernet 2/0/0 10.1.1.2
PING 10.1.1.2: 56 data bytes, press CTRL_C to break
Reply from 10.1.1.2: bytes=56 Sequence=1 ttl=255 time=170 ms
Reply from 10.1.1.2: bytes=56 Sequence=2 ttl=255 time=30 ms
Reply from 10.1.1.2: bytes=56 Sequence=3 ttl=255 time=30 ms
Reply from 10.1.1.2: bytes=56 Sequence=4 ttl=255 time=50 ms
Reply from 10.1.1.2: bytes=56 Sequence=5 ttl=255 time=50 ms
--- 10.1.1.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 30/66/170 ms
----End
Applicable Environment
Figure 9-1 shows the typical network topology of the MH-PW using LDP as the signaling
protocol. An MH-PW is used in the following situations:
l Two PEs are not in the same AS. No signaling connection or tunnel can be set up between
the two PEs.
l Two PEs have different signaling protocols, for example, LDP on one PE and Resource
Reservation Protocol (RSVP) on the other PE.
l If the access device is capable of running MPLS but is unable to create a large number of
LDP sessions, the User Facing Provider Edge (UFPE) can be used as a UPE, whereas the
high performance service PE (SPE) can be used as the switching node of the LDP sessions
(similar to a signaling reflector).
With MH-PW, devices on networks of different types can communicate with each other, which
is an improvement and an enhancement of the original data network. However, if devices from
different vendors or devices running different versions that function as SPEs adopt different
TTL propagation modes, the traditional ping or tracert operation cannot detect the connectivity
of MH-PWs between these devices.
NOTE
The VCCV ping or tracert operation provided by the NE80E/40E can detect the connectivity of
the MH-PW. After the TTL propagation mode of the SPE is obtained, a ping or tracert operation
is initiated with the obtained TTL propagation mode applied to the ping or tracert packets.
NOTE
It is recommended that the TTL propagation modes be consistent on the entire network. Otherwise, the ping or
tracert operation may fail.
As shown in Figure 9-1:
l CE1 and CE2 are connected to T-PE1 and T-PE2 respectively in Ethernet mode.
l A PW is established between T-PE1 and S-PE1, S-PE1 and S-PE2, and S-PE2 and T-PE2
separately. An MH-PW is therefore formed.
l The MH-PW is enabled.
The ping or tracert operation can be used to check the connectivity of the network shown in
Figure 9-1 in control word, normal, or label alert mode.
The ping and tracert operations can detect the connectivity of MH-PWs in the same AS. This
improves the maintainability of the MH-PW and ensures the service quality.
Pre-configuration Tasks
Before detecting an MH-PW through the ping and tracert operations, complete the following
tasks:
l Configuring an IGP for the MPLS backbone network of each AS to ensure the IP
connectivity of the backbone network within an AS
l Configuring basic MPLS functions on the MPLS backbone network of each AS
l Configuring MPLS LDP and establishing the LDP LSP for the MPLS backbone of each
AS
Data Preparation
To detect an MH-PW through the ping and tracert operations, you need the following data.
No. Data
1 Destination IP address
Context
The ping operation supports the following TTL propagation modes:
l Pipe
In this mode, the entire MPLS domain is regarded as one hop. When a probe packet passes
through the MPLS domain, the IP TTL of the probe packet is reduced by 1 on the
ingress and egress respectively.
l Uniform
In this mode, the IP TTL of the probe packet is reduced by 1 each time it passes through
one hop in the MPLS domain.
Procedure
Step 1 To check connectivity of the MH-PW, run either of the following commands as required:
l To check connectivity of the MH-PW through the control word channel, run:
l To check connectivity of the MH-PW through the label alert channel, run:
ping vc pw-type pw-id [ -c echo-number | -m time-value | -s data-bytes | -t
timeout-value | -exp exp-value | -r reply-mode | -v ] * label-alert [ no-control-
word ] [ remote remote-ip-address | draft6 ] * [ pipe | uniform ]
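Based on the syntax above, a representative invocation through the label alert channel might be
(the PW type and PW ID are illustrative):

```
<HUAWEI> ping vc ethernet 100 label-alert
```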
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 ping statistics---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 4/5/11 ms
----End
Context
The tracert operation supports the following TTL propagation modes:
l Pipe
In this mode, the entire MPLS domain is regarded as one hop. When a probe packet passes
through the MPLS domain, the IP TTL of the probe packet is decreased by 1 on the
ingress and egress separately.
l Uniform
In this mode, the IP TTL of the probe packet is decreased by 1 each time it passes through
one hop in the MPLS domain.
Procedure
Step 1 To check connectivity of the MH-PW, run either of the following commands as required:
l To check connectivity of the MH-PW through the control word channel, run:
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-
mode | -t timeout-value ] * control-word [ draft6 ] [ full-lsp-path ] [ pipe |
uniform ]
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-
mode | -t timeout-value ] * control-word remote remote-ip-address [ full-lsp-
path ] [ pipe | uniform ]
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-
mode | -t timeout-value ] * control-word remote remote-pw-id draft6 [ full-lsp-
path ] [ pipe | uniform ]
l To check connectivity of the MH-PW through the label alert channel, run:
tracert vc pw-type pw-id [ -exp exp-value | -f first-ttl | -m max-ttl | -r reply-
mode | -t timeout-value ] * label-alert [ remote remote-ip-address ] [ full-lsp-
path ] [ draft6 ] [ pipe | uniform ]
For details on parameters of the tracert vc command, refer to the NE80E/40E - Command
Reference.
<HUAWEI> tracert vc ppp 100 control-word remote 200 draft6 full-lsp-path
TTL Replier Time Type Downstream
0 Ingress 10.1.1.2/[1025 ]
1 10.1.1.2 230 ms Transit 20.1.1.2/[3 ]
2 20.1.1.2 230 ms Transit 30.1.1.2/[3 ]
3 30.1.1.2 100 ms Transit 40.1.1.2/[3 ]
4 40.1.1.2 150 ms Egress
In the preceding command output, you can view each node along the MH-PW and the response
time of each node.
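For the label alert channel, a representative invocation might be as follows (the PW type and
PW ID are illustrative):

```
<HUAWEI> tracert vc ethernet 100 label-alert full-lsp-path
```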
----End
Applicable Environment
Figure 9-2 is a typical PWE3 networking diagram.
(Figure 9-2: CE1 - AC - PE1 - VC carried in a tunnel across the MPLS network - PE2 - AC - CE2)
Service ping is used to detect the consistency of PE configurations on the network to ensure that
service connections are established properly.
Previously, configuration consistency was checked by network maintenance engineers, which
is error-prone when there are many PWE3 services on the device.
Pre-configuration Tasks
Before detecting the PWE3 network through a service ping operation, complete the following
task:
l Configuring a PWE3 network correctly
Data Preparation
To detect the PWE3 network through a service ping operation, you need the following data.
No. Data
Context
Do as follows on the PE on the PWE3 network:
Procedure
Step 1 Run:
ping vpn-config peer-address peer-address interface { interface-type interface-
number | interface-name } [ secondary ] [ local ] [ remote ]
The command output is as follows. For description of fields in the command output, see the
ping vpn-config command.
<HUAWEI> ping vpn-config peer-address 200.0.0.5 interface GigabitEthernet1/0/8
secondary
VPN-CONFIG PING: Press CTRL_C to break.
Result Detail: Request Sent - Reply Received
Local VPN description:
Remote VPN description:
PW State: Up
local remote
------------------------------------------------
VPN Type: PWE3 PWE3
VSI Name: N/A N/A
VSI ID: N/A N/A
Admin State: N/A N/A
Oper State: N/A N/A
MTU: 1500 1500
CE Count: 1 1
Control Word: Disable Disable
Primary Or Secondary: Secondary N/A
----End
Applicable Environment
Figure 9-3 is a networking diagram of the VLL accessing the VPLS network.
Figure 9-3 Networking diagram of the VLL accessing the VPLS network
Service ping is used to detect the consistency of PE configurations on the network to ensure that
service connections are established properly.
Pre-configuration Tasks
Before detecting the VLL accessing the VPLS network through a service ping operation,
complete the following task:
Data Preparation
To detect the VLL accessing the VPLS network through a service ping operation, you need the
following data.
No. Data
Context
Do as follows on the PE on the VLL accessing the VPLS network:
Procedure
Step 1 Run:
ping vpn-config peer-address peer-address interface { interface-type interface-
number | interface-name } [ secondary ] [ local ] [ remote ]
The command output is as follows. For description of fields in the command output, see the
ping vpn-config command.
<HUAWEI> ping vpn-config peer-address 200.0.0.2 interface GigabitEthernet 3/0/7.1
VPN-CONFIG PING: Press CTRL_C to break.
Result Detail: Request Sent - Reply Received
Local VPN description:
Remote VPN description:
PW State: Up
local remote
------------------------------------------------
VPN Type: PWE3 Martini VPLS
VSI Name: N/A vpls1
VSI ID: N/A 10089
Admin State: N/A Up
Oper State: N/A Up
MTU: 1500 1500
CE Count: 1 0
Control Word: Disable N/A
Primary Or Secondary: Primary N/A
----End
Applicable Environment
Before running the ping command to check network connectivity, you must set many
parameters. Using the ping smart command, you can easily start a ping operation; the ping
module automatically arranges and combines the relevant ping parameters. This helps detect
link faults to some extent.
Pre-configuration Tasks
Before configuring Smart Ping, complete the following tasks:
l Configuring an IP address
l Configuring a routing protocol to ensure that the IP route is reachable between every two
nodes.
Data Preparation
To configure Smart Ping, you need the following data.
No. Data
1 Destination IP address
Procedure
Step 1 Run:
ping smart [ -a source-ip-address | -vpn-instance vpn-instance-name | -c count ]*
host
l Information about response messages after the ping smart command is run, including the
complete list of value combinations of the Timeout, Interval, Payload, Size, and ToS
parameters. If no ICMP Response message is returned within the specified period,
"Timeout" is displayed. Otherwise, the response times of the three messages sent for each
parameter combination are displayed.
l Information about final statistics after the ping smart command is run, including the number
of sent packets, number of received response packets, percentage of non-response packets,
and minimum, maximum and average values of response time.
<HUAWEI> ping smart 10.1.1.2
-----------------------------------------------------------------
Timeout  Interval  Payload  Size    Tos  Reply1  Reply2  Reply3
(ms)     (ms)               (byte)       (ms)    (ms)    (ms)
-----------------------------------------------------------------
2000 20 0x00 64 0 5ms 3ms 1ms,
2000 500 0x00 64 1 1ms 1ms 1ms,
10000 20 0x00 64 2 1ms 12ms 1ms,
10000 500 0x00 64 3 1ms 2ms 1ms,
2000 20 0xFF 64 4 1ms 1ms 1ms,
2000 500 0xFF 64 5 1ms 88ms 1ms,
10000 20 0xFF 64 6 1ms 1ms 1ms,
10000 500 0xFF 64 7 27ms 1ms 28ms,
2000 20 0x5a 64 0 1ms 1ms 1ms,
2000 500 0x5a 64 1 1ms 1ms 1ms,
10000 20 0x5a 64 2 2ms 19ms 1ms,
10000 500 0x5a 64 3 1ms 1ms 2ms,
2000 20 0xa5 64 4 1ms 1ms 1ms,
2000 500 0xa5 64 5 2ms 1ms 1ms,
10000 20 0xa5 64 6 1ms 2ms 1ms,
10000 500 0xa5 64 7 3ms 6ms 3ms,
2000 20 0x00 256 0 2ms 1ms 1ms,
2000 500 0x00 256 1 1ms 1ms 1ms,
10000 20 0x00 256 2 2ms 3ms 1ms,
10000 500 0x00 256 3 2ms 1ms 2ms,
2000 20 0xFF 256 4 2ms 1ms 2ms,
2000 500 0xFF 256 5 2ms 1ms 1ms,
10000 20 0xFF 256 6 1ms 1ms 1ms,
10000 500 0xFF 256 7 1ms 1ms 1ms,
2000 20 0x5a 256 0 1ms 2ms 1ms,
2000 500 0x5a 256 1 1ms 1ms 3ms,
10000 20 0x5a 256 2 2ms 1ms 1ms,
10000 500 0x5a 256 3 1ms 1ms 1ms,
2000 20 0xa5 256 4 2ms 1ms 1ms,
2000 500 0xa5 256 5 2ms 2ms 1ms,
10000 20 0xa5 256 6 2ms 1ms 2ms,
10000 500 0xa5 256 7 2ms 3ms 2ms,
2000 20 0x00 1500 0 6ms 16ms 5ms,
2000 500 0x00 1500 1 6ms 6ms 6ms,
10000 20 0x00 1500 2 6ms 7ms 10ms,
10000 500 0x00 1500 3 6ms 10ms 6ms,
2000 20 0xFF 1500 4 5ms 6ms 6ms,
2000 500 0xFF 1500 5 6ms 6ms 6ms,
10000 20 0xFF 1500 6 5ms 5ms 5ms,
10000 500 0xFF 1500 7 16ms 5ms 6ms,
2000 20 0x5a 1500 0 6ms 6ms 6ms,
2000 500 0x5a 1500 1 6ms 8ms 6ms,
10000 20 0x5a 1500 2 6ms 9ms 5ms,
10000 500 0x5a 1500 3 37ms 5ms 9ms,
2000 20 0xa5 1500 4 31ms 5ms 6ms,
2000 500 0xa5 1500 5 7ms 7ms 7ms,
10000 20 0xa5 1500 6 6ms 6ms 13ms,
10000 500 0xa5 1500 7 8ms 9ms 8ms,
2000 20 0x00 4478 0 10ms 8ms 7ms,
2000 500 0x00 4478 1 9ms 8ms 9ms,
10000 20 0x00 4478 2 8ms 7ms 7ms,
10000 500 0x00 4478 3 9ms 8ms 9ms,
2000 20 0xFF 4478 4 8ms 33ms 8ms,
2000 500 0xFF 4478 5 9ms 9ms 8ms,
10000 20 0xFF 4478 6 7ms 13ms 9ms,
10000 500 0xFF 4478 7 8ms 8ms 8ms,
2000 20 0x5a 4478 0 8ms 9ms 7ms,
2000 500 0x5a 4478 1 8ms 8ms 8ms,
10000 20 0x5a 4478 2 8ms 43ms 8ms,
10000 500 0x5a 4478 3 8ms 10ms 8ms,
----End
10 LLDP Configuration
Network devices obtain the status of their directly-connected devices through the Link Layer
Discovery Protocol (LLDP).
10.1 Introduction
LLDP is a Layer 2 discovery protocol defined in the IEEE 802.1ab.
10.2 Configuring LLDP
In addition to describing how to enable LLDP, this section introduces the logical relationships
between configuration tasks.
10.3 Maintaining LLDP
This section describes how to maintain LLDP, including debugging and monitoring LLDP, and
clearing LLDP statistics.
10.4 Configuration Examples
You can understand the configuration procedures through the configuration flowchart. This
section describes the networking requirements, configuration roadmap, and configuration notes.
10.1 Introduction
LLDP is a Layer 2 discovery protocol defined in the IEEE 802.1ab.
Background
At present, the Ethernet technology is widely used in the LAN and Metropolitan Area Network
(MAN). The increasing demand for large-scale networks poses higher requirements on the
capability of the Network Management System (NMS). For example, the NMS should address
problems such as obtaining topology of interconnected devices and conflicts in configurations
on different devices.
Currently, NMS software uses an automated discovery function to track topology changes.
However, most NMS software can at best analyze the Layer 3 network topology and group
devices into different IP subnets. Data provided by the NMS concern only basic events such as
adding or deleting devices. The NMS cannot determine which interfaces on a device are used
to connect to another device. That is, the NMS can neither locate a device nor judge its
operation mode.
Introduction to LLDP
The Layer 2 Discovery (L2D) protocol can precisely obtain information about which interfaces
are attached to the devices and which devices are connected to other devices. In addition, L2D
displays the paths between the client, switch, router, application server, and network server. The
preceding detailed information helps find the root cause for the network failure.
The Link Layer Discovery Protocol (LLDP) is an L2D protocol defined in IEEE 802.1ab.
LLDP specifies that a device stores status information on all its interfaces and can advertise
its status to neighbor stations. Interfaces can also send status update information to neighbor
stations as required. The neighbor stations store the received information in the standard
Management Information Base (MIB) of the Simple Network Management Protocol (SNMP),
where the NMS can query the Layer 2 information. As specified in the IEEE 802.1ab standard,
the NMS can also find improper Layer 2 configurations based on the information provided by
LLDP.
When LLDP runs on devices, the NMS can obtain Layer 2 information about all the devices it
connects to, together with detailed network topology information. This expands the scope of
network management. LLDP also helps find improper configurations on the network and report
them to the NMS so that they can be rectified in a timely manner.
MIB
MIB is short for the Management Information Base. MIB is classified into the LLDP local system
MIB and the LLDP remote system MIB.
The LLDP local system MIB stores information about the local station, including the chassis
ID, port ID, system name, system description, port description, system capabilities, and
management address.
The LLDP remote system MIB stores information about adjacent stations, including the chassis
ID, port ID, system name, system description, port description, system capabilities, and
management address.
MSAP Identifier
The MAC service access point (MSAP) identifier consists of the chassis ID and the port ID. The
identifier is used as an index in the MIB.
LLDP Agent
An LLDP agent is the protocol entity that manages LLDP operations for an interface.
An LLDP agent performs the following tasks:
l Maintains current information in the LLDP local system MIB.
l Extracts and sends LLDP local system MIB information to neighbor stations when the
status of the local device changes. An LLDP agent also extracts and sends LLDP local
system MIB information to neighbor stations at regular intervals when no status change
occurs on the local device.
l Identifies and processes received LLDP packets.
l Maintains current information in the LLDP remote system MIB.
l Sends LLDP traps to the NMS when the status of LLDP local system MIB or the LLDP
remote system MIB changes.
LLDP Traps
When the LLDP local system MIB or the LLDP remote system MIB changes, the device sends
traps to the NMS for updating the topology. The traps can be triggered in the following cases:
l LLDP is enabled or disabled globally.
l The local management address changes.
l Neighbor information changes.
The LLDP alarm function is of global significance for the router. That is, it provides the alarm
function on all the interfaces.
Applicable Environment
LLDP is used to obtain neighbor information and discover the topology. As shown in Figure
10-1, when the NMS needs to collect the topology information of Router A and Router B, you
need to enable LLDP on Router A and Router B so that they can exchange their status
information and the NMS can obtain the topology information. You also need to set the LLDP
management address on Router A and Router B so that the NMS can pinpoint them. Router A
or Router B sends traps to the NMS for updating the topology when any of the following
conditions is met:
l LLDP is enabled or disabled globally.
l Management address changes.
l Neighbor information changes.
This requires that the LLDP alarm function be enabled on Router A or Router B.
(Figure 10-1: Router A and Router B exchange LLDPDUs with each other and report to the
NMS through SNMP)
Pre-configuration Tasks
Before configuring LLDP, complete the following tasks:
The LLDP management address carried in an LLDP frame is used to identify a device. Therefore,
select an IP address that the NMS can identify and manage easily. The IP address, which can be
a management IP address, must be configured on the device before it is configured as the LLDP
management address.
Data Preparation
To configure LLDP, you need the following data.
No. Data
5 Delay for the LLDP module on the interface to be re-enabled from the disabled state
Context
The LLDP alarm function must be enabled on the router so that the router can send traps to the
NMS for updating the topology when LLDP is enabled or disabled, the management address of
LLDP changes, or the neighbor information changes.
Do as follows on Router A and Router B:
Procedure
Step 1 Run:
system-view
Step 2 Run:
snmp-agent trap enable lldp
----End
Context
When the router and its neighbors are all enabled with LLDP, the router notifies the neighbors
of its status and obtains the status of the neighbors by exchanging LLDP packets. The NMS can
obtain information about Layer 2 connection status of the router and then analyze the network
topology.
Do as follows on Router A and Router B:
Procedure
Step 1 Run:
system-view
Step 2 Run:
lldp enable
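Put together, the steps above might look as follows on Router A:

```
<HUAWEI> system-view
[HUAWEI] lldp enable
```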
----End
Context
NOTE
You can disable LLDP on an interface only after LLDP is enabled globally on the router.
When LLDP is enabled globally on the router, all the interfaces are enabled with LLDP by
default. For the interfaces that do not need the LLDP function, you can run the undo lldp
enable command in the interface view to disable the LLDP function on these interfaces.
Do as follows on the interfaces that connect Router A and Router B to devices that do not need
the LLDP function:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface { ethernet | gigabitethernet | pos } interface-number
Step 3 Run:
undo lldp enable
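Put together, the steps above might look as follows (the interface number is illustrative):

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] undo lldp enable
```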
----End
Context
NOTE
You can configure the management address of LLDP only after LLDP is enabled globally on the router.
An LLDP management address must be a valid unicast IP address that exists on the device.
Procedure
Step 1 Run:
system-view
----End
Procedure
l (Optional) Setting the interval for sending LLDP packets.
Do as follows on Router A and Router B as required:
1. Run:
system-view
– If the value of interval is smaller than or equal to 32768, you can increase the value
of interval regardless of the value of delay.
– If the value of interval is to be reduced, it must remain at least four times the value
of delay. Therefore, when the value of interval to be set is less than four times the
value of delay, first adjust the value of delay to be smaller than or equal to a quarter
of the new value of interval. After that, the value of interval can be set.
l (Optional) Setting the delay in sending LLDP packets
1. Run:
system-view
The delay in sending LLDP packets must be set properly according to the network load.
The greater the value, the less frequently LLDP packets are exchanged, which saves
system resources. However, if the value is too great, the router cannot notify its
neighbors of status changes in a timely manner, and the NMS cannot promptly discover
topology changes on the network. The smaller the value, the more frequently the local
status information is sent to the neighbors, which helps the NMS promptly discover
topology changes. However, if the value is too small, LLDP packets are exchanged too
frequently, which increases the burden on the system and wastes resources.
You must consider the value of interval when adjusting the value of delay because
the two values affect each other.
– If the value of delay is greater than or equal to 1, you can decrease the value of
delay regardless of the value of interval.
– If the value of delay is to be increased, it must remain no greater than a quarter of
the value of interval. Therefore, when the value of delay to be set is greater than a
quarter of the value of interval, first adjust the value of interval to be greater than or
equal to four times the new value of delay. After that, the value of delay can be set.
l (Optional) Setting the time multiplier of device information held in the neighbor stations.
1. Run:
system-view
The time multiplier of device information held in the neighbor stations is set.
The greater the value is, the longer device information is held in the neighbor stations.
l (Optional) Setting the delay in re-enabling LLDP on an interface.
1. Run:
system-view
delay is configured to control the status change of LLDP on an interface. This reduces
the topology flapping of the neighbor stations.
l (Optional) Setting the delay in sending traps of changes in neighbor information to the
NMS.
1. Run:
system-view
The delay in sending traps of changes in neighbor information to the NMS is set.
When the neighbor information changes frequently, you can prolong the delay so that
the router sends traps to the NMS less frequently. This suppresses the topology
flapping.
----End
Context
Run the following command to check the previous configuration.
Procedure
l Run the display lldp local [ interface interface-type interface-number ] command to check
the status of LLDP on the device.
----End
Example
After the configuration is successful, run the display lldp local command. You can view the
following information:
Context
To clear the statistics on LLDP, run the reset lldp statistic command in the user view.
Procedure
l Run the reset lldp statistic [ interface interface-type interface-number ] command to clear
the LLDP statistics on an interface. The statistics include the numbers of received packets,
sent packets, and error frames.
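Based on the syntax above, clearing the statistics on a single interface might look as follows
(the interface number is illustrative):

```
<HUAWEI> reset lldp statistic interface gigabitethernet 1/0/0
```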
----End
Context
To check the running status of LLDP during routine maintenance, run the following display
commands in any view.
Procedure
l Run the display lldp local [ interface interface-type interface-number ] command to check
the LLDP status globally or on a specified interface.
l Run the display lldp statistics [ interface interface-type interface-number ] command to
check the statistics on LLDP packets sent and received on an interface.
l Run the display lldp neighbor [ interface interface-type interface-number ] command to
check the neighbor information on an interface.
----End
Context
NOTE
This document takes interface numbers and link types of the NE40E-X8 as an example. In working
situations, the actual interface numbers and link types may be different from those used in this document.
Networking Requirements
As shown in Figure 10-2, Router A and Router B are connected through Ethernet interfaces.
Both Router A and Router B have reachable routes to the NMS. It is required that Router A and
Router B obtain the status of each other through LLDP and that the NMS discover the topology
by finding Router A and Router B based on the LLDP management address.
When the LLDP management address changes, LLDP is disabled globally, or neighbor
information changes, Router A is required to send LLDP traps to the NMS.
Figure 10-2 Networking diagram of configuring LLDP (Router A at 10.10.10.1 and Router B at
10.10.10.2 exchange LLDPDUs through interfaces enabled with LLDP; both routers exchange
SNMP packets with the NMS. NMS: Network Management System)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l The management address of Router A is 10.10.10.1, and the management address of Router
B is 10.10.10.2
l The interval for sending LLDP packets is 60 seconds, the delay in sending LLDP packets
is 9 seconds, and the delay in sending traps of changes in neighbor information to the NMS
is 10 seconds
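The data above maps to the following commands on Router A (all three commands appear in the configuration file at the end of this example):
<HUAWEI> system-view
[HUAWEI] lldp message-transmission interval 60
[HUAWEI] lldp message-transmission delay 9
[HUAWEI] lldp trap-interval 10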
Procedure
Step 1 Enable the LLDP alarm function on Router A and Router B.
# Enable the LLDP alarm function on Router A.
<HUAWEI> system-view
[HUAWEI] sysname RouterA
[RouterA] snmp-agent trap enable lldp
SysCapSupported: bridge
SysCapEnabled: bridge
LLDPUpTime: 2008/6/20 15:41:49
System configuration:
LLDP enable status: enable (default is disable)
LldpMsgTxInterval: 60s (default is 30s)
LldpMsgTxHoldMultiplier: 4 (default is 4)
LldpReinitDelay: 2s (default is 2s)
LldpTxDelay: 9s (default is 2s)
LldpNotificationInterval: 10s (default is 5s)
LldpNotificationEnable: enable (default is disable)
Management address: IP: 10.10.10.1
Remote Table Statistics:
RemTablesLastChangeTime: 0 days, 0 hours, 0 minutes, 0 seconds
RemTableInserts: 0
RemTableDeletes: 0
RemTableDrops: 0
RemTablesAgeouts: 0
Neighbors Total: 0
Port information:
Interface GigabitEthernet1/0/0:
PortId Subtype: interfaceName
PortId: GigabitEthernet1/0/0
PortDesc: GigabitEthernet1/0/0 Interface
LLDP Enable Status: enable (default is disable)
LLDP Running Status: running
Neighbors Total: 0
----End
Configuration Files
l Configuration file of Router A.
#
sysname RouterA
#
lldp enable
#
snmp-agent trap enable lldp
#
lldp message-transmission interval 60
#
lldp message-transmission delay 9
#
lldp trap-interval 10
#
lldp management-address 10.10.10.1
#
return
Networking Requirements
As shown in Figure 10-3, Router A and Router B are connected through an Eth-Trunk. It is
required that three Ethernet interfaces on each of Router A and Router B be added to the
Eth-Trunk. Of these three interfaces on each router, two should send and receive LLDP packets
to obtain the status of each other, and the third should be disabled from sending and receiving
LLDP packets.
Figure 10-3 Networking diagram of configuring LLDP on an Eth-Trunk (Router A and Router B
are connected through Eth-Trunk1; GE1/0/1 on Router A and GE2/0/1 on Router B are member
interfaces)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l The management address of Router A is 10.10.10.1, and the management address of Router
B is 10.10.10.2
l The number of the Eth-Trunk that connects Router A and Router B, and the number of the
interfaces that are added to the Eth-Trunk
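The requirement that one member interface not send or receive LLDP packets maps to the undo lldp enable command in the interface view, as shown in the configuration files at the end of this example. A sketch for Router A (the interface number is taken from the configuration file):
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/3
[HUAWEI-GigabitEthernet1/0/3] undo lldp enable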
Procedure
Step 1 Enable LLDP globally on Router A and Router B.
# Enable LLDP globally on Router A.
<HUAWEI> system-view
[HUAWEI] sysname RouterA
[RouterA] lldp enable
ChassisId: 00e0-fcc8-1b31
SysName: RouterA
SysDesc: Huawei Versatile Routing Platform Software
VRP (R) software, Version 5.90 (NE80E&40E V600R003C00)
Copyright (C) 2000-2009 Huawei Technologies Co., Ltd.
HUAWEI NE40E
SysCapSupported: bridge
SysCapEnabled: bridge
LLDPUpTime: 2007/6/21 14:46:58
System configuration:
LLDP enable status: enable (default is disable)
LldpMsgTxInterval: 30s (default is 30s)
LldpMsgTxHoldMultiplier: 4 (default is 4)
LldpReinitDelay: 2s (default is 2s)
LldpTxDelay: 2s (default is 2s)
LldpNotificationInterval: 5s (default is 5s)
LldpNotificationEnable: enable (default is disable)
Management address: IP: 10.10.10.1
Remote Table Statistics:
RemTablesLastChangeTime: 0 days, 0 hours, 0 minutes, 0 seconds
RemTableInserts: 0
RemTableDeletes: 0
RemTableDrops: 0
RemTablesAgeouts: 0
Neighbors Total: 0
Port information:
Interface GigabitEthernet1/0/1:
PortId Subtype: interfaceName
PortId: GigabitEthernet1/0/1
PortDesc: GigabitEthernet1/0/1 Interface
LLDP Enable Status: enable (default is disable)
LLDP Running Status: running
Neighbors Total: 0
Interface GigabitEthernet1/0/2:
PortId Subtype: interfaceName
PortId: GigabitEthernet1/0/2
PortDesc: GigabitEthernet1/0/2 Interface
LLDP Enable Status: enable (default is disable)
LLDP Running Status: running
Neighbors Total: 0
Interface GigabitEthernet1/0/3:
PortId Subtype: interfaceName
PortId: GigabitEthernet1/0/3
PortDesc: GigabitEthernet1/0/3 Interface
LLDP Enable Status: disable (default is disable)
LLDP Running Status: stop
Neighbors Total: 0
----End
Configuration Files
l Configuration file of Router A.
#
sysname RouterA
#
lldp enable
#
interface Eth-Trunk1
#
interface GigabitEthernet1/0/1
eth-trunk 1
#
interface GigabitEthernet1/0/2
eth-trunk 1
#
interface GigabitEthernet1/0/3
eth-trunk 1
undo lldp enable
#
lldp management-address 10.10.10.1
#
return
Networking Requirements
As shown in Figure 10-4, there are reachable links between Router A, Router B, and Router C.
Both Router A and Router C have reachable links to the NMS. It is required that Router A,
Router B, and Router C exchange LLDP packets through the reachable links to obtain the status
of each other. In addition, the NMS can discover the topology by finding Router A and Router
C based on the LLDP management address.
Figure 10-4 Diagram of configuring LLDP on the network where an interface has multiple
neighbors
(Routers with management addresses 10.10.10.1, 10.10.10.2, and 10.10.10.3 exchange
LLDPDUs through interfaces enabled with LLDP; the NMS exchanges SNMP packets with the
routers. NMS: Network Management System)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable LLDP globally on Router A, Router B, and Router C.
# Enable LLDP globally on Router A.
<HUAWEI> system-view
[HUAWEI] sysname RouterA
[RouterA] lldp enable
PortId: GigabitEthernet1/0/0
PortDesc: GigabitEthernet1/0/0 Interface
LLDP Enable Status: enable (default is disable)
LLDP Running Status: running
Neighbors Total: 0
----End
Configuration Files
l Configuration file of Router A.
#
sysname RouterA
#
lldp enable
#
lldp management-address 10.10.10.1
#
return
11 Fault Management
Applicable Environment
By using fault management, you can configure alarm management, including changing alarm
severities, enabling delayed alarm reporting, and suppressing alarms.
Pre-configuration Tasks
Before configuring alarm management, complete the following task:
l Installing system software to the router and powering it on
Data Preparation
Before configuring alarm management, you need the following data.
No. Data
1 Alarm name
3 Period after which a generated alarm is reported and period after which a generated
recovery alarm is reported
4 IP address of the NMS host to which non-root-cause alarms are not reported, and
security name, VPN instance name, and interface name on the NMS
Procedure
Step 1 Run:
system-view
----End
Procedure
Step 1 Run:
system-view
By default, this function is enabled to prevent intermittent alarms and repeated alarms from being
reported during the delay period.
Step 4 Run:
suppression alarm-name alarm-name { cause-period cause-seconds | clear-period
clear-seconds }
After such a period is set for an alarm, there are the following situations:
l If no recovery alarm is generated during the period, the alarm is not reported to the NMS
until the period expires.
l If a recovery alarm is generated during this period, the alarm and its recovery alarm are both
deleted from the alarm queue and will not be reported to the NMS.
You can use the parameter cause-period cause-seconds to set the period after which a generated
alarm is reported.
You can use the parameter clear-period clear-seconds to set the period after which a generated
recovery alarm is reported.
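For example, to report a generated linkDown alarm 5 seconds after it is generated, and to report a generated linkDown recovery alarm 10 seconds after it is generated (the cause-period command matches the configuration example later in this chapter; the clear-period value of 10 seconds is an assumption for illustration):
<HUAWEI> system-view
[HUAWEI] alarm
[HUAWEI-alarm] suppression alarm-name linkDown cause-period 5
[HUAWEI-alarm] suppression alarm-name linkDown clear-period 10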
----End
Procedure
Step 1 Run:
system-view
Step 2 Run:
alarm
Step 3 Run:
correlation-analyze enable
----End
Prerequisites
The configurations of alarm management are complete.
Context
l Run the display alarm active command to check active alarms.
l Run the display alarm history command to check historical alarms.
l Run the display alarm information [ name alarm-name ] command to check alarm
information.
l Run the display this command to check information about delayed alarm reporting.
Example
Run the display alarm active command to view active alarms. For example:
<HUAWEI> display alarm active
A/B/C/D/E/F/G/H/I/J
A=Sequence, B=RootKindFlag(Independent|RootCause|
nonRootCause)
C=Generating time, D=Clearing time
E=ID, F=Name, G=Level, H=State
I=Description information for locating(Para info, Reason info)
J=RootCause alarm sequence(Only for nonRootCause alarm)
1/RootCause/2010-7-8 17:38:46/-/0x502001/linkDown/Critical/Start/OID
1.3.6.1.6.3.1.1.5.3 Interface 5 turned into DOWN state.
Run the display alarm history command to view historical alarms. For example:
<HUAWEI> display alarm history
A/B/C/D/E/F/G/H/I/J
A=Sequence, B=RootKindFlag(Independent|RootCause|nonRootCause)
C=Generating time, D=Clearing time
Run the display alarm information [ name alarm-name ] command to view information about
a specified alarm. For example:
<HUAWEI> display alarm information name linkup
**********************************
AlarmName: linkUp
AlarmType: Resume Alarm
AlarmLevel: Critical
Suppress Period: 10s
CauseAlarmName: linkDown
Match VB Name: ifIndex ifAdminStatus
**********************************
Run the display this command in the alarm view to check the period after which a generated
alarm is reported. For example:
<HUAWEI> system-view
[HUAWEI] alarm
[HUAWEI-alarm] display this
#
alarm
suppression alarm-name hwElmiEvcStatusNotActiveFaultOccur cause-period 5
#
return
Applicable Environment
You can configure event management to configure delayed event reporting.
Pre-configuration Tasks
Before configuring event management, complete the following task:
Data Preparation
Before configuring event management, you need the following data.
No. Data
1 Event name
Procedure
Step 1 Run:
system-view
----End
Prerequisites
The configurations of event management are complete.
Context
l Run the display event command to check the contents of events.
l Run the display event information [ name event-name ] command to check information
about events.
l Run the display this command to check information about delayed event reporting.
Example
Run the display event command to view events. For example:
<HUAWEI> display event
A/B/C/D/E/F/G/H/I/J
A=Sequence, B=RootKindFlag(Independent|RootCause|
nonRootCause)
C=Generating time, D=Clearing time
E=ID, F=Name, G=Level, H=State
I=Description information for locating(Para info, Reason info)
J=RootCause alarm sequence(Only for nonRootCause alarm)
1/Independent/2010-7-8 17:21:44/-/0x4055a000/hwCfgManEventlog/Warning/Start/OID
1.3.6.1.4.1.2011.6.10.2.1 Configure changed. (EventIndex=1, CommandSource=3,
ConfigSource=4, ConfigDestination=2)
Run the display event information [ name event-name ] command to view information about
a specified event. For example:
<HUAWEI> display event information name hwcfgmaneventlog
**********************************
EventName: hwCfgManEventlog
EventType: Critical Event
EventLevel: Warning
Suppress Period: 3s
Match VB Name: hwCfgLogSrcCmd hwCfgLogSrcData hwCfgLogDesData
**********************************
Run the display this command in the event view to check the period after which a generated
event is reported. For example:
<HUAWEI> system-view
[HUAWEI] event
[HUAWEI-event] display this
#
event
suppression event-name hwelmivlannotcfg period 5
#
return
Applicable Environment
A faulty board or flexible card causes the peer device to frequently lose packets, affecting the
running services. To resolve such a problem, you can configure fault isolation to immediately
power off the faulty board or flexible card.
Pre-configuration Tasks
Before configuring fault isolation, complete the following task:
l Powering on the router and ensuring that the router detects no error during self-check
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Run the entity-fault { board | card } isolate enable command to configure fault isolation for
an entity.
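# For example, configure fault isolation for boards so that a faulty board is powered off immediately. (The board keyword is one of the options in the command syntax above.)
<HUAWEI> system-view
[HUAWEI] entity-fault board isolate enable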
----End
The displayed information in bold indicates that fault isolation has been enabled on the board
and flexible card.
11.5 Maintenance
This section describes how to maintain fault management.
Context
CAUTION
After alarm messages are cleared, there is no way for the NMS to obtain any information about
these cleared messages. Therefore, before deleting alarm messages, be sure that the NMS no
longer needs these alarm messages.
In routine maintenance, you can run the following commands in the alarm view to clear alarm
messages.
Procedure
Step 1 Run:
system-view
Step 2 Run:
alarm
----End
Context
CAUTION
After event messages are cleared, there is no way for the NMS to obtain any information about
these cleared messages. Therefore, before deleting event messages, be sure that the NMS no
longer needs these event messages.
In routine maintenance, you can run the following commands in the event view to clear event
messages.
Procedure
Step 1 Run:
system-view
----End
Context
Operations that trigger the MTP module to generate a log are as follows:
l When the neighbor relationship established between service modules (for example, LDP
modules) is interrupted because the IGP route is unreachable, a ping operation is started
on the MTP module to detect the reachability of the IGP route. LDP needs to deliver the
ping operation to the MTP module before it times out.
l Packet statistics in the IPC and VP channels: When packets are discarded by the IPC and
VP channels, which causes the LDP neighbor relationship to be interrupted and thus the
protocol to time out, the number of discarded packets is counted.
l Packet statistics on the CPCAR: When packets are discarded on the NP at the lower layer,
causing the LDP neighbor relationship to be interrupted and thus the protocol to time out,
the number of packets discarded and forwarded on the CPCAR is counted.
Procedure
l If the maintainable information has been collected and recorded in logs on the MTP module,
run the display mtp statistics command in the user view.
----End
Networking Requirements
A user logs in to the router to perform alarm management.
Configuration Notes
None.
Configuration Roadmap
The configuration roadmap is as follows:
1. Set alarm parameters.
2. Set a period after which a generated alarm is reported.
3. Configure NMS-based correlated alarm suppression.
4. Configure interface-based alarm filtering.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the NMS host, security name, and VPN instance name
l Type and number of the interface on which alarms are filtered
Procedure
Step 1 Configure an SNMPv3 user and an alarm host.
Step 2 Set the severity level for the linkDown alarm to major.
<HUAWEI> system-view
[HUAWEI] alarm
[HUAWEI-alarm] alarm-name linkdown severity major
Step 3 Configure the linkDown alarm to be reported to the NMS 5 seconds after it is generated.
[HUAWEI-alarm] delay-suppression enable
[HUAWEI-alarm] suppression alarm-name linkdown cause-period 5
Step 4 Configure correlated alarm suppression for the NMS host with the security name being aa and
the IP address being 192.168.3.1.
[HUAWEI-alarm] correlation-analyze enable
[HUAWEI-alarm] quit
[HUAWEI] alarm correlation-suppress enable target-host 192.168.3.1 securityname aa
After the preceding configurations, run the following commands to view alarm information.
<HUAWEI> display alarm information name linkdown
**********************************
AlarmName: linkDown
AlarmType: Alarm
AlarmLevel: Major
Suppress Period: 5s
CauseAlarmName: NA
Match VB Name: ifIndex ifAdminStatus
**********************************
The preceding information shows that linkDown is the root-cause alarm and
hwOspfv3IfStateChange is a non-root-cause alarm.
----End
Configuration Files
#
sysname HUAWEI
#
snmp-agent
snmp-agent local-engineid 800007DB0300E000030003CA
snmp-agent sys-info version all
snmp-agent group v3 huawei
snmp-agent target-host trap address udp-domain 10.164.9.211 params securityname user v3
snmp-agent usm-user v3 user huawei
snmp-agent trap enable feature-name CONFIGURATION trap-name linkDown
#
alarm
alarm-name linkDown severity major
suppression alarm-name linkDown cause-period 5
correlation-analyze enable
mask interface GigabitEthernet1/0/1
#
return
A Glossary
This chapter lists the frequently used terms in this document and corresponding English full
names.
Glossary Description
3G terminal Terminals used in the third generation network, such as WCDMA
handsets.
B
business code Business content defined by carriers. The code consists of case-sensitive
characters or numbers, with a maximum length of 10
bits.
C
check box A box that can be selected or cleared; multiple check boxes can be selected at the same time.
clock offset Time offset between the local clock and the reference clock.
E
enterprise code Address and identification of an enterprise in the network. Address
translation and accounting are based on this code.
K
key word Characters that describe the features of a product. Key words are
separated by "|". The product name and the author can be key words.
L
long number A destination number of the messages sent by handset users.
Glossary Description
R
roundtrip delay The time taken for a message to travel from the local clock to the reference
clock and back.
S
service code Services provided to subscribers of on-demand services in SM mode, or
codes provided by carriers.
This chapter lists the frequently used acronyms in this document and corresponding English full
names.
A
AAA Authentication, Authorization and Accounting
ACL Access Control List
ADSL Asymmetric Digital Subscriber Line
AH Authentication Header
APPN Advanced Peer-to-Peer Networking
ARP Address Resolution Protocol
AS Autonomous System; Access Server
ASCII American Standard Code for Information Interchange
ASPF Application Specific Packet Filter
ATM Asynchronous Transfer Mode
AUX Auxiliary (port)
B
BGP Border Gateway Protocol
BRI Basic Rate Interface
C
CBQ Class Based Queue
CD Carrier Detect
CHAP Challenge Handshake Authentication Protocol
CON Console (port)
cPOS channelized-POS
CQ Custom Queueing
CRC Cyclic Redundancy Check
D
DCC Data Communication Channel
DCE Data Circuit-terminating Equipment
DD Database Description
DES Data Encryption Standard
DHCP Dynamic Host Configuration Protocol
DNS Domain Name System
DOD Downstream-on-Demand
DOS Denial of Service
DTE Data Terminal Equipment
DU Downstream Unsolicited
E
EIA Electronics Industry Association
ESP Encapsulating Security Payload
F
FEC Forward Error Correction
FIFO First In First Out
FLASH FLASH memory
FR Frame Relay
G
GE Gigabit Ethernet
GNS Get Nearest Server
GRE Generic Routing Encapsulation
H
HDLC High-level Data Link Control
HTTP Hypertext Transfer Protocol
I
IBGP Internal BGP
ICMP Internet Control Message Protocol
ID IDentification
IETF Internet Engineering Task Force
IF Information Frame
IGP Interior Gateway Protocol
IKE Internet Key Exchange
IP Internet Protocol
IPHC IP Header compression
IPoA Internet Protocols over ATM
IPoEoA IP over Ethernet over AAL5
IPSec Internet Protocol SECurity extensions
ISDN Integrated Services Digital Network
IS-IS Intermediate System-Intermediate System
ISP Internet Service Provider
ITU-T International Telecommunication Union - Telecommunication
Standardization Sector
M
MAC Medium Access Control
MD5 Message Digest 5
MFR Multiple Frame Relay
MIB Management Information Base
MODEM Modulator DEModulator
MP Multilink PPP
MPLS Multi-Protocol Label Switching
MSDP Multicast Source Discovery Protocol
MTU Maximum Transmission Unit
N
NAT Network Address Translation
NDA NetStream Data Analyzer
NetBIOS Network Basic Input/Output System
NLRI Network Layer Reachable Information
NMS Network Management System
NQA Network Quality Analysis
O
OSI Open System Interconnection
OSPF Open Shortest Path First
P
PAD Packet Assembler/Disassembler
PAP Password Authentication Protocol
PC Personal Computer
PDU Protocol Data Unit
PHY Physical Sublayer & Physical Layer
POS Packet Over SDH/SONET
PPP Point-to-Point Protocol
PPPoA PPP over ATM
PPPoE PPP over Ethernet
PQ Priority Queue
PRI Primary Rate Interface
PSTN Public Switched Telephone Network
PU Payload Unit
PVC Permanent Virtual Circuit
Q
QoS Quality of Service
R
RADIUS Remote Authentication Dial in User Service
REJ REJect(ion)
S
SA Security Association
SAP Service Advertising Protocol
SDLC Synchronous Data Link Control
SLIP Serial Line Internet Protocol
SLA Service Level Agreement
SNA Systems Network Architecture
SNAP Sub Network Access Point
SNMP Simple Network Management Protocol
SSH Secure Shell
SSP Service Switching Point
STM-1 Synchronous Transport Module level-1
T
TCP Transmission Control Protocol
TE Traffic Engineering
TFTP Trivial File Transfer Protocol
TOS Type of Service
TS Traffic Shaping
TTL Time To Live
U
UDP User Datagram Protocol
UP User Plane
V
VACM View-based Access Control Model
VIU Versatile Interface Unit
VLAN Virtual Local Area Network
VOS Virtual Operating System
VPDN Virtual Private Dial Network
VPN Virtual Private Network
VRP Versatile Routing Platform
VRRP Virtual Router Redundancy Protocol
W
WAN Wide Area Network
WFQ Weighted Fair Queuing
WRED Weighted Random Early Detection
WWW World Wide Web
X
XOT X.25 Over TCP