
H3C SR8800 10G Core Routers

Network Management and Monitoring


Configuration Guide

Hangzhou H3C Technologies Co., Ltd.


http://www.h3c.com

Software version: SR8800-CMW520-R3347


Document version: 6W103-20120224
Copyright © 2011-2012, Hangzhou H3C Technologies Co., Ltd. and its licensors

All rights reserved

No part of this manual may be reproduced or transmitted in any form or by any means without prior
written consent of Hangzhou H3C Technologies Co., Ltd.
Trademarks

H3C, Aolynk, H3Care, TOP G, IRF, NetPilot, Neocean, NeoVTL, SecPro, SecPoint, SecEngine, SecPath,
Comware, Secware, Storware, NQA, VVG, V2G, VnG, PSPT, XGbus, N-Bus, TiGem, InnoVision and
HUASAN are trademarks of Hangzhou H3C Technologies Co., Ltd.
All other trademarks that may be mentioned in this manual are the property of their respective owners.
Notice

The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Preface

The H3C SR8800 documentation set includes 13 configuration guides, which describe the software
features for the H3C SR8800 10G Core Routers and guide you through the software configuration
procedures. These configuration guides also provide configuration examples to help you apply software
features to different network scenarios.
The Network Management and Monitoring Configuration Guide describes how to manage the network,
view system information, collect traffic statistics, sample packets, analyze network quality, and use the
ping, tracert, and debugging commands to examine and debug network connectivity.
This preface includes:
• Audience
• Conventions
• About the H3C SR8800 documentation set
• Obtaining documentation
• Technical support
• Documentation feedback

Audience
This documentation is intended for:
• Network planners
• Field technical support and servicing engineers
• Network administrators working with the SR8800 series

Conventions
This section describes the conventions used in this documentation set.

Command conventions

Convention          Description
Boldface            Bold text represents commands and keywords that you enter literally as shown.
Italic              Italic text represents arguments that you replace with actual values.
[ ]                 Square brackets enclose syntax choices (keywords or arguments) that are optional.
{ x | y | ... }     Braces enclose a set of required syntax choices separated by vertical bars, from which you select one.
[ x | y | ... ]     Square brackets enclose a set of optional syntax choices separated by vertical bars, from which you select one or none.
{ x | y | ... } *   Asterisk-marked braces enclose a set of required syntax choices separated by vertical bars, from which you select at least one.
[ x | y | ... ] *   Asterisk-marked square brackets enclose optional syntax choices separated by vertical bars, from which you select one choice, multiple choices, or none.
&<1-n>              The argument or keyword-and-argument combination before the ampersand (&) sign can be entered 1 to n times.
#                   A line that starts with a pound (#) sign is a comment.

GUI conventions

Convention          Description
Boldface            Window names, button names, field names, and menu items are in Boldface. For example, the New User window appears; click OK.
>                   Multi-level menus are separated by angle brackets. For example, File > Create > Folder.

Symbols

Convention          Description
WARNING             An alert that calls attention to important information that, if not understood or followed, can result in personal injury.
CAUTION             An alert that calls attention to important information that, if not understood or followed, can result in data loss, data corruption, or damage to hardware or software.
IMPORTANT           An alert that calls attention to essential information.
NOTE                An alert that contains additional or supplementary information.
TIP                 An alert that provides helpful information.

Network topology icons

Represents a generic network device, such as a router, switch, or firewall.

Represents a routing-capable device, such as a router or Layer 3 switch.

Represents a generic switch, such as a Layer 2 or Layer 3 switch, or a router that supports
Layer 2 forwarding and other Layer 2 features.

Port numbering in examples


The port numbers in this document are for illustration only and might be unavailable on your router.

About the H3C SR8800 documentation set


The H3C SR8800 documentation set includes:
Product description and specifications:
• Marketing brochures: Describe product specifications and benefits.
• Technology white papers: Provide an in-depth description of software features and technologies.
• Card datasheets: Describe card specifications, features, and standards.

Hardware specifications and installation:
• Compliance and safety manual: Provides regulatory information and the safety instructions that must be followed during installation.
• Installation guide: Provides a complete guide to hardware installation and hardware specifications.
• H3C N68 Cabinet Installation and Remodel Introduction: Guides you through installing and remodeling H3C N68 cabinets.
• H3C Pluggable SFP [SFP+][XFP] Transceiver Modules Installation Guide: Guides you through installing SFP/SFP+/XFP transceiver modules.
• H3C High-End Network Products Hot-Swappable Module Manual: Describes the hot-swappable modules available for the H3C high-end network products, their external views, and specifications.

Software configuration:
• Configuration guides: Describe software features and configuration procedures.
• Command references: Provide a quick reference to all available commands.

Operations and maintenance:
• Release notes: Provide information about the product release, including the version history, hardware and software compatibility matrix, version upgrade information, technical support information, and software upgrading.

Obtaining documentation
You can access the most up-to-date H3C product documentation on the World Wide Web
at http://www.h3c.com.
Click the links on the top navigation bar to obtain different categories of product documentation:
[Technical Support & Documents > Technical Documents] – Provides hardware installation, software
upgrading, and software feature configuration and maintenance documentation.
[Products & Solutions] – Provides information about products and technologies, as well as solutions.
[Technical Support & Documents > Software Download] – Provides the documentation released with the
software version.

Technical support
service@h3c.com
http://www.h3c.com

Documentation feedback
You can e-mail your comments about product documentation to info@h3c.com.
We appreciate your comments.
Contents

Using ping, tracert, and system debugging ··············································································································· 1


Ping ····················································································································································································· 1
Introduction ······························································································································································· 1
Configuring ping ······················································································································································ 1
Ping configuration example ···································································································································· 2
Tracert ················································································································································································ 3
Introduction ······························································································································································· 3
Configuring tracert ··················································································································································· 4
System debugging ···························································································································································· 5
Introduction to system debugging ···························································································································5
Configuring system debugging ······························································································································· 6
Ping and tracert configuration example ························································································································· 6

Configuring NQA ························································································································································ 8


NQA overview ·································································································································································· 8
NQA features ··························································································································································· 8
NQA concepts ······················································································································································· 10
NQA probe operation procedure ······················································································································· 11
NQA configuration task list ·········································································································································· 11
Configuring the NQA server ········································································································································ 12
Enabling the NQA client ··············································································································································· 12
Creating an NQA test group ········································································································································ 13
Configuring an NQA test group ·································································································································· 13
Configuring ICMP echo tests ································································································································ 13
Configuring DHCP tests ········································································································································ 15
Configuring DNS tests ·········································································································································· 15
Configuring FTP tests ············································································································································· 16
Configuring HTTP tests ·········································································································································· 17
Configuring UDP jitter tests··································································································································· 18
Configuring SNMP tests ······································································································································· 20
Configuring TCP tests ············································································································································ 21
Configuring UDP echo tests ·································································································································· 22
Configuring voice tests ········································································································································· 23
Configuring DLSw tests ········································································································································· 25
Configuring the collaboration function ························································································································ 26
Configuring threshold monitoring································································································································· 26
Configuring the NQA statistics collection function ····································································································· 28
Configuring the history records saving function ········································································································· 29
Configuring optional parameters for an NQA test group ························································································· 29
Scheduling an NQA test group ···································································································································· 30
Displaying and maintaining NQA ······························································································································· 31
NQA configuration examples ······································································································································ 32
ICMP echo test configuration example ··············································································································· 32
DHCP test configuration example ························································································································ 34
DNS test configuration example ·························································································································· 35
FTP test configuration example ···························································································································· 36
HTTP test configuration example·························································································································· 37
UDP jitter test configuration example ·················································································································· 39
SNMP test configuration example ······················································································································· 41

TCP test configuration example ··························································································································· 43
UDP echo test configuration example ················································································································· 44
Voice test configuration example ························································································································ 45
DLSw test configuration example ························································································································· 48
NQA collaboration configuration example········································································································ 49

Configuring NTP ························································································································································ 52


NTP overview ································································································································································· 52
NTP applications ··················································································································································· 52
How NTP works ····················································································································································· 52
NTP message format ············································································································································· 53
Operation modes of NTP ····································································································································· 55
NTP-supported MPLS L3VPN ································································································································ 57
NTP configuration task list ············································································································································· 57
Configuring the operation modes of NTP ··················································································································· 57
Configuring NTP client/server mode ·················································································································· 58
Configuring the NTP symmetric mode ················································································································ 59
Configuring NTP broadcast mode······················································································································· 59
Configuring NTP multicast mode ························································································································· 60
Configuring the local clock as a reference source ····································································································· 61
Configuring optional parameters of NTP ···················································································································· 61
Specifying the source interface for NTP messages ···························································································· 61
Disabling an interface from receiving NTP messages ······················································································· 62
Configuring the maximum number of dynamic sessions allowed ···································································· 62
Configuring access-control rights ································································································································· 62
Configuration prerequisites ·································································································································· 63
Configuration procedure ······································································································································ 63
Configuring NTP authentication ··································································································································· 63
Configuring NTP authentication in client/server mode ····················································································· 64
Configuring NTP authentication in symmetric peers mode ··············································································· 65
Configuring NTP authentication in broadcast mode ························································································· 66
Configuring NTP authentication in multicast mode ··························································································· 67
Displaying and maintaining NTP ································································································································· 68
NTP configuration examples ········································································································································· 69
Configuring NTP client/server mode ·················································································································· 69
Configuring the NTP symmetric mode ················································································································ 70
Configuring NTP broadcast mode······················································································································· 72
Configuring NTP multicast mode ························································································································· 73
Configuring NTP client/server mode with authentication ················································································· 76
Configuring NTP broadcast mode with authentication ····················································································· 77
Configuring MPLS VPN time synchronization in client/server mode ······························································ 79
Configuring MPLS VPN time synchronization in symmetric peers mode ························································ 81

Configuring Clock monitoring ··································································································································· 83


Clock monitoring module overview······························································································································ 83
Reference source level ·········································································································································· 83
Working mode of the clock monitoring module ································································································ 84
Working mode of the port clock·························································································································· 84
Clock monitoring module configuration task list ········································································································· 85
Configuring working mode of the clock monitoring module of the SRPU ································································ 85
Configuring reference source priority ·························································································································· 86
Configuring SSM for reference sources ······················································································································· 86
Setting the ways of deriving SSM level··············································································································· 86
Setting the bit position for transmitting bits clock source information ······························································ 86
Configuring SSM levels for the reference sources ····························································································· 87

Activating/deactivating SSM ······························································································································· 87
Setting the input port of the line clock (LPU port) ········································································································ 87
Displaying and maintaining the clock monitoring module ························································································ 88
Clock monitoring module configuration example······································································································· 89

Configuring IPC ·························································································································································· 91


IPC overview ··································································································································································· 91
Node······································································································································································· 91
Link ·········································································································································································· 91
Channel ·································································································································································· 91
Packet sending modes ·········································································································································· 92
Enabling IPC performance statistics ····························································································································· 92
Displaying and maintaining IPC ··································································································································· 93

Configuring SNMP····················································································································································· 94
SNMP overview······························································································································································ 94
SNMP framework ·················································································································································· 94
SNMP operations ·················································································································································· 94
SNMP protocol versions ······································································································································· 94
MIB and view-based MIB access control ············································································································ 95
Configuring SNMP basic parameters ·························································································································· 95
Configuring SNMPv3 basic parameters ············································································································· 95
Configuring SNMPv1 or SNMPv2c basic parameters ······················································································ 96
Configuring SNMP logging ·········································································································································· 98
Introduction to SNMP logging ····························································································································· 98
Enabling SNMP logging ······································································································································· 98
Configuring SNMP traps ··············································································································································· 98
Enabling SNMP traps ··········································································································································· 99
Configuring the SNMP agent to send traps to a host ······················································································· 99
Displaying and maintaining SNMP ··························································································································· 100
SNMP configuration examples ··································································································································· 101
SNMPv1/SNMPv2c configuration example ···································································································· 101
SNMPv3 configuration example························································································································ 102
SNMP logging configuration example ············································································································· 103

Configuring MIB style ············································································································································· 106


Overview······································································································································································· 106
Setting the MIB style····················································································································································· 106
Displaying and maintaining MIB style ······················································································································· 106

Configuring RMON ················································································································································ 107


RMON overview ·························································································································································· 107
Introduction ·························································································································································· 107
Working mechanism ··········································································································································· 107
RMON groups ····················································································································································· 108
Configuring the RMON statistics function ················································································································· 109
Configuring the RMON Ethernet statistics function·························································································· 109
Configuring the RMON history statistics function ···························································································· 110
Configuring the RMON alarm function ····················································································································· 110
Configuration prerequisites ································································································································ 110
Configuration procedure ···································································································································· 111
Displaying and maintaining RMON ·························································································································· 111
RMON configuration examples·································································································································· 112
Ethernet statistics group configuration example ······························································································· 112
History group configuration example················································································································ 113
Alarm group configuration example ················································································································· 115

Configuring a sampler ············································································································································ 117
Sampler overview ························································································································································ 117
Creating a sampler ······················································································································································ 117
Displaying and maintaining sampler ························································································································· 118
Sampler configuration examples ································································································································ 118
Using the sampler for NetStream ······················································································································ 118

Configuring port mirroring ····································································································································· 120


Introduction ··································································································································································· 120
Terminologies of port mirroring ························································································································· 120
Port mirroring classification and implementation ····························································································· 121
Configuring local port mirroring ································································································································ 122
Configuring remote port mirroring ····························································································································· 123
Configuration prerequisites ································································································································ 123
Configuring a remote source mirroring group ································································································· 124
Configuring a remote destination mirroring group·························································································· 125
Displaying and maintaining port mirroring ··············································································································· 125
Port mirroring configuration examples······················································································································· 126
Local port mirroring configuration example (in source port mode) ······························································· 126
Layer 2 remote port mirroring configuration example (reflector port configurable) ···································· 127

Configuring traffic mirroring ·································································································································· 130


Traffic mirroring overview ··········································································································································· 130
Configuring traffic mirroring ······································································································································· 130
Displaying and maintaining traffic mirroring ············································································································ 131
Traffic mirroring configuration example ···················································································································· 131

Configuring NetStream··········································································································································· 133


NetStream overview ···················································································································································· 133
NetStream basic concepts ·········································································································································· 133
Flow ······································································································································································ 133
NetStream operation··········································································································································· 134
NetStream key technologies ······································································································································· 134
Flow aging ··························································································································································· 134
NetStream data export ······································································································································· 135
NetStream export formats ·································································································································· 137
NetStream sampling ···················································································································································· 138
NetStream configuration task list ································································································································ 138
Configuring NetStream ··············································································································································· 139
Configuring QoS-based NetStream ·················································································································· 139
Configuring NetStream on an SPC card ·········································································································· 140
Configuring NetStream sampling ······························································································································· 141
Configuring NetStream sampling ······················································································································ 141
Configuring attributes of NetStream export data ····································································································· 142
Configuring NetStream export format··············································································································· 142
Configuring refresh rate for NetStream version 9 templates ·········································································· 143
Configuring MPLS-aware NetStream ················································································································ 144
Configuring NetStream flow aging ···························································································································· 144
Configuring NetStream data export ·························································································································· 145
Configuring NetStream common data export ·································································································· 145
Configuring NetStream aggregation data export ··························································································· 146
Displaying and maintaining NetStream ···················································································································· 147
NetStream configuration examples ···························································································································· 147
QoS-based NetStream configuration example ································································································ 147
NetStream aggregation data export configuration example ········································································· 148
NetStream aggregation data export on an SPC card configuration example ············································· 150

Configuring IPv6 NetStream ·································································································································· 153
IPv6 NetStream overview ············································································································································ 153
IPv6 NetStream basic concepts ·································································································································· 153
IPv6 flow ······························································································································································· 153
IPv6 NetStream operation ·································································································································· 154
IPv6 NetStream key technologies ······························································································································· 154
Flow aging ··························································································································································· 154
IPv6 NetStream data export ······························································································································· 155
IPv6 NetStream export format ···························································································································· 156
IPv6 NetStream configuration task list ······················································································································· 156
Configuring IPv6 NetStream ······································································································································· 157
Configuring QoS-based IPv6 NetStream ·········································································································· 157
Configuring IPv6 NetStream on an SPC card ·································································································· 157
Configuring attributes of IPv6 NetStream data export ····························································································· 158
Configuring IPv6 NetStream export format ······································································································ 158
Configuring refresh rate for IPv6 NetStream version 9 templates ································································· 159
Configuring MPLS-aware NetStream ················································································································ 159
Configuring IPv6 NetStream flow aging ··················································································································· 160
Flow aging approaches ······································································································································ 160
Configuring IPv6 NetStream flow aging··········································································································· 161
Configuring IPv6 NetStream data export ·················································································································· 161
Configuring IPv6 NetStream traditional data export ······················································································· 161
Configuring IPv6 NetStream aggregation data export ··················································································· 162
Displaying and maintaining IPv6 NetStream ············································································································ 163
IPv6 NetStream configuration examples ··················································································································· 163
IPv6 NetStream traditional data export configuration example ····································································· 163
IPv6 NetStream aggregation data export configuration example ································································· 165
Configuration example for IPv6 NetStream aggregation data export on an SPC card ······························ 167

Configuring protocol packet statistics ···················································································································· 170


Protocol packet statistics overview ····························································································································· 170
Displaying and maintaining protocol packet statistics ····························································································· 170

Configuring the information center ························································································································ 171


Information center overview ········································································································································ 171
Introduction to information center ······················································································································ 171
Classification of system information ·················································································································· 172
Eight levels of system information ······················································································································ 172
Eight output destinations and ten channels of system information ································································· 172
Outputting system information by source module ···························································································· 173
Default output rules of system information ········································································································ 173
System information format ·································································································································· 174
Configuring information center··································································································································· 177
Information center configuration task list ·········································································································· 177
Outputting system information to the console ·································································································· 178
Outputting system information to a monitor terminal ······················································································ 179
Outputting system information to a log host ····································································································· 180
Outputting system information to the trap buffer······························································································ 180
Outputting system information to the log buffer ······························································································· 181
Outputting system information to the SNMP module ······················································································· 182
Outputting system information to the web interface ························································································ 183
Saving system information to a log file ············································································································· 184
Configuring synchronous information output ··································································································· 185
Disabling a port from generating link up/down logging information ··························································· 185
Displaying and maintaining information center ······································································································· 186

Information center configuration examples ··············································································································· 186
Outputting log information to a Unix log host·································································································· 186
Outputting log information to a Linux log host ································································································· 188
Outputting log information to the console ········································································································ 190

Flow logging configuration ···································································································································· 191


Flow logging overview ················································································································································ 191
Introduction to flow logging ······························································································································· 191
Flow logging versions ········································································································································· 191
Flow logging configuration task list ··························································································································· 192
Configuring flow logging version ······························································································································· 192
Configuring the source address for flow logging packets ······················································································· 193
Exporting flow logs ······················································································································································ 193
Exporting flow logs to log server ······················································································································· 193
Exporting flow logs to information center ········································································································· 194
Displaying and maintaining flow logging ················································································································· 194
Flow logging configuration example ························································································································· 195
Troubleshooting flow logging ····································································································································· 196

Index ········································································································································································ 197

Using ping, tracert, and system debugging

You can use the ping command to check the connectivity to a destination, use the tracert command to
trace the path to a destination, and use the debugging commands to diagnose system faults based on the
debugging information.

Ping
Introduction
Use the ping utility to check the reachability to a specific address.
Ping sends ICMP echo requests (ECHO-REQUEST) to the destination device. Upon receiving the requests,
the destination device responds with ICMP echo replies (ECHO-REPLY) to the source device. The source
device outputs statistics about the ping operation, including the number of packets sent, number of echo
replies received, and the round-trip time. You can measure the network performance by analyzing these
statistics.

Configuring ping
To configure the ping function:

Task: Check whether a specified address in an IP network is reachable. Use either of the following commands, which are available in any view:
• For an IPv4 network:
ping [ ip ] [ -a source-ip | -c count | -f | -h ttl | -i interface-type interface-number | -m interval | -n | -p pad | -q | -r | -s packet-size | -t timeout | -tos tos | -v | -vpn-instance vpn-instance-name ] * host
• For an IPv6 network:
ping ipv6 [ -a source-ipv6 | -c count | -m interval | -s packet-size | -t timeout | -vpn-instance vpn-instance-name ] * host [ -i interface-type interface-number ]

NOTE:
• For a low-speed network, H3C recommends that you set a larger value for the timeout timer (indicated
by the -t parameter in the command) when configuring the ping command.
• Only the directly connected segment address can be pinged if the outgoing interface is specified with
the -i argument.
• For more information about the ping lsp command, see MPLS Command Reference.
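
For example, the following command (a sketch only; the device name and addresses are placeholders rather than values from this guide) sends 10 echo requests to 1.1.2.2, waits up to 3000 milliseconds for each reply, and uses 1.1.1.1 as the source IP address:

<Sysname> ping -c 10 -t 3000 -a 1.1.1.1 1.1.2.2

The -c, -t, and -a keywords set the number of echo requests, the reply timeout, and the source IP address, as shown in the syntax above.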

Ping configuration example
Network requirements
As shown in Figure 1, check whether Device A and Device C can reach each other. If the two devices can
reach each other, get the detailed information about routes from Device A to Device C.
Figure 1 Network diagram

Configuration procedure
# Use the ping command to check whether Device A and Device C can reach each other.
<DeviceA> ping 1.1.2.2
PING 1.1.2.2: 56 data bytes, press CTRL_C to break
Reply from 1.1.2.2: bytes=56 Sequence=1 ttl=254 time=205 ms
Reply from 1.1.2.2: bytes=56 Sequence=2 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=3 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=4 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=5 ttl=254 time=1 ms

--- 1.1.2.2 ping statistics ---


5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/41/205 ms

# Get detailed information about routes from Device A to Device C.


<DeviceA> ping -r 1.1.2.2
PING 1.1.2.2: 56 data bytes, press CTRL_C to break
Reply from 1.1.2.2: bytes=56 Sequence=1 ttl=254 time=53 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=2 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=3 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=4 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=5 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
--- 1.1.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/11/53 ms

The principle of ping -r is shown in Figure 1.


1. The source device (Device A) sends an ICMP echo request with the RR option being empty to the
destination device (Device C).
2. The intermediate device (Device B) adds the IP address (1.1.2.1) of its outbound interface to the RR
option of the ICMP echo request, and forwards the packet.
3. Upon receiving the request, the destination device copies the RR option in the request and adds the
IP address (1.1.2.2) of its outbound interface to the RR option. Then the destination device sends
an ICMP echo reply.
4. The intermediate device adds the IP address (1.1.1.2) of its outbound interface to the RR option in
the ICMP echo reply, and then forwards the reply.
5. Upon receiving the reply, the source device adds the IP address (1.1.1.1) of its inbound interface
to the RR option. Finally, you can get detailed information about routes from Device A to Device C:
1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.

Tracert
Introduction
Traceroute enables you to get the IP addresses of Layer 3 devices in the path to a specific destination. You
can use traceroute to check network connectivity and identify the failed nodes in the event of network
failure.

Figure 2 Tracert operation

Traceroute uses received ICMP error messages to get the IP addresses of devices. As shown in Figure 2,
traceroute works as follows:
1. The source device (Device A) sends a UDP packet with a TTL value of 1 to the destination device
(Device D). The destination UDP port is not used by any application on the destination device.
2. The first hop (Device B, the first Layer 3 device that receives the packet) responds by sending a
TTL-expired ICMP error message to the source device, with its IP address encapsulated. In this way,
the source device can get the address of the first Layer 3 device (1.1.1.2).
3. The source device sends a packet with a TTL value of 2 to the destination device.
4. The second hop (Device C) responds with a TTL-expired ICMP error message, which gives the
source device the address of the second Layer 3 device (1.1.2.2).
5. The above process continues until the packet sent by the source device reaches the ultimate
destination device. Since no application uses the destination port specified in the packet, the
destination device responds with a port-unreachable ICMP message to the source device, with its
IP address encapsulated. In this way, the source device gets the IP address of the destination
device (1.1.3.2).
6. After receiving the port-unreachable ICMP message, the source device determines that the packet
has reached the destination device, and that the path to the destination device is 1.1.1.2 to 1.1.2.2
to 1.1.3.2.

Configuring tracert
Configuration prerequisites
Enable sending of ICMP timeout packets on the intermediate device (the device between the source and
destination devices). If the intermediate device is an H3C device, execute the ip ttl-expires enable
command on the device. For more information about this command, see Layer 3—IP Services Command
Reference.
Enable sending of ICMP destination unreachable packets on the destination device. If the destination
device is an H3C device, execute the ip unreachables enable command. For more information about this
command, see Layer 3—IP Services Command Reference.

Tracert configuration
To configure tracert:

1. Enter system view.
   Command: system-view
2. Display the routes from source to destination.
   Command for an IPv4 network: tracert [ -a source-ip | -f first-ttl | -m max-ttl | -p port | -q packet-number | -vpn-instance vpn-instance-name | -w timeout ] * host
   Command for an IPv6 network: tracert ipv6 [ -f first-ttl | -m max-ttl | -p port | -q packet-number | -vpn-instance vpn-instance-name | -w timeout ] * host
   Remarks: Use either command. Available in any view.

NOTE:
For more information about the tracert lsp command, see MPLS Command Reference.
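
For example, to trace at most 10 hops to a destination with three probes per hop, you can use a command like the following (a minimal sketch; the destination address 1.1.3.2 is an assumption):
<DeviceA> tracert -m 10 -q 3 1.1.3.2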

System debugging
Introduction to system debugging
The router provides various debugging functions. For the majority of protocols and features supported,
the system provides corresponding debugging information to help users diagnose errors.
The following two switches control the display of debugging information:
• Protocol debugging switch—Controls protocol-specific debugging information.
• Screen output switch—Controls whether to display the debugging information on a certain screen.
As Figure 3 illustrates, assume the router can provide debugging for the three modules 1, 2, and 3. The
debugging information is output on a terminal only when both the protocol debugging switch and the
screen output switch are turned on.
Figure 3 The relationship between the protocol and screen output switch

Configuring system debugging
Output of debugging information may reduce system efficiency, so administrators usually use the debugging commands only to diagnose network failures. After completing the debugging, disable the corresponding debugging function, or use the undo debugging all command to disable all debugging functions.
Output of debugging information depends on the configurations of the information center and the
debugging commands of each protocol and functional module. Displaying the debugging information
on a terminal (including console or VTY) is a common way to output debugging information. You can
also output debugging information to other destinations. For more information, see the chapter
“Configuring the information center.”
To enable the output of debugging information to a terminal:

1. Enable the terminal monitoring of system information.
   Command: terminal monitor
   Remarks: Optional. By default, terminal monitoring on the console is enabled and that on the monitoring terminal is disabled. Available in user view.
2. Enable the terminal display of debugging information.
   Command: terminal debugging
   Remarks: By default, terminal display of debugging information is disabled. Available in user view.
3. Enable debugging for a specified module.
   Command: debugging { all [ timeout time ] | module-name [ option ] }
   Remarks: By default, debugging for a specified module is disabled. Available in user view.
4. Display the enabled debugging functions.
   Command: display debugging [ interface interface-type interface-number ] [ module-name ] [ | { begin | exclude | include } regular-expression ]
   Remarks: Optional. Available in any view.

NOTE:
You must configure the debugging, terminal debugging, and terminal monitor commands before detailed debugging information can be displayed on the terminal. For more information about the terminal debugging
and terminal monitor commands, see Network Management and Monitoring Command Reference.
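
For example, the following sketch displays ICMP debugging information on the current terminal (the debugging ip icmp command is also used in the ping and tracert configuration example below):
<DeviceA> terminal monitor
<DeviceA> terminal debugging
<DeviceA> debugging ip icmp
<DeviceA> display debugging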

Ping and tracert configuration example


Network requirements
As shown in Figure 4, Device A failed to Telnet to Device C. Check whether Device A and Device C can reach each other. If they cannot reach each other, locate the failed nodes in the network.

Figure 4 Network diagram

Configuration procedure
# Use the ping command to check whether Device A and Device C can reach each other.
<DeviceA> ping 1.1.2.2
PING 1.1.2.2: 56 data bytes, press CTRL_C to break
Request time out
Request time out
Request time out
Request time out
Request time out

--- 1.1.2.2 ping statistics ---


5 packet(s) transmitted
0 packet(s) received
100.00% packet loss

# Device A and Device C cannot reach each other. Use the tracert command to locate the failed nodes.
<DeviceA> system-view
[DeviceA] ip ttl-expires enable
[DeviceA] ip unreachables enable
[DeviceA] tracert 1.1.2.2
traceroute to 1.1.2.2(1.1.2.2) 30 hops max,40 bytes packet, press CTRL_C to bre
ak
1 1.1.1.2 14 ms 10 ms 20 ms
2 * * *
3 * * *
4 * * *
5
<DeviceA>

The output shows that Device A and Device C cannot reach each other, Device A and Device B can reach each other, and an error occurred on the connection between Device B and Device C. In this case, use the debugging ip icmp command to enable ICMP debugging on Device A and Device C to check whether the devices send or receive the specified ICMP packets, or use the display ip routing-table command to check whether Device A and Device C have routes to each other.

Configuring NQA

NQA overview
Network Quality Analyzer (NQA) can perform various types of tests and collect network performance and service quality parameters, such as delay jitter, the time to establish a TCP connection, the time to establish an FTP connection, and the file transfer rate.
With the NQA test results, you can diagnose and locate network faults, stay informed of network performance in time, and take proper actions.

NQA features
Supporting multiple test types
Ping can use only the Internet Control Message Protocol (ICMP) to test the reachability of the destination
host and the round-trip time. As an enhancement to Ping, NQA provides more test types and functions.
NQA supports 11 test types: ICMP echo, DHCP, DNS, FTP, HTTP, UDP jitter, SNMP, TCP, UDP echo, voice
and DLSw.
NQA enables the client to send probe packets of different test types to detect the protocol availability
and response time of the peer. The test result helps you understand network performance.

Supporting the collaboration function


Collaboration is implemented by establishing reaction entries to monitor the detection results of NQA probes. If the number of consecutive probe failures reaches a limit, NQA informs the track module of the detection result, and the track module triggers other application modules to take predefined actions.
Figure 5 Implementing collaboration

The collaboration function comprises the following parts: the application modules, the track module, and the detection modules.
• A detection module monitors specific objects, such as the link status and network performance, and informs the track module of the detection results.
• Based on the detection results, the track module changes the status of the track entry and informs the associated application module. The track module works between the application modules and the detection modules, and hides the differences among the detection modules from the application modules.

• The application module takes actions when the tracked object changes its state.
The following example describes how a static route is monitored through collaboration:
1. NQA monitors the reachability of 192.168.0.88.
2. When 192.168.0.88 becomes unreachable, NQA notifies the track module of the change.
3. The track module notifies the static routing module of the state change.
4. The static routing module sets the static route to invalid.

NOTE:
For more information about the collaboration and the track module, see High Availability Configuration
Guide.

Supporting threshold monitoring


NQA supports threshold monitoring for performance parameters such as average delay jitter and packet
round-trip time. The performance parameters to be monitored are monitored elements. NQA monitors
threshold violations for a monitored element, and reacts to certain measurement conditions, for example,
sending trap messages to the network management server. This helps network administrators understand
the network service quality and network performance.
1. Monitored elements
Table 1 describes the monitored elements and the NQA test types in which the elements can be
monitored.
Table 1 Monitored elements and NQA test types

• Probe duration: tests excluding UDP jitter test and voice test
• Count of probe failures: tests excluding UDP jitter test and voice test
• Packet round-trip time: UDP jitter test and voice test
• Count of discarded packets: UDP jitter test and voice test
• One-way delay jitter (source-to-destination and destination-to-source): UDP jitter test and voice test
• One-way delay (source-to-destination and destination-to-source): UDP jitter test and voice test
• Calculated Planning Impairment Factor (ICPIF) (see “Configuring voice tests”): voice test
• Mean Opinion Scores (MOS) (see “Configuring voice tests”): voice test

2. Threshold types
The following threshold types are supported:
○ average—Monitors the average value of monitored data in a test. If the average value in a test exceeds the upper threshold or goes below the lower threshold, a threshold violation occurs. For example, you can monitor the average probe duration in a test.
○ accumulate—Monitors the total number of times the monitored data violates the threshold in a test. If the total number of times reaches or exceeds a specific value, a threshold violation occurs.
○ consecutive—Monitors the number of consecutive times the monitored data violates the threshold since the test group starts. If the monitored data violates the threshold consecutively for a specific number of times, a threshold violation occurs.

NOTE:
The counting for the average or accumulate threshold type is performed per test, but the counting for the consecutive type is performed from the time the test group starts.

3. Triggered actions
The following actions may be triggered:
○ none—NQA only records events for terminal display; it does not send trap information to the network management server.
○ trap-only—NQA records events and sends trap messages to the network management server.

NOTE:
NQA DNS tests do not support the action of sending trap messages. The action to be triggered in DNS
tests can only be the default one, none.

4. Reaction entry
In a reaction entry, a monitored element, a threshold type, and the action to be triggered are
configured to implement threshold monitoring.
The state of a reaction entry can be invalid, over-threshold, or below-threshold. Before an NQA test group starts, the reaction entry is in the invalid state. After each test or probe, threshold violations are counted according to the threshold type and range configured in the entry. If the threshold is violated consecutively or accumulatively for a specific number of times, the state of the entry is set to over-threshold; otherwise, it is set to below-threshold.
If the action to be triggered is configured as trap-only for a reaction entry, a trap message is generated and sent to the network management server when the state of the entry changes.

NQA concepts
Test group
An NQA test group specifies test parameters including the test type, destination address, and destination
port. Each test group is uniquely identified by an administrator name and operation tag. You can
configure and schedule multiple NQA test groups to test different objects.

Test and probe


After the NQA test group starts, tests are performed at a specific interval. During each test, a specific number of probe operations are performed. Both the test interval and the number of probe operations per test are configurable, but only one probe operation is performed during one voice test.
Probe operations vary with NQA test types.
• During a TCP or DLSw test, one probe operation means setting up one connection.
• During a UDP jitter or a voice test, one probe operation means continuously sending a specific
number of probe packets. The number of probe packets is configurable.
• During an FTP, HTTP, DHCP or DNS test, one probe operation means uploading or downloading a
file, obtaining a web page, obtaining an IP address through DHCP, or translating a domain name
to an IP address.
• During an ICMP echo or UDP echo test, one probe operation means sending an ICMP echo request
or a UDP packet.

• During an SNMP test, one probe operation means sending one SNMPv1 packet, one SNMPv2C
packet, and one SNMPv3 packet.

NQA client and server


A device with NQA test groups configured is an NQA client, and the NQA client initiates NQA tests. An NQA server responds to probe packets destined for the specified destination address and port number.
Figure 6 Relationship between the NQA client and NQA server

Not all test types require the NQA server. Only the TCP, UDP echo, UDP jitter, or voice test requires both
the NQA client and server, as shown in Figure 6.
You can create multiple TCP or UDP listening services on the NQA server. Each listens to a specific
destination address and port number. Make sure the destination IP address and port number for a
listening service on the server are the same as those configured for the test group on the NQA client.
Each listening service must be unique on the NQA server.

NQA probe operation procedure


An NQA probe operation involves the following steps:
1. The NQA client constructs probe packets for the specified type of NQA test, and sends them to the
peer device.
2. Upon receiving the probe packets, the peer sends back responses with timestamps.
3. The NQA client computes the network performance and service quality parameters, such as the
packet loss rate and round-trip time based on the received responses.

NQA configuration task list


Complete the following task to enable the NQA server:

• Configuring the NQA server: Required for TCP, UDP echo, UDP jitter, and voice tests.

To perform NQA tests successfully, make the following configurations on the NQA client:
1. Enable the NQA client.
2. Create a test group and configure test parameters. The test parameters may vary with test types.
3. Schedule the NQA test group.
Complete these tasks to configure the NQA client:

• Enabling the NQA client: Required.
• Creating an NQA test group: Required.
• Configuring an NQA test group: Required. Use any of the following test types:
  ○ Configuring ICMP echo tests
  ○ Configuring DHCP tests
  ○ Configuring DNS tests
  ○ Configuring FTP tests
  ○ Configuring HTTP tests
  ○ Configuring UDP jitter tests
  ○ Configuring SNMP tests
  ○ Configuring TCP tests
  ○ Configuring UDP echo tests
  ○ Configuring voice tests
  ○ Configuring DLSw tests
• Configuring the collaboration function: Optional.
• Configuring threshold monitoring: Optional.
• Configuring the NQA statistics collection function: Optional.
• Configuring the history records saving function: Optional.
• Configuring optional parameters for an NQA test group: Optional.
• Scheduling an NQA test group: Required.

Configuring the NQA server


To perform TCP, UDP echo, UDP jitter, or voice tests, configure the NQA server on the peer device. The NQA server responds to the probe packets sent from the NQA client by listening to the specified destination address and port number.
To configure the NQA server:

1. Enter system view.
   Command: system-view
2. Enable the NQA server.
   Command: nqa server enable
   Remarks: Disabled by default.
3. Configure the listening service.
   Command: nqa server { tcp-connect | udp-echo } ip-address port-number
   Remarks: The destination IP address and port number must be the same as those configured on the NQA client. A listening service must be unique on the NQA server.
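
For example, the following sketch enables the NQA server and creates a UDP listening service (the listening address 10.2.2.2 and port 9000 are assumptions and must match the client-side test group):
<DeviceB> system-view
[DeviceB] nqa server enable
[DeviceB] nqa server udp-echo 10.2.2.2 9000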

Enabling the NQA client


Configurations on the NQA client take effect only when the NQA client is enabled.

To enable the NQA client:

1. Enter system view.
   Command: system-view
2. Enable the NQA client.
   Command: nqa agent enable
   Remarks: Optional. Enabled by default.

Creating an NQA test group


Create an NQA test group before you configure NQA tests.
To create an NQA test group:

1. Enter system view.
   Command: system-view
2. Create an NQA test group and enter NQA test group view.
   Command: nqa entry admin-name operation-tag
   Remarks: In NQA test group view, you can specify the test type. You can also use the nqa entry command to enter the test type view of an NQA test group whose test type has been configured.

Configuring an NQA test group


Configuring ICMP echo tests
ICMP echo tests of an NQA test group are used to test reachability of a destination host according to the
ICMP echo response information. An ICMP echo test has the same function as the ping command but
provides more output information. In addition, you can specify the next hop for ICMP echo tests. ICMP
echo tests are used to locate connectivity problems in a network.
To configure ICMP echo tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as ICMP echo and enter test type view.
   Command: type icmp-echo
4. Configure the destination address of ICMP echo requests.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the size of the data field in each ICMP echo request.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
6. Configure the string to be filled in the data field of each ICMP echo request.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
7. Apply ICMP echo tests to the specified VPN.
   Command: vpn-instance vpn-instance-name
   Remarks: Optional. By default, ICMP echo tests apply to the public network.
8. Configure the source interface for ICMP echo requests.
   Command: source interface interface-type interface-number
   Remarks: Optional. By default, no source interface is configured for probe packets. The requests take the IP address of the source interface as their source IP address when no source IP address is specified. The specified source interface must be a Layer 2 Ethernet interface or a VLAN interface, and the interface must be up; otherwise, no ICMP echo requests can be sent out.
9. Configure the source IP address of ICMP echo requests.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is configured. If you configure both the source ip command and the source interface command, the source ip command takes effect. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no ICMP echo requests can be sent out.
10. Configure the next hop IP address of ICMP echo requests.
    Command: next-hop ip-address
    Remarks: Optional. By default, no next hop IP address is configured.
11. Configure optional parameters.
    Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

NOTE:
NQA ICMP echo tests are not supported in IPv6 networks. To test the reachability of an IPv6 address, use
the ping ipv6 command. For more information about the command, see Network Management and
Monitoring Command Reference.

Configuring DHCP tests
DHCP tests of an NQA test group are used to test whether a DHCP server is on the network, and how long it takes for the DHCP server to respond to a client request and assign an IP address to the client.

Configuration prerequisites
Before you start DHCP tests, configure the DHCP server. If the NQA (DHCP client) and the DHCP server
are not in the same network segment, configure a DHCP relay. For the configuration of DHCP server and
DHCP relay, see Layer 3—IP Services Configuration Guide.

Configuring DHCP tests


To configure DHCP tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as DHCP and enter test type view.
   Command: type dhcp
4. Specify an interface to perform DHCP tests.
   Command: operation interface interface-type interface-number
   Remarks: By default, no interface is configured to perform DHCP tests. The specified interface must be up; otherwise, no probe packets can be sent out.
5. Configure optional parameters.
   Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

NOTE:
• The interface that performs DHCP tests does not change its IP address. A DHCP test only simulates
address allocation in DHCP.
• When a DHCP test completes, the NQA client sends a DHCP-RELEASE packet to release the obtained IP
address.

Configuring DNS tests


DNS tests of an NQA test group are used to test whether the NQA client can translate a domain name
into an IP address through a DNS server and test the time required for resolution.

Configuration prerequisites
Before you start DNS tests, configure the mapping between a domain name and an IP address on a DNS
server.

Configuring DNS tests


To configure DNS tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as DNS and enter test type view.
   Command: type dns
4. Specify the IP address of the DNS server as the destination address of DNS packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the domain name that needs to be translated.
   Command: resolve-target domain-name
   Remarks: By default, no domain name is configured.
6. Configure optional parameters.
   Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

NOTE:
A DNS test simulates the domain name resolution. It does not save the mapping between the domain name
and the IP address.

Configuring FTP tests


FTP tests of an NQA test group are used to test the connection between the NQA client and an FTP server
and the time necessary for the FTP client to transfer a file to or download a file from the FTP server.

Configuration prerequisites
Before you start FTP tests, configure the FTP server. For example, configure the username and password
that are used to log in to the FTP server. For more information about FTP server configuration, see
Fundamentals Configuration Guide.

Configuring FTP tests


To configure FTP tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as FTP and enter test type view.
   Command: type ftp
4. Specify the IP address of the FTP server as the destination address of FTP request packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of FTP request packets.
   Command: source ip ip-address
   Remarks: By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no FTP requests can be sent out.
6. Configure the operation type.
   Command: operation { get | put }
   Remarks: Optional. By default, the operation type for FTP is get, which means obtaining files from the FTP server.
7. Configure a login username.
   Command: username name
   Remarks: By default, no login username is configured.
8. Configure a login password.
   Command: password password
   Remarks: By default, no login password is configured.
9. Specify a file to be transferred between the FTP server and the FTP client.
   Command: filename file-name
   Remarks: By default, no file is specified.
10. Set the data transmission mode for FTP tests.
    Command: mode { active | passive }
    Remarks: Optional. Active by default.
11. Configure optional parameters.
    Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

NOTE:
• When you execute the put command, a file file-name with fixed size and content is created on the FTP
server. When you execute the get command, the device does not save the files obtained from the FTP
server.
• When you download a file that does not exist on the FTP server, FTP tests fail.
• When you execute the get command, use a file with a small size. A big file may result in test failure due to timeout, or may affect other services by occupying too much network bandwidth.

Configuring HTTP tests


HTTP tests of an NQA test group are used to test the connection between the NQA client and an HTTP
server and the time required to obtain data from the HTTP server. HTTP tests enable you to detect the
connectivity and performance of the HTTP server.

Configuration prerequisites
Before you start HTTP tests, configure the HTTP server.

Configuring HTTP tests


To configure HTTP tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as HTTP and enter test type view.
   Command: type http
4. Configure the IP address of the HTTP server as the destination address of HTTP request packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of request packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no probe packets can be sent out.
6. Configure the operation type.
   Command: operation { get | post }
   Remarks: Optional. By default, the operation type for HTTP is get, which means obtaining data from the HTTP server.
7. Configure the website that an HTTP test visits.
   Command: url url
8. Configure the HTTP version used in HTTP tests.
   Command: http-version v1.0
   Remarks: Optional. By default, HTTP 1.0 is used.
9. Configure optional parameters.
   Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

NOTE:
The TCP port must be port 80 on the HTTP server for NQA HTTP tests.
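
The following is a minimal sketch of an HTTP test configuration; the server address 10.2.2.2 and the URL /index.htm are assumptions:
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type http
[DeviceA-nqa-admin-test-http] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-http] url /index.htm
[DeviceA-nqa-admin-test-http] quit
[DeviceA] nqa schedule admin test start-time now lifetime forever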

Configuring UDP jitter tests


NOTE:
Do not perform NQA UDP jitter tests on known ports, ports from 1 to 1023. Otherwise, UDP jitter tests
might fail or the corresponding services of this port might be unavailable.

Real-time services such as voice and video have high requirements on delay jitter. UDP jitter tests of an NQA test group obtain unidirectional and bidirectional delay jitters. The test results help you verify whether a network can carry real-time services.
A UDP jitter test takes the following procedure:
1. The source sends packets at regular intervals to the destination port.

2. The destination affixes a time stamp to each packet that it receives, and then sends it back to the
source.
3. Upon receiving the response, the source calculates the delay jitter, which reflects network
performance. Delay refers to the amount of time it takes a packet to be transmitted from source to
destination or from destination to source. Delay jitter is the delay variation over time.

Configuration prerequisites
UDP jitter tests require cooperation between the NQA server and the NQA client. Before you start UDP
jitter tests, configure UDP listening services on the NQA server. For more information about UDP listening
service configuration, see “Configuring the NQA server.”

Configuring UDP jitter tests


To configure UDP jitter tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as UDP jitter and enter test type view.
   Command: type udp-jitter
4. Configure the destination address of UDP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination IP address must be the same as that of the listening service on the NQA server.
5. Configure the destination port of UDP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port must be the same as that of the listening service on the NQA server.
6. Specify the source port number of UDP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
7. Configure the size of the data field in each UDP packet.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
8. Configure the string to be filled in the data field of each probe packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
9. Configure the number of probe packets to be sent during each UDP jitter probe operation.
   Command: probe packet-number packet-number
   Remarks: Optional. 10 by default.
10. Configure the interval for sending probe packets during each UDP jitter probe operation.
    Command: probe packet-interval packet-interval
    Remarks: Optional. 20 milliseconds by default.
11. Configure the interval the NQA client must wait for a response from the server before it regards the response as timed out.
    Command: probe packet-timeout packet-timeout
    Remarks: Optional. 3000 milliseconds by default.
12. Configure the source IP address for UDP jitter packets.
    Command: source ip ip-address
    Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no probe packets can be sent out.
13. Configure optional parameters.
    Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

NOTE:
The probe count command specifies the number of probe operations during one UDP jitter test. The probe
packet-number command specifies the number of probe packets sent in each UDP jitter probe operation.
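
The following is a minimal sketch of a UDP jitter test configuration; the destination address 10.2.2.2 and port 9000 are assumptions, and a matching UDP listening service must be configured on the NQA server (see “Configuring the NQA server”):
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type udp-jitter
[DeviceA-nqa-admin-test-udp-jitter] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-udp-jitter] destination port 9000
[DeviceA-nqa-admin-test-udp-jitter] quit
[DeviceA] nqa schedule admin test start-time now lifetime forever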

Configuring SNMP tests


SNMP tests of an NQA test group are used to test the time the NQA client takes to send an SNMP packet
to the SNMP agent and receive a response.

Configuration prerequisites
Before you start SNMP tests, enable the SNMP agent function on the device that serves as an SNMP agent. For more information about SNMP agent configuration, see the chapter “SNMP configuration.”

Configuring SNMP tests


To configure SNMP tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as SNMP and enter test type view.
   Command: type snmp
4. Configure the destination address of SNMP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Specify the source port of SNMP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
6. Configure the source IP address of SNMP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no probe packets can be sent out.
7. Configure optional parameters.
   Remarks: Optional. See “Configuring optional parameters for an NQA test group.”
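
For example, the following sketch tests the response time of an SNMP agent (the agent address 10.2.2.2 is an assumption, and the SNMP agent function must be enabled on that device):
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type snmp
[DeviceA-nqa-admin-test-snmp] destination ip 10.2.2.2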

Configuring TCP tests


TCP tests of an NQA test group are used to test the TCP connection between the NQA client and a port
on the NQA server and the time for setting up a connection. The test result helps you understand the
availability and performance of the services provided by the port on the server.

Configuration prerequisites
TCP tests require cooperation between the NQA server and the NQA client. Before you start TCP tests,
configure a TCP listening service on the NQA server. For more information about the TCP listening
service configuration, see “Configuring the NQA server.”

Configuring TCP tests


To configure TCP tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as TCP and enter test type view.
   Command: type tcp
4. Configure the destination address of TCP probe packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination address must be the same as the IP address of the listening service configured on the NQA server.
5. Configure the destination port of TCP probe packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port number must be the same as that of the listening service on the NQA server.
6. Configure the source IP address of TCP probe packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no probe packets can be sent out.
7. Configure optional parameters.
   Remarks: Optional. See “Configuring optional parameters for an NQA test group.”
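
For example, the following sketch tests the time to set up a TCP connection to port 9000 on the NQA server (the addresses and port are assumptions; the matching TCP listening service is configured with the nqa server tcp-connect command on the server):
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type tcp
[DeviceA-nqa-admin-test-tcp] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-tcp] destination port 9000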

Configuring UDP echo tests


UDP echo tests of an NQA test group are used to test the connectivity and round-trip time of a UDP packet
from the client to the specified UDP port on the NQA server.

Configuration prerequisites
UDP echo tests require cooperation between the NQA server and the NQA client. Before you start UDP
echo tests, configure a UDP listening service on the NQA server. For more information about the UDP
listening service configuration, see “Configuring the NQA server.”

Configuring UDP echo tests


To configure UDP echo tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as UDP echo and enter test type view.
   Command: type udp-echo
4. Configure the destination address of UDP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination address must be the same as the IP address of the listening service configured on the NQA server.
5. Configure the destination port of UDP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port number must be the same as that of the listening service on the NQA server.
6. Configure the size of the data field in each UDP packet.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
7. Configure the string to be filled in the data field of each UDP packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
8. Specify the source port of UDP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
9. Configure the source IP address of UDP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be that of an interface on the device, and the interface must be up; otherwise, no probe packets can be sent out.
10. Configure optional parameters.
    Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

Configuring voice tests


NOTE:
Do not perform voice tests on known ports, ports from 1 to 1023. Otherwise, the NQA test might fail or the
corresponding services of these ports might be unavailable.

Voice tests of an NQA test group are used to test voice over IP (VoIP) network status, and collect VoIP
network parameters so that users can adjust the network.
A voice test takes the following procedure:
1. The source (NQA client) sends voice packets of G.711 A-law, G.711 μ-law or G.729 A-law
codec type at regular intervals to the destination (NQA server).
2. The destination affixes a time stamp to each voice packet that it receives and then sends it back to
the source.
3. Upon receiving the packet, the source calculates results, such as the delay jitter and one-way delay
based on the packet time stamps. The statistics reflect network performance.
Voice test result also includes the following parameters that reflect VoIP network performance:
• Calculated Planning Impairment Factor (ICPIF)—Measures impairment to voice quality in a VoIP
network. It is decided by packet loss and delay. A higher value represents a lower service quality.
• Mean Opinion Scores (MOS)—A MOS value can be evaluated by using the ICPIF value, in the
range 1 to 5. A higher value represents a higher quality of a VoIP network.
The evaluation of voice quality also depends on users’ tolerance for voice quality, which should be taken into consideration. For users with higher tolerance for voice quality, use the advantage-factor command to configure an advantage factor. When the system calculates the ICPIF value, the advantage factor is subtracted to modify the ICPIF and MOS values, so that both objective and subjective factors are considered when you evaluate voice quality.

Configuration prerequisites
Voice tests require cooperation between the NQA server and the NQA client. Before you start voice tests,
configure a UDP listening service on the NQA server. For more information about UDP listening service
configuration, see “Configuring the NQA server.”

Configuring voice tests


To configure voice tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as voice and enter test type view.
   Command: type voice
4. Configure the destination address of voice probe packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured for a test operation. The destination IP address must be the same as that of the listening service on the NQA server.
5. Configure the destination port of voice probe packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port must be the same as that of the listening service on the NQA server.
6. Configure the codec type.
   Command: codec-type { g711a | g711u | g729a }
   Remarks: Optional. By default, the codec type is G.711 A-law.
7. Configure the advantage factor for calculating MOS and ICPIF values.
   Command: advantage-factor factor
   Remarks: Optional. By default, the advantage factor is 0.
8. Specify the source IP address of probe packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no probe packets can be sent out.
9. Specify the source port number of probe packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
10. Configure the size of the data field in each probe packet.
    Command: data-size size
    Remarks: Optional. By default, the probe packet size depends on the codec type: 172 bytes for the G.711 A-law and G.711 μ-law codec types, and 32 bytes for the G.729 A-law codec type.
11. Configure the string to be filled in the data field of each probe packet.
    Command: data-fill string
    Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
12. Configure the number of probe packets to be sent during each voice probe operation.
    Command: probe packet-number packet-number
    Remarks: Optional. 1000 by default.
13. Configure the interval for sending probe packets during each voice probe operation.
    Command: probe packet-interval packet-interval
    Remarks: Optional. 20 milliseconds by default.
14. Configure the interval the NQA client must wait for a response from the server before it regards the response as timed out.
    Command: probe packet-timeout packet-timeout
    Remarks: Optional. 5000 milliseconds by default.
15. Configure optional parameters.
    Remarks: Optional. See “Configuring optional parameters for an NQA test group.”

NOTE:
Only one probe operation is performed in one voice test.
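
The following is a minimal sketch of a voice test configuration using the G.729 A-law codec; the destination address 10.2.2.2 and port 9000 are assumptions, and a matching UDP listening service must be configured on the NQA server:
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type voice
[DeviceA-nqa-admin-test-voice] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-voice] destination port 9000
[DeviceA-nqa-admin-test-voice] codec-type g729a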

Configuring DLSw tests


DLSw tests of an NQA test group are used to test the response time of a DLSw device.

Configuration prerequisites
Before you start DLSw tests, enable the DLSw function on the peer device.

Configuring a DLSw test


To configure DLSw tests:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Configure the test type as DLSw and enter test type view.
   Command: type dlsw
4. Configure the destination address of probe packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of probe packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up; otherwise, no probe packets can be sent out.
6. Configure optional parameters.
   Remarks: Optional. See “Configuring optional parameters for an NQA test group.”
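
For example (a sketch; the destination address 10.2.2.2 is an assumption, and DLSw must be enabled on the peer device):
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type dlsw
[DeviceA-nqa-admin-test-dlsw] destination ip 10.2.2.2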

Configuring the collaboration function


Collaboration is implemented by establishing reaction entries to monitor the detection results of a test
group. If the number of consecutive probe failures reaches the threshold, the configured action is
triggered.
To configure the collaboration function:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Enter test type view of the test group.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo }
   Remarks: The collaboration function is not supported in UDP jitter and voice tests.
4. Configure a reaction entry.
   Command: reaction item-number checked-element probe-fail threshold-type consecutive consecutive-occurrences action-type trigger-only
   Remarks: Not created by default. You cannot modify the content of an existing reaction entry.
5. Exit to system view.
   Command: quit
6. Configure a track entry and associate it with the reaction entry of the NQA test group.
   Command: track entry-number nqa entry admin-name operation-tag reaction item-number
   Remarks: Not created by default.
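
For example, the following sketch (entry numbers and the failure threshold are assumptions) creates reaction entry 1, which is triggered after three consecutive probe failures, and associates it with track entry 1 for use by an application module:
[DeviceA-nqa-admin-test-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 3 action-type trigger-only
[DeviceA-nqa-admin-test-icmp-echo] quit
[DeviceA] track 1 nqa entry admin test reaction 1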

Configuring threshold monitoring


Configuration prerequisites
Before you configure threshold monitoring, complete the following tasks:

• Configure the destination address of the trap message by using the snmp-agent target-host
command. For more information about the snmp-agent target-host command, see Network
Management and Monitoring Command Reference.
• Create an NQA test group and configure related parameters.

Configuring threshold monitoring


To configure threshold monitoring:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Enter test type view of the test group.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Configure the test group to send traps to the network management server under specified conditions.
   Command: reaction trap { probe-failure consecutive-probe-failures | test-complete | test-failure cumulate-probe-failures }
5. Configure a reaction entry for monitoring the probe duration of a test (not supported in UDP jitter and voice tests).
   Command: reaction item-number checked-element probe-duration threshold-type { accumulate accumulate-occurrences | average | consecutive consecutive-occurrences } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
6. Configure a reaction entry for monitoring the probe failure times (not supported in UDP jitter and voice tests).
   Command: reaction item-number checked-element probe-fail threshold-type { accumulate accumulate-occurrences | consecutive consecutive-occurrences } [ action-type { none | trap-only } ]
7. Configure a reaction entry for monitoring packet round-trip time (only supported in UDP jitter and voice tests).
   Command: reaction item-number checked-element rtt threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
8. Configure a reaction entry for monitoring the packet loss in each test (only supported in UDP jitter and voice tests).
   Command: reaction item-number checked-element packet-loss threshold-type accumulate accumulate-occurrences [ action-type { none | trap-only } ]
9. Configure a reaction entry for monitoring one-way delay jitter (only supported in UDP jitter and voice tests).
   Command: reaction item-number checked-element { jitter-ds | jitter-sd } threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
10. Configure a reaction entry for monitoring the one-way delay (only supported in UDP jitter and voice tests).
    Command: reaction item-number checked-element { owd-ds | owd-sd } threshold-value upper-threshold lower-threshold
11. Configure a reaction entry for monitoring the ICPIF value (only supported in voice tests).
    Command: reaction item-number checked-element icpif threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
12. Configure a reaction entry for monitoring the MOS value (only supported in voice tests).
    Command: reaction item-number checked-element mos threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
Remarks for steps 4 through 12: configure the entries as needed to send traps. By default, no traps are sent to the network management server.

NOTE:
• NQA DNS tests do not support the action of sending trap messages. The action to be triggered in DNS
tests can only be the default one, none.
• Only the test-complete keyword is supported for the reaction trap command in a voice test.
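
For example, for a UDP jitter test group, the following sketch (thresholds are assumptions, and the trap destination must already be configured with the snmp-agent target-host command) sends a trap when the average packet round-trip time in a test exceeds the upper threshold of 50 milliseconds or goes below the lower threshold of 5 milliseconds:
[DeviceA-nqa-admin-test-udp-jitter] reaction 1 checked-element rtt threshold-type average threshold-value 50 5 action-type trap-only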

Configuring the NQA statistics collection function


NQA groups tests completed in a time period for a test group, and calculates the test result statistics. The
statistics form a statistics group. To view information about the statistics groups, use the display nqa
statistics command. To set the interval for collecting statistics, use the statistics interval command.
When the number of statistics groups kept reaches the upper limit and a new statistics group is to be
saved, the earliest statistics group is deleted. To set the maximum number of statistics groups that can be
kept, use the statistics max-group command.
A statistics group is formed after the last test is completed within the specified interval. When its hold time
expires, the statistics group is deleted. To set the hold time of statistics groups for a test group, use the
statistics hold-time command.
To configure the NQA statistics collection function:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Enter test type view of the test group.
   Command: type { dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Configure the interval for collecting the statistics of test results.
   Command: statistics interval interval
   Remarks: Optional. 60 minutes by default.
5. Configure the maximum number of statistics groups that can be kept.
   Command: statistics max-group number
   Remarks: Optional. 2 by default. To disable collecting NQA statistics, set the maximum number to 0.
6. Configure the hold time of statistics groups.
   Command: statistics hold-time hold-time
   Remarks: Optional. 120 minutes by default.

NOTE:
• The NQA statistics collection function is not supported in DHCP tests.
• If you use the frequency command to set the frequency between two consecutive tests to 0, only one test
is performed, and no statistics group information is collected.
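
For example (values are assumptions), to collect statistics every 30 minutes, keep at most five statistics groups, and hold each group for 60 minutes:
[DeviceA-nqa-admin-test-udp-jitter] statistics interval 30
[DeviceA-nqa-admin-test-udp-jitter] statistics max-group 5
[DeviceA-nqa-admin-test-udp-jitter] statistics hold-time 60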

Configuring the history records saving function


The history records saving function enables the system to save the history records of NQA tests. To view
the history records of a test group, use the display nqa history command.
In addition, you can configure the following elements:
• Lifetime of the history records—The records are removed when the lifetime is reached.
• The maximum number of history records that can be saved in a test group—If the number of history
records in a test group exceeds the maximum number, the earliest history records are removed.
To configure the history records saving function of an NQA test group:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Enter NQA test type view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Enable the saving of the history records of the NQA test group.
   Command: history-record enable
   Remarks: By default, history records of the NQA test group are not saved.
5. Set the lifetime of the history records in an NQA test group.
   Command: history-record keep-time keep-time
   Remarks: Optional. By default, the history records in the NQA test group are kept for 120 minutes.
6. Configure the maximum number of history records that can be saved for a test group.
   Command: history-record number number
   Remarks: Optional. By default, the maximum number of records that can be saved for a test group is 50.
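
For example (values are assumptions), to save up to 10 history records for 240 minutes:
[DeviceA-nqa-admin-test-icmp-echo] history-record enable
[DeviceA-nqa-admin-test-icmp-echo] history-record keep-time 240
[DeviceA-nqa-admin-test-icmp-echo] history-record number 10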

Configuring optional parameters for an NQA test group
Optional parameters for an NQA test group are valid only for tests in this test group.
Unless otherwise specified, the following optional parameters are applicable to all test types.
To configure optional parameters for an NQA test group:

1. Enter system view.
   Command: system-view
2. Enter NQA test group view.
   Command: nqa entry admin-name operation-tag
3. Enter test type view of a test group.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Configure the description for a test group.
   Command: description text
   Remarks: Optional. By default, no description is available for a test group.
5. Configure the interval between two consecutive tests for a test group.
   Command: frequency interval
   Remarks: Optional. By default, the interval between two consecutive tests for a test group is 0 milliseconds, and only one test is performed. If the last test is not completed when the interval specified by the frequency command is reached, a new test does not start.
6. Configure the number of probe operations to be performed in one test.
   Command: probe count times
   Remarks: Optional. By default, one probe operation is performed in one test. Not available for voice tests; only one probe operation can be performed in one voice test.
7. Configure the NQA probe timeout time.
   Command: probe timeout timeout
   Remarks: Optional. By default, the timeout time is 3000 milliseconds. Not available for UDP jitter tests.
8. Configure the maximum number of hops a probe packet traverses in the network.
   Command: ttl value
   Remarks: Optional. 20 by default. Not available for DHCP tests.
9. Configure the ToS field in an IP packet header in an NQA probe packet.
   Command: tos value
   Remarks: Optional. 0 by default. Not available for DHCP tests.
10. Enable the routing table bypass function.
    Command: route-option bypass-route
    Remarks: Optional. Disabled by default. Not available for DHCP tests.
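
For example (values are assumptions), to perform 10 probe operations per test, use a 500 millisecond probe timeout, and run tests every 5000 milliseconds, as in the ICMP echo configuration example later in this chapter:
[DeviceA-nqa-admin-test-icmp-echo] probe count 10
[DeviceA-nqa-admin-test-icmp-echo] probe timeout 500
[DeviceA-nqa-admin-test-icmp-echo] frequency 5000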

Scheduling an NQA test group


You can schedule an NQA test group by setting the start time and test duration for a test group.
A test group performs tests between the scheduled start time and the end time (the start time plus the test duration). If the scheduled start time is earlier than the system time, the test group starts testing immediately. If both the scheduled start time and end time are earlier than the system time, no test starts. To view the current system time, use the display clock command.

Configuration prerequisites
Before you schedule an NQA test group, complete the following tasks:
• Configure test parameters required for the test type.
• Configure the NQA server for tests that require cooperation with the NQA server.

Scheduling an NQA test group


To schedule an NQA test group:

1. Enter system view.
   Command: system-view
2. Schedule an NQA test group.
   Command: nqa schedule admin-name operation-tag start-time { hh:mm:ss [ yyyy/mm/dd ] | now } lifetime { lifetime | forever }
   Remarks: The now keyword specifies that the test group starts testing immediately, and the forever keyword specifies that the tests do not stop unless you use the undo nqa schedule command.
3. Configure the maximum number of tests that the NQA client can simultaneously perform.
   Command: nqa agent max-concurrent number
   Remarks: Optional. By default, a maximum of 10 tests can be simultaneously performed.

NOTE:
• After an NQA test group is scheduled, you cannot enter the test group view or test type view.
• System time adjustment does not affect test groups that have started or finished. It only affects test groups that have not started.

Displaying and maintaining NQA


• Display history records of NQA test groups:
  display nqa history [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
• Display the current monitoring results of reaction entries:
  display nqa reaction counters [ admin-name operation-tag [ item-number ] ] [ | { begin | exclude | include } regular-expression ]
• Display the results of the last NQA test:
  display nqa result [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
• Display statistics of test results for the specified or all test groups:
  display nqa statistics [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
• Display NQA server status:
  display nqa server status [ | { begin | exclude | include } regular-expression ]
All of these commands are available in any view.

NQA configuration examples
ICMP echo test configuration example
Network requirements
As shown in Figure 7, configure NQA ICMP echo tests to test whether the NQA client (Device A) can
send packets through a specific next hop to the specified destination (Device B) and test the round-trip
time of the packets.
Figure 7 Network diagram
(The diagram shows NQA client Device A connected to Device B over two paths: through Device C, with Device A's interface 10.1.1.1/24 facing Device C's 10.1.1.2/24 and Device C's 10.2.2.1/24 facing Device B's 10.2.2.2/24, and through Device D, with Device A's 10.3.1.1/24 facing Device D's 10.3.1.2/24 and Device D's 10.4.1.1/24 facing Device B's 10.4.1.2/24.)

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

# Create an ICMP echo test group and specify 10.2.2.2 as the destination IP address for ICMP echo
requests to be sent.
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type icmp-echo
[DeviceA-nqa-admin-test-icmp-echo] destination ip 10.2.2.2

# Configure 10.1.1.2 as the next hop IP address for ICMP echo requests, so that the ICMP echo requests are sent through Device C to Device B (the destination).
[DeviceA-nqa-admin-test-icmp-echo] next-hop 10.1.1.2

# Configure the device to perform 10 probe operations per test and to perform tests at an interval of 5000 milliseconds. Set the NQA probe timeout time to 500 milliseconds.
[DeviceA-nqa-admin-test-icmp-echo] probe count 10
[DeviceA-nqa-admin-test-icmp-echo] probe timeout 500

[DeviceA-nqa-admin-test-icmp-echo] frequency 5000

# Enable the saving of history records and configure the maximum number of history records that can be
saved for a test group.
[DeviceA-nqa-admin-test-icmp-echo] history-record enable
[DeviceA-nqa-admin-test-icmp-echo] history-record number 10
[DeviceA-nqa-admin-test-icmp-echo] quit

# Start ICMP echo tests.


[DeviceA] nqa schedule admin test start-time now lifetime forever

# Stop the ICMP echo tests after a period of time.


[DeviceA] undo nqa schedule admin test

# Display the results of the last ICMP echo test.


[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 2/5/3
Square-Sum of round trip time: 96
Last succeeded probe time: 2007-08-23 15:00:01.2
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0

# Display the history of ICMP echo tests.


[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
370 3 Succeeded 2007-08-23 15:00:01.2
369 3 Succeeded 2007-08-23 15:00:01.2
368 3 Succeeded 2007-08-23 15:00:01.2
367 5 Succeeded 2007-08-23 15:00:01.2
366 3 Succeeded 2007-08-23 15:00:01.2
365 3 Succeeded 2007-08-23 15:00:01.2
364 3 Succeeded 2007-08-23 15:00:01.1
363 2 Succeeded 2007-08-23 15:00:01.1
362 3 Succeeded 2007-08-23 15:00:01.1
361 2 Succeeded 2007-08-23 15:00:01.1

DHCP test configuration example
Network requirements
As shown in Figure 8, configure NQA DHCP tests to test the time required for Router A to obtain an IP
address from the DHCP server (Router B).
Figure 8 Network diagram

Configuration procedure
# Create a DHCP test group and specify interface GigabitEthernet 3/1/1 to perform NQA DHCP tests.
<RouterA> system-view
[RouterA] nqa entry admin test
[RouterA-nqa-admin-test] type dhcp
[RouterA-nqa-admin-test-dhcp] operation interface GigabitEthernet 3/1/1

# Enable the saving of history records.


[RouterA-nqa-admin-test-dhcp] history-record enable
[RouterA-nqa-admin-test-dhcp] quit

# Start DHCP tests.


[RouterA] nqa schedule admin test start-time now lifetime forever

# Stop DHCP tests after a period of time.


[RouterA] undo nqa schedule admin test

# Display the results of the last DHCP test.


[RouterA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 512/512/512
Square-Sum of round trip time: 262144
Last succeeded probe time: 2007-11-22 09:54:03.8
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0

# Display the history of DHCP tests.


[RouterA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 512 Succeeded 2007-11-22 09:54:03.8

DNS test configuration example


Network requirements
As shown in Figure 9, configure NQA DNS tests to test whether Device A can translate the domain name
host.com into an IP address through the DNS server and test the time required for resolution.
Figure 9 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

# Create a DNS test group.


<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type dns

# Specify the IP address of the DNS server 10.2.2.2 as the destination address for DNS tests, and specify
host.com as the domain name to be translated.
[DeviceA-nqa-admin-test-dns] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-dns] resolve-target host.com

# Enable the saving of history records.


[DeviceA-nqa-admin-test-dns] history-record enable
[DeviceA-nqa-admin-test-dns] quit

# Start DNS tests.


[DeviceA] nqa schedule admin test start-time now lifetime forever

# Stop the DNS tests after a period of time.


[DeviceA] undo nqa schedule admin test

# Display the results of the last DNS test.


[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 62/62/62
Square-Sum of round trip time: 3844
Last succeeded probe time: 2008-11-10 10:49:37.3
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0

# Display the history of DNS tests.


[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 62 Succeeded 2008-11-10 10:49:37.3

FTP test configuration example


Network requirements
As shown in Figure 10, configure NQA FTP tests to test the connection with a specific FTP server and the
time required for Device A to upload a file to the FTP server. The login username is admin, the login
password is systemtest, and the file to be transferred to the FTP server is config.txt.
Figure 10 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

# Create an FTP test group.


<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type ftp

# Specify the IP address of the FTP server 10.2.2.2 as the destination IP address for FTP tests.
[DeviceA-nqa-admin-test-ftp] destination ip 10.2.2.2

# Specify 10.1.1.1 as the source IP address for probe packets.


[DeviceA-nqa-admin-test-ftp] source ip 10.1.1.1

# Set the FTP username to admin, and password to systemtest.


[DeviceA-nqa-admin-test-ftp] username admin
[DeviceA-nqa-admin-test-ftp] password systemtest

# Configure the device to upload file config.txt to the FTP server for each probe operation.
[DeviceA-nqa-admin-test-ftp] operation put
[DeviceA-nqa-admin-test-ftp] filename config.txt

# Enable the saving of history records.


[DeviceA-nqa-admin-test-ftp] history-record enable
[DeviceA-nqa-admin-test-ftp] quit

# Start FTP tests.
[DeviceA] nqa schedule admin test start-time now lifetime forever

# Stop the FTP tests after a period of time.


[DeviceA] undo nqa schedule admin test

# Display the results of the last FTP test.


[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 173/173/173
Square-Sum of round trip time: 29929
Last succeeded probe time: 2007-11-22 10:07:28.6
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0

# Display the history of FTP tests.


[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 173 Succeeded 2007-11-22 10:07:28.6

HTTP test configuration example


Network requirements
As shown in Figure 11, configure NQA HTTP tests to test the connection with a specific HTTP server and
the time required to obtain data from the HTTP server.
Figure 11 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

# Create an HTTP test group.


<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type http

# Specify the IP address of the HTTP server 10.2.2.2 as the destination IP address for HTTP tests.
[DeviceA-nqa-admin-test-http] destination ip 10.2.2.2

# Configure the device to get data from the HTTP server for each HTTP probe operation. (get is the default
HTTP operation type, and this step can be omitted.)
[DeviceA-nqa-admin-test-http] operation get

# Configure the HTTP tests to visit the web page /index.htm.


[DeviceA-nqa-admin-test-http] url /index.htm

# Configure HTTP version 1.0 to be used in HTTP tests. (Version 1.0 is the default version, and this step
can be omitted.)
[DeviceA-nqa-admin-test-http] http-version v1.0

# Enable the saving of history records.


[DeviceA-nqa-admin-test-http] history-record enable
[DeviceA-nqa-admin-test-http] quit

# Start HTTP tests.


[DeviceA] nqa schedule admin test start-time now lifetime forever

# Stop HTTP tests after a period of time.


[DeviceA] undo nqa schedule admin test

# Display results of the last HTTP test.


[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 64/64/64
Square-Sum of round trip time: 4096
Last succeeded probe time: 2007-11-22 10:12:47.9
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0

# Display the history of HTTP tests.


[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 64 Succeeded 2007-11-22 10:12:47.9

UDP jitter test configuration example
Network requirements
As shown in Figure 12, configure NQA UDP jitter tests to test the delay jitter of packet transmission
between Device A and Device B.
Figure 12 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

1. Configure Device B:
# Enable the NQA server and configure a listening service to listen to IP address 10.2.2.2 and
UDP port 9000.
<DeviceB> system-view
[DeviceB] nqa server enable
[DeviceB] nqa server udp-echo 10.2.2.2 9000
2. Configure Device A:
# Create a UDP jitter test group.
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type udp-jitter
# Configure UDP jitter packets to use 10.2.2.2 as the destination IP address and port 9000 as the
destination port.
[DeviceA-nqa-admin-test-udp-jitter] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-udp-jitter] destination port 9000
# Configure the device to perform UDP jitter tests at an interval of 1000 milliseconds.
[DeviceA-nqa-admin-test-udp-jitter] frequency 1000
[DeviceA-nqa-admin-test-udp-jitter] quit
# Start UDP jitter tests.
[DeviceA] nqa schedule admin test start-time now lifetime forever
# Stop UDP jitter tests after a period of time.
[DeviceA] undo nqa schedule admin test
# Display the result of the last UDP jitter test.
[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 15/32/17
Square-Sum of round trip time: 3235
Last succeeded probe time: 2008-05-29 13:56:17.6
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
UDP-jitter results:
RTT number: 10
Min positive SD: 4 Min positive DS: 1
Max positive SD: 21 Max positive DS: 28
Positive SD number: 5 Positive DS number: 4
Positive SD sum: 52 Positive DS sum: 38
Positive SD average: 10 Positive DS average: 10
Positive SD square sum: 754 Positive DS square sum: 460
Min negative SD: 1 Min negative DS: 6
Max negative SD: 13 Max negative DS: 22
Negative SD number: 4 Negative DS number: 5
Negative SD sum: 38 Negative DS sum: 52
Negative SD average: 10 Negative DS average: 10
Negative SD square sum: 460 Negative DS square sum: 754
One way results:
Max SD delay: 15 Max DS delay: 16
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 10 Number of DS delay: 10
Sum of SD delay: 78 Sum of DS delay: 85
Square sum of SD delay: 666 Square sum of DS delay: 787
SD lost packet(s): 0 DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
# Display the statistics of UDP jitter tests.
[DeviceA] display nqa statistics admin test
NQA entry (admin admin, tag test) test statistics:
NO. : 1
Destination IP address: 10.2.2.2
Start time: 2008-05-29 13:56:14.0
Life time: 47 seconds
Send operation times: 410 Receive response times: 410
Min/Max/Average round trip time: 1/93/19
Square-Sum of round trip time: 206176
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
UDP-jitter results:
RTT number: 410
Min positive SD: 3 Min positive DS: 1
Max positive SD: 30 Max positive DS: 79
Positive SD number: 186 Positive DS number: 158
Positive SD sum: 2602 Positive DS sum: 1928
Positive SD average: 13 Positive DS average: 12
Positive SD square sum: 45304 Positive DS square sum: 31682
Min negative SD: 1 Min negative DS: 1
Max negative SD: 30 Max negative DS: 78
Negative SD number: 181 Negative DS number: 209
Negative SD sum: 181 Negative DS sum: 209
Negative SD average: 13 Negative DS average: 14
Negative SD square sum: 46994 Negative DS square sum: 3030
One way results:
Max SD delay: 46 Max DS delay: 46
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 410 Number of DS delay: 410
Sum of SD delay: 3705 Sum of DS delay: 3891
Square sum of SD delay: 45987 Square sum of DS delay: 49393
SD lost packet(s): 0 DS lost packet(s): 0
Lost packet(s) for unknown reason: 0

NOTE:
The display nqa history command does not show the results of UDP jitter tests. To know the result of a UDP
jitter test, use the display nqa result command to view the probe results of the latest NQA test, or use the
display nqa statistics command to view the statistics of NQA tests.

SNMP test configuration example


Network requirements
As shown in Figure 13, configure NQA SNMP tests to test the time it takes for Device A to send an SNMP
query packet to the SNMP agent and receive a response packet.
Figure 13 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

1. Configure the SNMP agent (Device B):
# Enable the SNMP agent service and set the SNMP version to all, the read community to public,
and the write community to private.
<DeviceB> system-view
[DeviceB] snmp-agent sys-info version all
[DeviceB] snmp-agent community read public
[DeviceB] snmp-agent community write private
2. Configure Device A:
# Create an SNMP test group and configure SNMP packets to use 10.2.2.2 as their destination
IP address.
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type snmp
[DeviceA-nqa-admin-test-snmp] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test-snmp] history-record enable
[DeviceA-nqa-admin-test-snmp] quit
# Start SNMP tests.
[DeviceA] nqa schedule admin test start-time now lifetime forever
# Stop the SNMP tests after a period of time.
[DeviceA] undo nqa schedule admin test
# Display the results of the last SNMP test.
[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 50/50/50
Square-Sum of round trip time: 2500
Last succeeded probe time: 2007-11-22 10:24:41.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history of SNMP tests.
[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 50 Timeout 2007-11-22 10:24:41.1

TCP test configuration example
Network requirements
As shown in Figure 14, configure NQA TCP tests to test the time for establishing a TCP connection
between Device A and Device B.
Figure 14 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

1. Configure Device B:
# Enable the NQA server and configure a listening service to listen to IP address 10.2.2.2 and
TCP port 9000.
<DeviceB> system-view
[DeviceB] nqa server enable
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
2. Configure Device A:
# Create a TCP test group.
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type tcp
# Configure TCP probe packets to use 10.2.2.2 as the destination IP address and port 9000 as
the destination port.
[DeviceA-nqa-admin-test-tcp] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-tcp] destination port 9000
# Enable the saving of history records.
[DeviceA-nqa-admin-test-tcp] history-record enable
[DeviceA-nqa-admin-test-tcp] quit
# Start TCP tests.
[DeviceA] nqa schedule admin test start-time now lifetime forever
# Stop the TCP tests after a period of time.
[DeviceA] undo nqa schedule admin test
# Display the results of the last TCP test.
[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 13/13/13
Square-Sum of round trip time: 169
Last succeeded probe time: 2007-11-22 10:27:25.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history of TCP tests.
[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 13 Succeeded 2007-11-22 10:27:25.1

UDP echo test configuration example


Network requirements
As shown in Figure 15, configure NQA UDP echo tests to test the round-trip time between Device A and
Device B. The destination port number is 8000.
Figure 15 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

1. Configure Device B:
# Enable the NQA server and configure a listening service to listen to IP address 10.2.2.2 and
UDP port 8000.
<DeviceB> system-view
[DeviceB] nqa server enable
[DeviceB] nqa server udp-echo 10.2.2.2 8000
2. Configure Device A:
# Create a UDP echo test group.
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type udp-echo
# Configure UDP packets to use 10.2.2.2 as the destination IP address and port 8000 as the
destination port.
[DeviceA-nqa-admin-test-udp-echo] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-udp-echo] destination port 8000
# Enable the saving of history records.
[DeviceA-nqa-admin-test-udp-echo] history-record enable
[DeviceA-nqa-admin-test-udp-echo] quit
# Start UDP echo tests.
[DeviceA] nqa schedule admin test start-time now lifetime forever
# Stop UDP echo tests after a period of time.
[DeviceA] undo nqa schedule admin test
# Display the results of the last UDP echo test.
[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 25/25/25
Square-Sum of round trip time: 625
Last succeeded probe time: 2007-11-22 10:36:17.9
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history of UDP echo tests.
[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 25 Succeeded 2007-11-22 10:36:17.9

Voice test configuration example


Network requirements
As shown in Figure 16, configure NQA voice tests to test the delay jitter of voice packet transmission and
voice quality between Device A and Device B.
Figure 16 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

1. Configure Device B:
# Enable the NQA server and configure a listening service to listen to IP address 10.2.2.2 and
UDP port 9000.
<DeviceB> system-view
[DeviceB] nqa server enable
[DeviceB] nqa server udp-echo 10.2.2.2 9000
2. Configure Device A:
# Create a voice test group.
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type voice
# Configure voice probe packets to use 10.2.2.2 as the destination IP address and port 9000 as
the destination port.
[DeviceA-nqa-admin-test-voice] destination ip 10.2.2.2
[DeviceA-nqa-admin-test-voice] destination port 9000
[DeviceA-nqa-admin-test-voice] quit
# Start voice tests.
[DeviceA] nqa schedule admin test start-time now lifetime forever
# Stop the voice tests after a period of time.
[DeviceA] undo nqa schedule admin test
# Display the result of the last voice test.
[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1000 Receive response times: 1000
Min/Max/Average round trip time: 31/1328/33
Square-Sum of round trip time: 2844813
Last succeeded probe time: 2008-06-13 09:49:31.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
Voice results:
RTT number: 1000
Min positive SD: 1 Min positive DS: 1
Max positive SD: 204 Max positive DS: 1297
Positive SD number: 257 Positive DS number: 259
Positive SD sum: 759 Positive DS sum: 1797
Positive SD average: 2 Positive DS average: 6
Positive SD square sum: 54127 Positive DS square sum: 1691967
Min negative SD: 1 Min negative DS: 1
Max negative SD: 203 Max negative DS: 1297
Negative SD number: 255 Negative DS number: 259
Negative SD sum: 759 Negative DS sum: 1796
Negative SD average: 2 Negative DS average: 6
Negative SD square sum: 53655 Negative DS square sum: 1691776
One way results:
Max SD delay: 343 Max DS delay: 985
Min SD delay: 343 Min DS delay: 985
Number of SD delay: 1 Number of DS delay: 1
Sum of SD delay: 343 Sum of DS delay: 985
Square sum of SD delay: 117649 Square sum of DS delay: 970225
SD lost packet(s): 0 DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
Voice scores:
MOS value: 4.38 ICPIF value: 0
# Display the statistics of voice tests.
[DeviceA] display nqa statistics admin test
NQA entry (admin admin, tag test) test statistics:
NO. : 1
Destination IP address: 10.2.2.2
Start time: 2008-06-13 09:45:37.8
Life time: 331 seconds
Send operation times: 4000 Receive response times: 4000
Min/Max/Average round trip time: 15/1328/32
Square-Sum of round trip time: 7160528
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
Voice results:
RTT number: 4000
Min positive SD: 1 Min positive DS: 1
Max positive SD: 360 Max positive DS: 1297
Positive SD number: 1030 Positive DS number: 1024
Positive SD sum: 4363 Positive DS sum: 5423
Positive SD average: 4 Positive DS average: 5
Positive SD square sum: 497725 Positive DS square sum: 2254957
Min negative SD: 1 Min negative DS: 1
Max negative SD: 360 Max negative DS: 1297
Negative SD number: 1028 Negative DS number: 1022
Negative SD sum: 1028 Negative DS sum: 1022
Negative SD average: 4 Negative DS average: 5
Negative SD square sum: 495901 Negative DS square sum: 5419
One way results:
Max SD delay: 359 Max DS delay: 985
Min SD delay: 0 Min DS delay: 0
Number of SD delay: 4 Number of DS delay: 4
Sum of SD delay: 1390 Sum of DS delay: 1079
Square sum of SD delay: 483202 Square sum of DS delay: 973651
SD lost packet(s): 0 DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
Voice scores:
Max MOS value: 4.38 Min MOS value: 4.38
Max ICPIF value: 0 Min ICPIF value: 0

NOTE:
The display nqa history command cannot show you the results of voice tests. To know the result of a voice
test, use the display nqa result command to view the probe results of the latest NQA test, or use the
display nqa statistics command to view the statistics of NQA tests.

DLSw test configuration example


Network requirements
As shown in Figure 17, configure NQA DLSw tests to test the response time of the DLSw device.
Figure 17 Network diagram

Configuration procedure

NOTE:
Before you make the configuration, make sure the devices can reach each other.

# Create a DLSw test group and configure DLSw probe packets to use 10.2.2.2 as the destination IP
address.
<DeviceA> system-view
[DeviceA] nqa entry admin test
[DeviceA-nqa-admin-test] type dlsw
[DeviceA-nqa-admin-test-dlsw] destination ip 10.2.2.2

# Enable the saving of history records.


[DeviceA-nqa-admin-test-dlsw] history-record enable
[DeviceA-nqa-admin-test-dlsw] quit

# Start DLSw tests.


[DeviceA] nqa schedule admin test start-time now lifetime forever

# Stop the DLSw tests after a period of time.
[DeviceA] undo nqa schedule admin test

# Display the result of the last DLSw test.


[DeviceA] display nqa result admin test
NQA entry (admin admin, tag test) test results:
Destination IP address: 10.2.2.2
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 19/19/19
Square-Sum of round trip time: 361
Last succeeded probe time: 2007-11-22 10:40:27.7
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0

# Display the history of DLSw tests.


[DeviceA] display nqa history admin test
NQA entry (admin admin, tag test) history record(s):
Index Response Status Time
1 19 Succeeded 2007-11-22 10:40:27.7

NQA collaboration configuration example


Network requirements
As shown in Figure 18, configure a static route to Router C on Router A, with Router B as the next hop.
Associate the static route, track entry, and NQA test group to verify whether the static route is active in
real time.
Figure 18 Network diagram
(Router A GE3/1/1 10.2.1.2/24 connects to Router B GE3/1/1 10.2.1.1/24; Router B GE3/1/2 10.1.1.1/24 connects to Router C GE3/1/1 10.1.1.2/24.)

Configuration procedure
1. Assign each interface an IP address. (Details not shown)
2. On Router A, configure a unicast static route and associate the static route with a track entry.

# Configure a static route to 10.1.1.0/24, with the next hop 10.2.1.1, and associate the static route
with track entry 1.
<RouterA> system-view
[RouterA] ip route-static 10.1.1.2 24 10.2.1.1 track 1
3. On Router A, create an NQA test group:
# Create an NQA test group with the administrator name being admin and operation tag being
test.
[RouterA] nqa entry admin test
# Configure the test type of the NQA test group as ICMP echo.
[RouterA-nqa-admin-test] type icmp-echo
# Configure ICMP echo requests to use 10.2.1.1 as the destination IP address.
[RouterA-nqa-admin-test-icmp-echo] destination ip 10.2.1.1
# Configure the device to perform tests at an interval of 100 milliseconds.
[RouterA-nqa-admin-test-icmp-echo] frequency 100
# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration
with other modules is triggered.
[RouterA-nqa-admin-test-icmp-echo] reaction 1 checked-element probe-fail
threshold-type consecutive 5 action-type trigger-only
[RouterA-nqa-admin-test-icmp-echo] quit
# Configure the test start time and test duration for the test group.
[RouterA] nqa schedule admin test start-time now lifetime forever
4. On Router A, create the track entry:
# Create track entry 1 to associate it with Reaction entry 1 of the NQA test group (admin-test).
[RouterA] track 1 nqa entry admin test reaction 1

Verifying the configuration


# On Router A, display information about all the track entries.
[RouterA] display track all
Track ID: 1
Status: Positive
Notification delay: Positive 0, Negative 0 (in seconds)
Reference object:
NQA entry: admin test
Reaction: 1

# Display brief information about active routes in the routing table on Router A.
[RouterA] display ip routing-table
Routing Tables: Public
Destinations : 5 Routes : 5

Destination/Mask Proto Pre Cost NextHop Interface

10.1.1.0/24 Static 60 0 10.2.1.1 GE3/1/1


10.2.1.0/24 Direct 0 0 10.2.1.2 GE3/1/1
10.2.1.2/32 Direct 0 0 127.0.0.1 InLoop0
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0

The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track entry
is positive. The static route configuration works.
# Remove the IP address of GigabitEthernet 3/1/1 on Router B.
<RouterB> system-view
[RouterB] interface GigabitEthernet 3/1/1
[RouterB-GigabitEthernet3/1/1] undo ip address

# On Router A, display information about all the track entries.


[RouterA] display track all
Track ID: 1
Status: Negative
Notification delay: Positive 0, Negative 0 (in seconds)
Reference object:
NQA entry: admin test
Reaction: 1

# Display brief information about active routes in the routing table on Router A.
[RouterA] display ip routing-table
Routing Tables: Public
Destinations : 4 Routes : 4

Destination/Mask Proto Pre Cost NextHop Interface

10.2.1.0/24 Direct 0 0 10.2.1.2 GE3/1/1


10.2.1.2/32 Direct 0 0 127.0.0.1 InLoop0
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0

The output shows that the next hop 10.2.1.1 of the static route is not reachable, and the status of the track
entry is negative. The static route does not work.

Configuring NTP

NTP overview
Defined in RFC 1305, the Network Time Protocol (NTP) synchronizes timekeeping among distributed
time servers and clients. NTP runs over the User Datagram Protocol (UDP), using UDP port 123.
The purpose of using NTP is to keep consistent timekeeping among all clock-dependent devices within
the network so that the devices can provide diverse applications based on the consistent time.
For a local system running NTP, its time can be synchronized by other reference sources and can be used
as a reference source to synchronize other clocks.

NTP applications
An administrator can hardly keep time synchronized among all the devices within a network by
changing the system clock on each station manually, because the workload is huge and the clock
precision cannot be guaranteed. NTP, however, allows quick clock synchronization within the entire
network and ensures a high clock precision.
NTP is used when all devices within the network must be consistent in timekeeping, for example:
• In analysis of the log information and debugging information collected from different devices in
network management, time must be used as reference basis.
• All devices must use the same reference clock in a charging system.
• To implement certain functions, such as scheduled restart of all devices within the network, all
devices must be consistent in timekeeping.
• When multiple systems process a complex event in cooperation, these systems must use the same
reference clock to ensure the correct execution sequence.
• For incremental backup between a backup server and clients, timekeeping must be synchronized
between the backup server and all the clients.
Advantages of NTP:
• NTP uses a stratum to describe the clock precision, and is able to synchronize time among all
devices within the network.
• NTP supports access control and MD5 authentication.
• NTP can send protocol messages in unicast, multicast, or broadcast mode.

How NTP works


Figure 19 shows the basic work flow of NTP. Device A and Device B are interconnected over a network.
They have their own independent system clocks, which need to be automatically synchronized through
NTP. For ease of understanding, assume that:
• Prior to system clock synchronization between Device A and Device B, the clock of Device A is set
to 10:00:00 am while that of Device B is set to 11:00:00 am.
• Device B is used as the NTP time server; that is, Device A synchronizes its clock to that of Device B.
• It takes 1 second for an NTP message to travel from one device to the other.

Figure 19 Basic work flow of NTP

The process of system clock synchronization is as follows:


• Device A sends Device B an NTP message, which is timestamped when it leaves Device A. The time
stamp is 10:00:00 am (T1).
• When this NTP message arrives at Device B, it is timestamped by Device B. The timestamp is
11:00:01 am (T2).
• When the NTP message leaves Device B, Device B timestamps it. The timestamp is 11:00:02 am
(T3).
• When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).
Up to now, Device A has sufficient information to calculate the following two important parameters:
• The roundtrip delay of the NTP message: Delay = (T4 - T1) - (T3 - T2) = 2 seconds.
• Time difference between Device A and Device B: Offset = ((T2 - T1) + (T3 - T4))/2 = 1 hour.
Based on these parameters, Device A can synchronize its own clock to the clock of Device B.
This is only a rough description of the work mechanism of NTP. For details, refer to RFC 1305.

NTP message format


NTP uses two types of messages: clock synchronization messages and NTP control messages. NTP
control messages are used in environments where network management is needed. Because they are
not essential for clock synchronization, they are not discussed in this document.

NOTE:
All NTP messages mentioned in this document refer to NTP clock synchronization messages.

A clock synchronization message is encapsulated in a UDP message, in the format shown in Figure 20.
Figure 20 Clock synchronization message format
0 1 4 7 15 23 31
LI VN Mode Stratum Poll Precision

Root delay (32 bits)

Root dispersion (32 bits)

Reference identifier (32 bits)

Reference timestamp (64 bits)

Originate timestamp (64 bits)

Receive timestamp (64 bits)

Transmit timestamp (64 bits)

Authenticator (optional 96 bits)

Main fields are described as follows:


• LI—A 2-bit leap indicator. When set to 11, it warns of an alarm condition (clock unsynchronized);
when set to any other value, it is not to be processed by NTP.
• VN—A 3-bit version number, indicating the version of NTP. The latest version is version 3.
• Mode—A 3-bit code indicating the work mode of NTP. This field can be set to these values: 0 –
reserved; 1 – symmetric active; 2 – symmetric passive; 3 – client; 4 – server; 5 – broadcast or
multicast; 6 – NTP control message; 7 – reserved for private use.
• Stratum—An 8-bit integer indicating the stratum level of the local clock, with the value ranging from
1 to 16. The clock precision decreases from stratum 1 through stratum 16. A stratum 1 clock has the
highest precision, and a stratum 16 clock is not synchronized and cannot be used as a reference
clock.
• Poll—An 8-bit signed integer indicating the poll interval, namely the maximum interval between
successive messages.
• Precision—An 8-bit signed integer indicating the precision of the local clock.
• Root Delay—Roundtrip delay to the primary reference source.
• Root Dispersion—The maximum error of the local clock relative to the primary reference source.
• Reference Identifier—Identifier of the particular reference source.
• Reference Timestamp—The local time at which the local clock was last set or corrected.
• Originate Timestamp—The local time at which the request departed the client for the service host.
• Receive Timestamp—The local time at which the request arrived at the service host.
• Transmit Timestamp—The local time at which the reply departed the service host for the client.
• Authenticator—Authentication information.

Operation modes of NTP
Devices running NTP can implement clock synchronization in one of the following modes:
• Client/server mode
• Symmetric peers mode
• Broadcast mode
• Multicast mode
You can select an NTP operation mode as needed. If the IP address of the NTP server or peer is
unknown and many devices in the network need to be synchronized, adopt the broadcast or multicast
mode. In the client/server or symmetric peers mode, a device is synchronized by a specified server or
peer, so clock reliability is enhanced.

Client/server mode
Figure 21 Client/server mode

When working in the client/server mode, a client sends a clock synchronization message to servers, with
the Mode field in the message set to 3 (client mode). Upon receiving the message, the servers
automatically work in the server mode and send a reply, with the Mode field in the messages set to 4
(server mode). Upon receiving the replies from the servers, the client performs clock filtering and selection,
and synchronizes its local clock to that of the optimal reference source.
In this mode, a client can be synchronized to a server, but not vice versa.

Symmetric peers mode


Figure 22 Symmetric peers mode

A device working in the symmetric active mode periodically sends clock synchronization messages, with
the Mode field in the message set to 1 (symmetric active); the device that receives this message
automatically enters the symmetric passive mode and sends a reply, with the Mode field in the message
set to 2 (symmetric passive). By exchanging messages, the symmetric peers mode is established between
the two devices. Then, the two devices can synchronize, or be synchronized by, each other. If the clocks
of both devices have been already synchronized, the device whose local clock has a lower stratum level
will synchronize the clock of the other device.

Broadcast mode
Figure 23 Broadcast mode

In the broadcast mode, a server periodically sends clock synchronization messages to the broadcast
address 255.255.255.255, with the Mode field in the messages set to 5 (broadcast mode). Clients listen
to the broadcast messages from servers. After a client receives the first broadcast message, the client and
the server start to exchange messages, with the Mode field set to 3 (client mode) and 4 (server mode) to
calculate the network delay between client and the server. Then, the client enters the broadcast client
mode and continues listening to broadcast messages, and synchronizes its local clock based on the
received broadcast messages.

Multicast mode
Figure 24 Multicast mode

In the multicast mode, a server periodically sends clock synchronization messages to the user-configured
multicast address, or, if no multicast address is configured, to the default NTP multicast address 224.0.1.1,
with the Mode field in the messages set to 5 (multicast mode). Clients listen to the multicast messages
from servers. After a client receives the first multicast message, the client and the server start to exchange
messages, with the Mode field set to 3 (client mode) and 4 (server mode) to calculate the network delay
between client and the server. Then, the client enters the multicast client mode and continues listening to
multicast messages, and synchronizes its local clock based on the received multicast messages.

NOTE:
In symmetric peers mode, broadcast mode and multicast mode, the client (or the symmetric active peer)
and the server (the symmetric passive peer) can work in the specified NTP working mode only after they
exchange NTP messages with the Mode field being 3 (client mode) and the Mode field being 4 (server
mode). During this message exchange process, NTP clock synchronization can be implemented.

NTP-supported MPLS L3VPN


When operating in client/server mode or symmetric mode, NTP supports MPLS L3VPN, and thus realizes
clock synchronization within an MPLS VPN network. In other words, network devices (CEs and PEs) at
different physical locations can get their clocks synchronized through NTP, as long as they are in the same
VPN. The specific functions are as follows:
• The NTP client on a customer edge device (CE) can be synchronized to the NTP server on another
CE.
• The NTP client on a CE can be synchronized to the NTP server on a provider edge device (PE).
• The NTP client on a PE can be synchronized to the NTP server on a CE through a designated VPN.
• The NTP client on a PE can be synchronized to the NTP server on another PE through a designated
VPN.
• The NTP server on a PE can synchronize the NTP clients on multiple CEs in different VPNs.

NOTE:
• A CE is a device that has an interface directly connecting to the service provider (SP). A CE is not “aware
of” the presence of the VPN.
• A PE is a device that directly connects to CEs. In an MPLS network, all events related to VPN
processing occur on the PE.

NTP configuration task list


Complete these tasks to configure NTP:

Task Remarks
Configuring the operation modes of NTP Required

Configuring the local clock as a reference source Optional

Configuring optional parameters of NTP Optional

Configuring access-control rights Optional

Configuring NTP authentication Optional

Configuring the operation modes of NTP


Routers can implement clock synchronization in one of the following modes:
• Client/server mode

• Symmetric mode
• Broadcast mode
• Multicast mode
For the client/server mode or symmetric mode, you need to configure only clients or symmetric-active
peers; for the broadcast or multicast mode, you need to configure both servers and clients.

NOTE:
A single router can have a maximum of 128 associations at the same time, including static associations
and dynamic associations. A static association refers to an association that a user has manually created
by using an NTP command, while a dynamic association is a temporary association created by the system
during operation. A dynamic association will be removed if the system fails to receive messages from it
for a long time. In the client/server mode, for example, when you carry out a command to
synchronize the time to a server, the system will create a static association, and the server will just respond
passively upon the receipt of a message, rather than creating an association (static or dynamic). In the
symmetric mode, static associations will be created at the symmetric-active peer side, and dynamic
associations will be created at the symmetric-passive peer side; in the broadcast or multicast mode, static
associations will be created at the server side, and dynamic associations will be created at the client side.

Configuring NTP client/server mode


For routers working in the client/server mode, make the following configurations on the clients.
To configure an NTP client:

Step Command Remarks


1. Enter system view. system-view N/A

ntp-service unicast-server
[ vpn-instance vpn-instance-name ]
{ ip-address | server-name }
2. Specify an NTP server for the No NTP server is specified by
[ authentication-keyid keyid |
router. default.
priority | source-interface
interface-type interface-number |
version number ] *

NOTE:
• In the ntp-service unicast-server command, ip-address must be a unicast address, rather than a
broadcast address, a multicast address or the IP address of the local clock.
• When the source interface for NTP messages is specified by the source-interface argument, the source
IP address of the NTP messages is configured as the primary IP address of the specified interface.
• A router can act as a server to synchronize the clock of other routers only after its clock has been
synchronized. If the clock of a server has a stratum level higher than or equal to that of a client’s clock,
the client will not synchronize its clock to the server’s.
• You can configure multiple servers by repeating the ntp-service unicast-server command. The clients
will choose the optimal reference source.
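
The following is a minimal client-side sketch, assuming an NTP server reachable at 1.0.1.11 (a hypothetical address) and NTP version 3:
# Specify the NTP server for the router.
<Sysname> system-view
[Sysname] ntp-service unicast-server 1.0.1.11 version 3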

Configuring the NTP symmetric mode
For routers working in the symmetric mode, you need to specify a symmetric-passive peer on a
symmetric-active peer.
To configure a symmetric-active router:

Step Command Remarks
1. Enter system view. system-view N/A
2. Specify a symmetric-passive peer for the router. ntp-service unicast-peer [ vpn-instance vpn-instance-name ] { ip-address | peer-name } [ authentication-keyid keyid | priority | source-interface interface-type interface-number | version number ] * No symmetric-passive peer is specified by default.

NOTE:
• In symmetric peers mode, you should use the ntp-service refclock-master command or any NTP
configuration command in Configuring the operation modes of NTP to enable NTP; otherwise, a
symmetric-passive peer will not process NTP messages from a symmetric-active peer.
• In the ntp-service unicast-peer command, ip-address must be a unicast address, rather than a
broadcast address, a multicast address or the IP address of the local clock.
• When the source interface for NTP messages is specified by the source-interface argument, the source
IP address of the NTP message is configured as the primary IP address of the specified interface.
• Typically, at least one of the symmetric-active and symmetric-passive peers has been synchronized;
otherwise the clock synchronization will not proceed.
• You can configure multiple symmetric-passive peers by repeating the ntp-service unicast-peer
command.
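
A minimal sketch for the symmetric-active side, assuming a hypothetical symmetric-passive peer at 1.0.1.12:
# Specify the symmetric-passive peer for the router.
<Sysname> system-view
[Sysname] ntp-service unicast-peer 1.0.1.12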

Configuring NTP broadcast mode


The broadcast server periodically sends NTP broadcast messages to the broadcast address
255.255.255.255. After receiving the messages, the router working in NTP broadcast mode sends a
reply and synchronizes its local clock.
For routers working in the broadcast mode, you need to configure both the server and clients. Because
an interface needs to be specified on the broadcast server for sending NTP broadcast messages and an
interface also needs to be specified on each broadcast client for receiving broadcast messages, the NTP
broadcast mode can be configured only in the specific interface view.

Configuring a broadcast client

Step Command Remarks
1. Enter system view. system-view N/A
2. Enter interface view. interface interface-type interface-number Enter the interface used to receive NTP broadcast messages.
3. Configure the router to work in the NTP broadcast client mode. ntp-service broadcast-client N/A

Configuring the broadcast server

Step Command Remarks
1. Enter system view. system-view N/A
2. Enter interface view. interface interface-type interface-number Enter the interface used to send NTP broadcast messages.
3. Configure the router to work in the NTP broadcast server mode. ntp-service broadcast-server [ authentication-keyid keyid | version number ] * N/A

NOTE:
A broadcast server can synchronize broadcast clients only after its clock has been synchronized.
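
A minimal sketch, assuming a hypothetical interface GigabitEthernet 3/1/1 on each side:
# On the broadcast server, send NTP broadcast messages out of GigabitEthernet 3/1/1.
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] ntp-service broadcast-server
# On each broadcast client, receive NTP broadcast messages on GigabitEthernet 3/1/1.
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] ntp-service broadcast-client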

Configuring NTP multicast mode


The multicast server periodically sends NTP multicast messages to multicast clients, which send replies
after receiving the messages and synchronize their local clocks.
For routers working in multicast mode, configure both the server and clients. The NTP multicast mode
must be configured in the specific interface view.

Configuring a multicast client

Step Command Remarks
1. Enter system view. system-view N/A
2. Enter interface view. interface interface-type interface-number Enter the interface used to receive NTP multicast messages.
3. Configure the router to work in the NTP multicast client mode. ntp-service multicast-client [ ip-address ] N/A

Configuring the multicast server

Step Command Remarks
1. Enter system view. system-view N/A
2. Enter interface view. interface interface-type interface-number Enter the interface used to send NTP multicast messages.
3. Configure the router to work in the NTP multicast server mode. ntp-service multicast-server [ ip-address ] [ authentication-keyid keyid | ttl ttl-number | version number ] * N/A

NOTE:
• A multicast server can synchronize multicast clients only after its clock has been synchronized.
• You can configure up to 1024 multicast clients, among which 128 can take effect at the same time.
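
A minimal sketch, assuming a hypothetical interface GigabitEthernet 3/1/1 and the default NTP multicast address 224.0.1.1:
# On the multicast server, send NTP multicast messages out of GigabitEthernet 3/1/1.
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] ntp-service multicast-server 224.0.1.1
# On each multicast client, receive NTP multicast messages on GigabitEthernet 3/1/1.
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] ntp-service multicast-client 224.0.1.1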

Configuring the local clock as a reference source


A network router can get its clock synchronized in one of the following two ways:
• Synchronized to the local clock, which acts as the reference source.
• Synchronized to another router on the network in any of the four NTP operation modes previously
described.
If you configure two synchronization modes, the router will choose the optimal clock as the reference
source.
To configure the local clock as a reference source:

Step Command
1. Enter system view. system-view
2. Configure the local clock as a reference source. ntp-service refclock-master [ ip-address ] [ stratum ]

NOTE:
• In this command, ip-address must be 127.127.1.u, where u ranges from 0 to 3, representing the NTP
process ID.
• Typically, the stratum level of the NTP server which is synchronized from an authoritative clock (such as
an atomic clock) is set to 1. This NTP server operates as the primary reference source on the network;
and other routers synchronize themselves to it. The synchronization distances between the primary
reference source and other routers on the network, namely, the number of NTP servers on the NTP
synchronization paths, determine the clock stratum levels of the routers.
• After you have configured the local clock as a reference clock, the local router can act as a reference
clock to synchronize other routers in the network. Therefore, perform this configuration with caution to
avoid clock errors of the routers in the network.
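
A minimal sketch, assuming NTP process ID 0 and a hypothetical stratum level of 2:
# Configure the local clock as a reference source.
<Sysname> system-view
[Sysname] ntp-service refclock-master 127.127.1.0 2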

Configuring optional parameters of NTP


Specifying the source interface for NTP messages
If you specify the source interface for NTP messages, the router sets the source IP address of the NTP
messages as the primary IP address of the specified interface when sending the NTP messages.
When the router responds to a received NTP request, the source IP address of the NTP response is
always the IP address of the interface that received the NTP request.
To specify the source interface for NTP messages:

Step Command Remarks
1. Enter system view. system-view N/A
2. Specify the source interface for NTP messages. ntp-service source-interface interface-type interface-number By default, no source interface is specified for NTP messages, and the system uses the IP address of the interface determined by the matching route as the source IP address of NTP messages.

CAUTION:
• If you have specified the source interface for NTP messages in the ntp-service unicast-server or
ntp-service unicast-peer command, the interface specified in the ntp-service unicast-server or
ntp-service unicast-peer command serves as the source interface of NTP messages.
• If you have configured the ntp-service broadcast-server or ntp-service multicast-server command, the
source interface of the broadcast or multicast NTP messages is the interface configured with the
respective command.
• If the specified source interface for NTP messages is down, the source IP address for an NTP message
that is sent out is the primary IP address of the outgoing interface of the NTP message.
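
A minimal sketch, assuming a hypothetical interface LoopBack 0 that has already been created and assigned an IP address:
# Use the IP address of LoopBack 0 as the source IP address of NTP messages.
<Sysname> system-view
[Sysname] ntp-service source-interface loopback 0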

Disabling an interface from receiving NTP messages


When NTP is enabled, NTP messages can be received from all the interfaces by default, and you can
disable an interface from receiving NTP messages through the following configuration.

Step Command Remarks
1. Enter system view. system-view N/A
2. Enter interface view. interface interface-type interface-number N/A
3. Disable the interface from receiving NTP messages. ntp-service in-interface disable An interface is enabled to receive NTP messages by default.
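
A minimal sketch, assuming NTP messages should no longer be received on the hypothetical interface GigabitEthernet 3/1/1:
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] ntp-service in-interface disable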

Configuring the maximum number of dynamic sessions allowed

Step Command Remarks
1. Enter system view. system-view N/A
2. Configure the maximum number of dynamic sessions allowed to be established locally. ntp-service max-dynamic-sessions number 100 by default.
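
A minimal sketch that lowers the limit to a hypothetical value of 50:
<Sysname> system-view
[Sysname] ntp-service max-dynamic-sessions 50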

Configuring access-control rights


With the following command, you can configure the NTP service access-control right to the local router.
There are four access-control rights, as follows:

• query—Control query permitted. This level of right permits the peer router to perform control query
to the NTP service on the local router but does not permit the peer router to synchronize its clock to
the local router. The so-called “control query” refers to query of some states of the NTP service,
including alarm information, authentication status, clock source information, and so on.
• synchronization—Server access only. This level of right permits the peer router to synchronize its
clock to the local router but does not permit the peer router to perform control query.
• server—Server access and query permitted. This level of right permits the peer router to perform
synchronization and control query to the local router but does not permit the local router to
synchronize its clock to the peer router.
• peer—Full access. This level of right permits the peer router to perform synchronization and control
query to the local router and also permits the local router to synchronize its clock to the peer router.
From the highest NTP service access-control right to the lowest, the rights are peer, server,
synchronization, and query. When a router receives an NTP request, it performs an access-control right
match and uses the first matched right. If no matched right is found, the router discards the NTP request.

Configuration prerequisites
Prior to configuring the NTP service access-control right to the local router, you need to create and
configure an ACL associated with the access-control right. For more information about ACLs, see ACL
and QoS Configuration Guide.

Configuration procedure
To configure the NTP service access-control right to the local router:

Step Command Remarks
1. Enter system view. system-view N/A
2. Configure the NTP service access-control right for a peer router to access the local router. ntp-service access { peer | query | server | synchronization } acl-number peer by default.

NOTE:
The access-control right mechanism provides only a minimum degree of security protection for the system
running NTP. A more secure method is identity authentication.
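
A minimal sketch, assuming a hypothetical basic ACL 2001 that matches a single trusted peer at 10.1.1.1:
# Create basic ACL 2001 and permit packets sourced from 10.1.1.1.
<Sysname> system-view
[Sysname] acl number 2001
[Sysname-acl-basic-2001] rule permit source 10.1.1.1 0
[Sysname-acl-basic-2001] quit
# Grant peer (full) access to routers matching ACL 2001.
[Sysname] ntp-service access peer 2001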

Configuring NTP authentication


The NTP authentication feature should be enabled for a system running NTP in a network where there is
a high security demand. This feature enhances the network security by means of client-server key
authentication, which prohibits a client from synchronizing with a router that has failed authentication.
NTP authentication configuration includes the following tasks:
• Enable NTP authentication
• Configure an authentication key
• Configure the key as a trusted key
• Associate the specified key with an NTP server or a symmetric peer

All the above tasks are required. If any task is omitted, NTP authentication cannot function.

Configuring NTP authentication in client/server mode


When configuring NTP authentication in client/server mode, you need to configure the required tasks on
both the client and server, and associate the key with the NTP server on the client.
• If NTP authentication is not enabled or no key is associated with the NTP server on the client, no
NTP authentication is performed when the client synchronizes its clock to the server. Regardless of
whether NTP authentication is enabled on the server, the clock synchronization between the server
and client can be performed.
• If NTP authentication is enabled and a key is associated with the NTP server on the client, but the
key is not a trusted key, the client does not synchronize its clock to the server, regardless of whether
NTP authentication is enabled on the server.

Configuring NTP authentication for a client


To configure NTP authentication for a client:

Step Command Remarks
1. Enter system view. system-view N/A
2. Enable NTP authentication. ntp-service authentication enable Disabled by default.
3. Configure an NTP authentication key. ntp-service authentication-keyid keyid authentication-mode md5 value No NTP authentication key by default.
4. Configure the key as a trusted key. ntp-service reliable authentication-keyid keyid No authentication key is configured to be trusted by default.
5. Associate the specified key with an NTP server. ntp-service unicast-server { ip-address | server-name } authentication-keyid keyid You can associate a non-existing key with an NTP server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the NTP server.

NOTE:
After you enable the NTP authentication feature for the client, make sure that you configure for the client
an authentication key that is the same as on the server and specify that the authentication key is trusted.
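
A minimal client-side sketch, assuming a hypothetical key ID 42, key string aNiceKey, and NTP server address 1.0.1.11:
<Sysname> system-view
[Sysname] ntp-service authentication enable
[Sysname] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
[Sysname] ntp-service reliable authentication-keyid 42
[Sysname] ntp-service unicast-server 1.0.1.11 authentication-keyid 42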

Configuring NTP authentication for a server


To configure NTP authentication for a server:

Step Command Remarks
1. Enter system view. system-view N/A
2. Enable NTP authentication. ntp-service authentication enable Disabled by default.
3. Configure an NTP authentication key. ntp-service authentication-keyid keyid authentication-mode md5 value No NTP authentication key by default.
4. Configure the key as a trusted key. ntp-service reliable authentication-keyid keyid No authentication key is configured to be trusted by default.

NOTE:
The same authentication key must be configured on both the server and client sides.
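
A minimal server-side sketch, assuming the same hypothetical key ID 42 and key string aNiceKey as configured on the client:
<Sysname> system-view
[Sysname] ntp-service authentication enable
[Sysname] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
[Sysname] ntp-service reliable authentication-keyid 42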

Configuring NTP authentication in symmetric peers mode


When configuring NTP authentication in symmetric peers mode, configure the required tasks on both the
active and passive peers, and on the active peer associate the key with the passive peer.
1. When the active peer has a greater stratum level than the passive peer:
{ If NTP authentication is not enabled or no key is associated with the passive peer on the active
peer, the active peer synchronizes its clock to the passive peer as long as NTP authentication
is disabled on the passive peer.
{ If NTP authentication is enabled and a key is associated with the passive peer on the active
peer, but the key is not a trusted key, the active peer does not synchronize its clock to the
passive peer, regardless of whether NTP authentication is enabled on the passive peer.
2. When the active peer has a smaller stratum level than the passive peer:
If NTP authentication is not enabled, no key is associated with the passive peer on the active peer,
or the key is not a trusted key, the clock of the passive peer can be synchronized to the active peer
as long as NTP authentication is disabled on the passive peer.

Configuring NTP authentication for an active peer


To configure NTP authentication for an active peer:

Step Command Remarks


1. Enter system view. system-view N/A
2. Enable NTP authentication. ntp-service authentication enable Disabled by default

ntp-service authentication-keyid
3. Configure an NTP No NTP authentication key is
keyid authentication-mode md5
authentication key. configured by default.
value
4. Configure the key as a trusted ntp-service reliable No authentication key is
key. authentication-keyid keyid configured to be trusted by default

You can associate a non-existing


key with a passive peer. To enable
ntp-service unicast-peer
5. Associate the specified key NTP authentication, you must
{ ip-address | peer-name }
with the passive peer. configure the key and specify it as
authentication-keyid keyid
a trusted key after associating the
key with the passive peer.

NOTE:
After you enable NTP authentication on the active peer, make sure the active peer is configured with the same authentication key as the passive peer, and that the key is specified as a trusted key.
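
For example, the following minimal sketch enables authentication on the active peer (key ID 10, key string peerkey, and the passive peer address 3.0.1.32 are all illustrative values; the same key must also be configured and trusted on the passive peer):
# Enable NTP authentication, configure the key, and specify it as a trusted key.
<DeviceC> system-view
[DeviceC] ntp-service authentication enable
[DeviceC] ntp-service authentication-keyid 10 authentication-mode md5 peerkey
[DeviceC] ntp-service reliable authentication-keyid 10
# Associate the key with the passive peer.
[DeviceC] ntp-service unicast-peer 3.0.1.32 authentication-keyid 10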

Configuring NTP authentication for a passive peer


To configure NTP authentication for a passive peer:

1. Enter system view.
   Command: system-view
2. Enable NTP authentication.
   Command: ntp-service authentication enable
   Remarks: Disabled by default.
3. Configure an NTP authentication key.
   Command: ntp-service authentication-keyid keyid authentication-mode md5 value
   Remarks: No NTP authentication key is configured by default.
4. Configure the key as a trusted key.
   Command: ntp-service reliable authentication-keyid keyid
   Remarks: No authentication key is configured to be trusted by default.

NOTE:
The same authentication key must be configured on both the active and passive peers.

Configuring NTP authentication in broadcast mode


When configuring NTP authentication in broadcast mode, configure the required tasks on both the
broadcast client and broadcast server, and associate the key with the broadcast server on the server.
• If NTP authentication is not enabled, no key is associated with the broadcast server, or the key is not a trusted key, a broadcast client can synchronize to the broadcast server as long as NTP authentication is disabled on the client.
• If NTP authentication is enabled, a key is associated with the broadcast server, and the key is a trusted key, a broadcast client can synchronize to the broadcast server as long as NTP authentication is enabled on the client.

Configuring NTP authentication for a broadcast client


To configure NTP authentication for a broadcast client:

1. Enter system view.
   Command: system-view
2. Enable NTP authentication.
   Command: ntp-service authentication enable
   Remarks: Disabled by default.
3. Configure an NTP authentication key.
   Command: ntp-service authentication-keyid keyid authentication-mode md5 value
   Remarks: No NTP authentication key is configured by default.
4. Configure the key as a trusted key.
   Command: ntp-service reliable authentication-keyid keyid
   Remarks: No authentication key is configured to be trusted by default.

NOTE:
After you enable NTP authentication on the broadcast client, make sure the client is configured with the same authentication key as the broadcast server, and that the key is specified as a trusted key.

Configuring NTP authentication for a broadcast server


To configure NTP authentication for a broadcast server:

1. Enter system view.
   Command: system-view
2. Enable NTP authentication.
   Command: ntp-service authentication enable
   Remarks: Disabled by default.
3. Configure an NTP authentication key.
   Command: ntp-service authentication-keyid keyid authentication-mode md5 value
   Remarks: No NTP authentication key is configured by default.
4. Configure the key as a trusted key.
   Command: ntp-service reliable authentication-keyid keyid
   Remarks: No authentication key is configured to be trusted by default.
5. Enter interface view.
   Command: interface interface-type interface-number
6. Associate the specified key with the broadcast server.
   Command: ntp-service broadcast-server authentication-keyid keyid
   Remarks: You can associate a non-existing key with the broadcast server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the broadcast server.

NOTE:
The same authentication key must be configured on both the broadcast server and broadcast client sides.

Configuring NTP authentication in multicast mode


When configuring NTP authentication in multicast mode, configure the required tasks on both the
multicast client and multicast server, and associate the key with the multicast server on the server.
• If NTP authentication is not enabled, no key is associated with the multicast server on the server, or the key is not a trusted key, a multicast client can synchronize to the multicast server as long as NTP authentication is disabled on the client.
• If NTP authentication is enabled, a key is associated with the multicast server on the server, and the key is a trusted key, a multicast client can synchronize to the multicast server as long as NTP authentication is enabled on the client.

Configuring NTP authentication for a multicast client


To configure NTP authentication for a multicast client:

1. Enter system view.
   Command: system-view
2. Enable NTP authentication.
   Command: ntp-service authentication enable
   Remarks: Disabled by default.
3. Configure an NTP authentication key.
   Command: ntp-service authentication-keyid keyid authentication-mode md5 value
   Remarks: No NTP authentication key is configured by default.
4. Configure the key as a trusted key.
   Command: ntp-service reliable authentication-keyid keyid
   Remarks: No authentication key is configured to be trusted by default.

NOTE:
After you enable NTP authentication on the multicast client, make sure the client is configured with the same authentication key as the multicast server, and that the key is specified as a trusted key.

Configuring NTP authentication for a multicast server


To configure NTP authentication for a multicast server:

1. Enter system view.
   Command: system-view
2. Enable NTP authentication.
   Command: ntp-service authentication enable
   Remarks: Disabled by default.
3. Configure an NTP authentication key.
   Command: ntp-service authentication-keyid keyid authentication-mode md5 value
   Remarks: No NTP authentication key is configured by default.
4. Configure the key as a trusted key.
   Command: ntp-service reliable authentication-keyid keyid
   Remarks: No authentication key is configured to be trusted by default.
5. Enter interface view.
   Command: interface interface-type interface-number
6. Associate the specified key with the multicast server.
   Command: ntp-service multicast-server authentication-keyid keyid
   Remarks: You can associate a non-existing key with the multicast server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the multicast server.

NOTE:
The same authentication key must be configured on both the multicast server and multicast client sides.
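
For example, the following minimal sketch configures the server side (key ID 25, key string mcastkey, and interface GigabitEthernet 3/1/10 are all illustrative values; the same key must also be configured and trusted on each multicast client):
# Enable NTP authentication, configure the key, and specify it as a trusted key.
<RouterC> system-view
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication-keyid 25 authentication-mode md5 mcastkey
[RouterC] ntp-service reliable authentication-keyid 25
# Associate the key with the multicast server on the sending interface.
[RouterC] interface GigabitEthernet 3/1/10
[RouterC-GigabitEthernet3/1/10] ntp-service multicast-server authentication-keyid 25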

Displaying and maintaining NTP


1. View NTP service status.
   Command: display ntp-service status [ | { begin | exclude | include } regular-expression ]
2. View NTP session information.
   Command: display ntp-service sessions [ verbose ] [ | { begin | exclude | include } regular-expression ]
3. View brief information about the NTP servers from the local router back to the primary reference source.
   Command: display ntp-service trace [ | { begin | exclude | include } regular-expression ]
All these commands are available in any view.

NTP configuration examples


NOTE:
Unless otherwise specified, the examples given in this section apply to all switches and routers that support
NTP.

Configuring NTP client/server mode


Network requirements
Perform the following configurations to synchronize the time between Device B and Device A:
• As shown in Figure 25, the local clock of Device A is to be used as a reference source, with the
stratum level of 2.
• Device B works in the client/server mode and Device A is to be used as the NTP server of Device
B.
Figure 25 Network diagram

Configuration procedure
1. Set the IP address for each interface as shown in Figure 25. (Details not shown)
2. Configure Device A:
# Specify the local clock as the reference source, with the stratum level of 2.
<DeviceA> system-view
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# View the NTP status of Device B before clock synchronization.
<DeviceB> display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 0.00 ms
Root dispersion: 0.00 ms

Peer dispersion: 0.00 ms
Reference time: 00:00:00.000 UTC Jan 1 1900 (00000000.00000000)
# Specify Device A as the NTP server of Device B so that Device B is synchronized to Device A.
<DeviceB> system-view
[DeviceB] ntp-service unicast-server 1.0.1.11
# View the NTP status of Device B after clock synchronization.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 1.0.1.11
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 1.05 ms
Peer dispersion: 7.81 ms
Reference time: 14:53:27.371 UTC Sep 19 2005 (C6D94F67.5EF9DB22)
As shown above, Device B has been synchronized to Device A, and the clock stratum level of
Device B is 3, while that of Device A is 2.
# View the NTP session information of Device B, which shows that an association has been set up
between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[12345] 1.0.1.11 127.127.1.0 2 63 64 3 -75.5 31.0 16.5
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1

Configuring the NTP symmetric mode


Network requirements
Perform the following configurations to synchronize time among routers:
• As shown in Figure 26, the local clock of Device A is to be configured as a reference source, with
the stratum level of 2.
• Device B works in the client mode and Device A is to be used as the NTP server of Device B.
• Device C works in the symmetric-active mode, with Device B as its peer: Device C is the
symmetric-active peer and Device B is the symmetric-passive peer.

Figure 26 Network diagram

Configuration procedure
1. Set the IP address for each interface as shown in Figure 26. (Details not shown)
2. Configure Device A:
# Specify the local clock as the reference source, with the stratum level of 2.
<DeviceA> system-view
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Specify Device A as the NTP server of Device B.
<DeviceB> system-view
[DeviceB] ntp-service unicast-server 3.0.1.31
4. Configure Device C (after Device B is synchronized to Device A):
# Specify the local clock as the reference source, with the stratum level of 1.
<DeviceC> system-view
[DeviceC] ntp-service refclock-master 1
# Configure Device B as a symmetric peer after local synchronization.
[DeviceC] ntp-service unicast-peer 3.0.1.32
In the step above, Device B and Device C are configured as symmetric peers, with Device C in the
symmetric-active mode and Device B in the symmetric-passive mode. Because the stratum level of
Device C is 1 while that of Device B is 3, Device B is synchronized to Device C.
# View the NTP status of Device B after clock synchronization.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 3.0.1.33
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: -21.1982 ms
Root delay: 15.00 ms
Root dispersion: 775.15 ms
Peer dispersion: 34.29 ms
Reference time: 15:22:47.083 UTC Sep 19 2005 (C6D95647.153F7CED)

As shown above, Device B has been synchronized to Device C, and the clock stratum level of
Device B is 2, while that of Device C is 1.
# View the NTP session information of Device B, which shows that an association has been set up
between Device B and Device C.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[245] 3.0.1.31 127.127.1.0 2 15 64 24 10535.0 19.6 14.5
[1234] 3.0.1.33 LOCL 1 14 64 27 -77.0 16.0 14.8
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 2

Configuring NTP broadcast mode


Network requirements
As shown in Figure 27, Router C functions as the NTP server for multiple routers on a network segment
and synchronizes the time among multiple routers. More specifically:
• Router C’s local clock is to be used as a reference source, with the stratum level of 2.
• Router C works in the broadcast server mode and sends out broadcast messages from
GigabitEthernet 3/1/10.
• Router A and Router B work in the broadcast client mode and receive broadcast messages through
their respective GigabitEthernet 3/1/10.
Figure 27 Network diagram

Configuration procedure
1. Set the IP address for each interface as shown in Figure 27. (Details not shown)
2. Configure Router C:
# Specify the local clock as the reference source, with the stratum level of 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Configure Router C to work in the broadcast server mode and send broadcast messages through
GigabitEthernet 3/1/10.
[RouterC] interface GigabitEthernet 3/1/10

[RouterC-GigabitEthernet3/1/10] ntp-service broadcast-server
3. Configure Router A:
# Configure Router A to work in the broadcast client mode and receive broadcast messages on
GigabitEthernet 3/1/10.
<RouterA> system-view
[RouterA] interface GigabitEthernet 3/1/10
[RouterA-GigabitEthernet3/1/10] ntp-service broadcast-client
4. Configure Router B:
# Configure Router B to work in broadcast client mode and receive broadcast messages on
GigabitEthernet 3/1/10.
<RouterB> system-view
[RouterB] interface GigabitEthernet 3/1/10
[RouterB-GigabitEthernet3/1/10] ntp-service broadcast-client
Router A and Router B get synchronized upon receiving a broadcast message from Router C.
# Take Router A as an example. View the NTP status of Router A after clock synchronization.
[RouterA] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
As shown above, Router A has been synchronized to Router C and the clock stratum level of Router
A is 3, while that of Router C is 2.
# View the NTP session information of Router A, which shows that an association has been set up
between Router A and Router C.
[RouterA-GigabitEthernet3/1/10] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[1234] 3.0.1.31 127.127.1.0 2 254 64 62 -16.0 32.0 16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1

Configuring NTP multicast mode


Network requirements
As shown in Figure 28, Router C functions as the NTP server for multiple routers on different network
segments and synchronizes the time among multiple routers. More specifically:
• Router C’s local clock is to be used as a reference source, with the stratum level of 2.
• Router C works in the multicast server mode and sends out multicast messages from GigabitEthernet
3/1/10.

• Router D and Router A work in the multicast client mode and receive multicast messages through
their respective GigabitEthernet 3/1/10.
Figure 28 Network diagram
(Figure information: Router C GE3/1/10: 3.0.1.31/24. Router A GE3/1/10: 1.0.1.11/24. Router B GE3/1/1: 1.0.1.10/24, GE3/1/10: 3.0.1.30/24. Router D GE3/1/10: 3.0.1.32/24.)

Configuration procedure
1. Set the IP address for each interface as shown in Figure 28. (Details not shown)
2. Configure Router C:
# Specify the local clock as the reference source, with the stratum level of 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Configure Router C to work in the multicast server mode and send multicast messages through
GigabitEthernet 3/1/10.
[RouterC] interface GigabitEthernet 3/1/10
[RouterC-GigabitEthernet3/1/10] ntp-service multicast-server
3. Configure Router D:
# Configure Router D to work in the multicast client mode and receive multicast messages on
GigabitEthernet 3/1/10.
<RouterD> system-view
[RouterD] interface GigabitEthernet 3/1/10
[RouterD-GigabitEthernet3/1/10] ntp-service multicast-client
Because Router D and Router C are on the same subnet, Router D can receive the multicast
messages from Router C without being IGMP-enabled and can be synchronized to Router C.
# View the NTP status of Router D after clock synchronization.
[RouterD] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms

Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
As shown above, Router D has been synchronized to Router C and the clock stratum level of Router
D is 3, while that of Router C is 2.
# View the NTP session information of Router D, which shows that an association has been set up
between Router D and Router C.
[RouterD-GigabitEthernet3/1/10] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[1234] 3.0.1.31 127.127.1.0 2 254 64 62 -16.0 31.0 16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1
4. Configure Router B:
Because Router A and Router C are on different subnets, you must enable the multicast functions on
Router B before Router A can receive multicast messages from Router C.
# Enable the IP multicast function.
<RouterB> system-view
[RouterB] multicast routing-enable
[RouterB] interface GigabitEthernet 3/1/1
[RouterB-GigabitEthernet3/1/1] igmp enable
[RouterB-GigabitEthernet3/1/1] igmp static-group 224.0.1.1
[RouterB-GigabitEthernet3/1/1] quit
[RouterB] interface GigabitEthernet 3/1/2
[RouterB-GigabitEthernet3/1/2] pim dm
5. Configure Router A:
<RouterA> system-view
[RouterA] interface GigabitEthernet 3/1/1
# Configure Router A to work in multicast client mode and receive multicast messages on
GigabitEthernet 3/1/1.
[RouterA-GigabitEthernet3/1/1] ntp-service multicast-client
# View the NTP status of Router A after clock synchronization.
[RouterA-GigabitEthernet3/1/1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 40.00 ms
Root dispersion: 10.83 ms
Peer dispersion: 34.30 ms
Reference time: 16:02:49.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
As shown above, Router A has been synchronized to Router C and the clock stratum level of Router
A is 3, while that of Router C is 2.
# View the NTP session information of Router A, which shows that an association has been set up
between Router A and Router C.

[RouterA-GigabitEthernet3/1/1] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[1234] 3.0.1.31 127.127.1.0 2 255 64 26 -16.0 40.0 16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1

NOTE:
For more information about how to configure IGMP and PIM, see IP Multicast Configuration Guide.

Configuring NTP client/server mode with authentication


Network requirements
As shown in Figure 29, perform the following configurations to synchronize the time between Device B
and Device A and ensure network security. More specifically:
• The local clock of Device A is to be configured as a reference source, with the stratum level of 2.
• Device B works in the client mode, with Device A as its NTP server.
• NTP authentication is to be enabled for Device A and Device B at the same time.
Figure 29 Network diagram

Configuration procedure
1. Set the IP address for each interface as shown in Figure 29. (Details not shown)
2. Configure Device A
# Specify the local clock as the reference source, with the stratum level of 2.
<DeviceA> system-view
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
<DeviceB> system-view
# Enable NTP authentication on Device B.
[DeviceB] ntp-service authentication enable
# Set an authentication key.
[DeviceB] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
# Specify the key as a trusted key.
[DeviceB] ntp-service reliable authentication-keyid 42
# Specify Device A as the NTP server of Device B.
[DeviceB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42
Before Device B can synchronize its clock to that of Device A, you need to enable NTP
authentication for Device A.
Perform the following configuration on Device A:

# Enable NTP authentication.
[DeviceA] ntp-service authentication enable
# Set an authentication key.
[DeviceA] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 42
# View the NTP status of Device B after clock synchronization.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 1.0.1.11
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 1.05 ms
Peer dispersion: 7.81 ms
Reference time: 14:53:27.371 UTC Sep 19 2005 (C6D94F67.5EF9DB22)
As shown above, Device B has been synchronized to Device A, and the clock stratum level of
Device B is 3, while that of Device A is 2.
# View the NTP session information of Device B, which shows that an association has been set up
between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[12345] 1.0.1.11 127.127.1.0 2 63 64 3 -75.5 31.0 16.5
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1

Configuring NTP broadcast mode with authentication


Network requirements
As shown in Figure 30, Router C functions as the NTP server for multiple routers on the same network
segment and synchronizes the time among them. More specifically:
• Router C’s local clock is to be used as a reference source, with the stratum level of 3.
• Router C works in the broadcast server mode and sends out broadcast messages from
GigabitEthernet 3/1/10.
• Router D works in the broadcast client mode and receives broadcast messages through GigabitEthernet
3/1/10.
• NTP authentication is enabled on both Router C and Router D.

Figure 30 Network diagram
(Figure information: Router C GE3/1/10: 3.0.1.31/24. Router A GE3/1/10: 1.0.1.11/24. Router B GE3/1/1: 1.0.1.10/24, GE3/1/10: 3.0.1.30/24. Router D GE3/1/10: 3.0.1.32/24.)

Configuration procedure
1. Set the IP address for each interface as shown in Figure 30. (Details not shown)
2. Configure Router C:
# Specify the local clock as the reference source, with the stratum level of 3.
<RouterC> system-view
[RouterC] ntp-service refclock-master 3
# Configure NTP authentication
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterC] ntp-service reliable authentication-keyid 88
# Specify Router C as an NTP broadcast server, and specify an authentication key.
[RouterC] interface GigabitEthernet 3/1/10
[RouterC-GigabitEthernet3/1/10] ntp-service broadcast-server authentication-keyid
88
3. Configure Router D:
# Configure NTP authentication
<RouterD> system-view
[RouterD] ntp-service authentication enable
[RouterD] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterD] ntp-service reliable authentication-keyid 88
# Configure Router D to work in the NTP broadcast client mode
[RouterD] interface GigabitEthernet 3/1/10
[RouterD-GigabitEthernet3/1/10] ntp-service broadcast-client
Now, Router D can receive broadcast messages through GigabitEthernet 3/1/10, and Router C
can send broadcast messages through GigabitEthernet 3/1/10. Upon receiving a broadcast
message from Router C, Router D synchronizes its clock with that of Router C.
# View the NTP status of Router D after clock synchronization.
[RouterD-GigabitEthernet3/1/10] display ntp-service status
Clock status: synchronized
Clock stratum: 4
Reference clock ID: 3.0.1.31

Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
As shown above, Router D has been synchronized to Router C and the clock stratum level of Router
D is 4, while that of Router C is 3.
# View the NTP session information of Router D, which shows that an association has been set up
between Router D and Router C.
[RouterD-GigabitEthernet3/1/10] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[1234] 3.0.1.31 127.127.1.0 3 254 64 62 -16.0 32.0 16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1

Configuring MPLS VPN time synchronization in client/server mode
Network requirements
As shown in Figure 31, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. CE 1 and CE 3 are
routers in VPN 1. To synchronize the time between CE 1 and CE 3 in VPN 1, perform the following
configurations:
• CE 1’s local clock is to be used as a reference source, with the stratum level of 1.
• CE 3 is synchronized to CE 1 in the client/server mode.

NOTE:
At present, MPLS L3VPN time synchronization can be implemented only in the unicast mode (client/server
mode or symmetric peers mode), but not in the multicast or broadcast mode.

Figure 31 Network diagram

CE 1: GE4/1/1 10.1.1.1/24
CE 2: GE4/1/1 10.2.1.1/24
CE 3: GE4/1/1 10.3.1.1/24
CE 4: GE4/1/1 10.4.1.1/24
P: GE4/1/1 172.1.1.2/24, GE4/1/2 172.2.1.1/24
PE 1: GE4/1/1 10.1.1.2/24, GE4/1/2 172.1.1.1/24, GE4/1/3 10.2.1.2/24
PE 2: GE4/1/1 10.3.1.2/24, GE4/1/2 172.2.1.2/24, GE4/1/3 10.4.1.2/24

Configuration procedure

NOTE:
Prior to performing the following configuration, be sure you have completed MPLS VPN-related
configurations and ensure reachability between CE 1 and PE 1, between PE 1 and PE 2, and
between PE 2 and CE 3. For more information about MPLS VPN, see MPLS Configuration Guide.

1. Set the IP address for each interface as shown in Figure 31. (Details not shown)
2. Configure CE 1:
# Specify the local clock as the reference source, with the stratum level of 1.
<CE1> system-view
[CE1] ntp-service refclock-master 1
3. Configure CE 3:
# Specify CE 1 in VPN 1 as the NTP server of CE 3.
<CE3> system-view
[CE3] ntp-service unicast-server 10.1.1.1
# View the NTP session information and status information on CE 3 a certain period of time later.
The information should show that CE 3 has been synchronized to CE 1, with the clock stratum level
of 2.
[CE3] display ntp-service status
Clock status: synchronized
Clock stratum: 2

Reference clock ID: 10.1.1.1
Nominal frequency: 63.9100 Hz
Actual frequency: 63.9100 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 47.00 ms
Root dispersion: 0.18 ms
Peer dispersion: 34.29 ms
Reference time: 02:36:23.119 UTC Jan 1 2001(BDFA6BA7.1E76C8B4)
[CE3] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[12345]10.1.1.1 LOCL 1 7 64 15 0.0 47.0 7.8
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1
[CE3] display ntp-service trace
server 127.0.0.1,stratum 2, offset -0.013500, synch distance 0.03154
server 10.1.1.1,stratum 1, offset -0.506500, synch distance 0.03429
refid 127.127.1.0

Configuring MPLS VPN time synchronization in symmetric peers mode
Network requirements
As shown in Figure 31, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. To synchronize the
time between PE 1 and PE 2 in VPN 1, perform the following configurations:
• PE 1’s local clock is to be used as a reference source, with the stratum level of 1.
• PE 2 is to be synchronized to PE 1 in the symmetric peers mode, within VPN instance vpn1.

Configuration procedure
1. Set the IP address for each interface as shown in Figure 31. (Details not shown)
2. Configure PE 1:
# Specify the local clock as the reference source, with the stratum level of 1.
<PE1> system-view
[PE1] ntp-service refclock-master 1
3. Configure PE 2:
# Specify PE 1 in VPN 1 as the symmetric-passive peer of PE 2.
<PE2> system-view
[PE2] ntp-service unicast-peer vpn-instance vpn1 10.1.1.2
# View the NTP session information and status information on PE 2 a certain period of time later.
The information should show that PE 2 has been synchronized to PE 1, with the clock stratum level
of 2.
[PE2] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 10.1.1.2

Nominal frequency: 63.9100 Hz
Actual frequency: 63.9100 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 32.00 ms
Root dispersion: 0.60 ms
Peer dispersion: 7.81 ms
Reference time: 02:44:01.200 UTC Jan 1 2001(BDFA6D71.33333333)
[PE2] display ntp-service sessions
source reference stra reach poll now offset delay disper
**************************************************************************
[12345]10.1.1.2 LOCL 1 1 64 29 -12.0 32.0 15.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations : 1
[PE2] display ntp-service trace
server 127.0.0.1,stratum 2, offset -0.012000, synch distance 0.02448
server 10.1.1.2,stratum 1, offset 0.003500, synch distance 0.00781
refid 127.127.1.0

Configuring clock monitoring

Clock monitoring module overview


The clock monitoring module provides highly precise, highly reliable synchronous digital hierarchy (SDH)
line interface 38.88 MHz clock signals for the line processing units (LPUs). It implements functions such as
automatic input clock source selection, software phase-lock, and real-time monitoring of the clock status of
the interface cards. The clock monitoring module also supports hardware reset of the clock card.
The clock monitoring module supports 14 reference clock sources (referred to as reference sources
hereinafter), among which the first and the second are Bits clock sources and the others are line clock
sources. The first and the second reference sources respectively correspond to external clock-receiving
interfaces 1 and 2 on the active switching and routing processing unit (SRPU). Figure 32 shows the
relationship between reference sources and slots: on the SR8802, for example, the clock in slot 2
corresponds to the fifth reference source, and the clock in slot 3 corresponds to the sixth reference source.
Figure 32 Relationship between reference source and slot

Classification of clock sources


Clock sources fall into three categories, depending on their origin:
• Local clock source—38.88 MHz clock signals generated by a crystal oscillator inside the clock
monitoring module.
• Bits clock source—Clock signals generated by a specific Bits clock device. The signals are sent to
the clock monitoring module through a specific interface on the SRPU and then sent to all cards by
the clock monitoring module.
• Line clock source—Provided by the upstream device, with lower precision than the Bits clock source.
The signals are derived from the specified WAN interface and sent to the clock monitoring module,
which then sends the signals to all cards.

Reference source level


The reference source level is determined by the priority and the synchronization status marker (SSM) level
of the reference source.

Priority of the reference source


You can assign a high priority to a high-precision, high-reliability reference source so that it is preferred
in clock source selection.

SSM level of the reference source
SSM, also known as synchronization quality marker, indicates the synchronization timing signal level on
a synchronization timing transmission link.
The SSM levels of a reference source, from high to low, are:
• PRC—G.811 clock signal.
• TNC—G.812 transit node clock signal.
• LNC—G.812 local node clock signal.
• SETS—SDH device clock source signal.
• unknown—Unknown synchronization quality.
• DNU—Cannot be used as a clock source.

NOTE:
The reference source whose SSM level is DNU cannot be used as a clock source.

Working mode of the clock monitoring module


The clock monitoring module can be configured to work in either of the following two modes:

Manual mode
In this mode, the clock source is configured manually. The clock monitoring module does not
automatically switch the clock source, but just tracks the primary reference source. If the primary
reference source is lost, the clock monitoring module enters a holdover state.

Auto mode
In auto mode, the clock source is automatically selected by the system. When the primary clock source
is lost or not available, the clock monitoring module selects another clock source based on the following
rules:
• If SSM is not activated, the clock source is determined by reference source priority. If two
reference sources have the same priority, the clock source is selected according to the reference
source number (1 to 18). When the reference source with the highest priority is lost, the next
available reference source with the highest priority is selected. When the former clock source
becomes available again, the system switches back to it.
• If SSM is activated, the clock source is decided by the SSM level. If two reference sources have the
same SSM level, the reference source priority takes effect, in the way described above.

NOTE:
The following clock sources are excluded in clock selection (when SSM is activated):
• Clock sources whose signals are lost.
• Clock sources whose priority is 255.
• Clock sources whose SSM level is DNU (DoNotUse).

Working mode of the port clock


Depending on the source of the port clock, a port can work in either of two clock modes:

Master mode
In this mode, the system uses the clock signals provided by the clock monitoring module. The signals
include local clock signals and clock signals derived from the LPU port. If the device is configured to
derive clock signals from the LPU port, the derived signals are used; otherwise, local clock signals are
used.

Slave mode
In this mode, the system uses line clock signals. Only when you specify the current port as the LPU port
of the device can the system derive the clock source from the line signals received on that port. The
clock source information is then sent to the clock monitoring module, which distributes it to all cards.

IMPORTANT:
When connected to SONET/SDH devices, a device should be set to work in slave clock mode, because the
SONET/SDH clock is more precise than that of the device.

Clock monitoring module configuration task list


Complete these tasks to configure the clock monitoring module:
• Configuring the working mode of the clock monitoring module of the SRPU
• Configuring reference source priority
• Configuring SSM for reference sources:
  - Setting the ways of deriving SSM level
  - Setting the bit position for transmitting Bits clock source information
  - Configuring SSM levels for the reference sources
  - Activating/deactivating SSM
• Setting the input port of the line clock (LPU port)
All tasks are optional.

Configuring the working mode of the clock monitoring module of the SRPU
1. Enter system view.
   Command: system-view
2. Set the working mode of the clock monitoring module.
   Command: clock { auto | manual source source-number }
   Remarks: Optional. auto by default.

NOTE:
After you set the working mode of the clock monitoring module, the device takes some time to respond.
You can check whether your configuration has taken effect with the display clock device command and
the log information.
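
For example (the source number is illustrative):
<Device> system-view
# Let the clock monitoring module select the clock source automatically.
[Device] clock auto
# Or track reference source 5 in manual mode.
[Device] clock manual source 5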

Configuring reference source priority
In auto mode, the clock monitoring module selects and switches to a reference source with a higher
priority based on SSM level and reference source priority. The smaller the value, the higher the priority.
To configure reference source priority:

1. Enter system view.
   Command: system-view
2. Configure the reference source priority.
   Command: clock priority value source source-number
   Remarks: 255 by default.
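
For example, the following minimal sketch prefers reference source 5 over reference source 6 (the priority values and source numbers are illustrative; a smaller value means a higher priority):
<Device> system-view
[Device] clock priority 1 source 5
[Device] clock priority 2 source 6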

Configuring SSM for reference sources


Setting the ways of deriving SSM level
You can derive the SSM level in either of the following two ways:
• For a Bits clock source, the SSM level can be derived from the interface card and reported to the SRPU,
which then sends the SSM level to the clock monitoring module.
• Alternatively, you can configure the SSM level as needed, as described below.
To set the ways of deriving SSM level:

1. Enter system view.
   Command: system-view
2. Set the way to derive the SSM level.
   Command: clock forcessm { on | off } source source-number
   Remarks: Optional. By default, no SSM level is derived from any clock source, which means the SSM level is set by users.

Setting the bit position for transmitting Bits clock source information
The bit position for transmitting Bits clock source information can be sa4, sa5, sa6, sa7, or sa8. These
are five bits in timeslot 0 of the even frames in a multiframe, as specified by ITU-T G.704 CRC4. You can
choose one of the five bits to carry SSM information.
To set the bit for transmitting Bits clock source information:

1. Enter system view.
   Command: system-view
2. Set the bit for transmitting the Bits clock source information.
   Command: clock sa-bit { sa4 | sa5 | sa6 | sa7 | sa8 } source source-number
   Remarks: Optional. sa4 by default.
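
For example, to carry SSM information in bit sa5 for reference source 1 (illustrative values):
<Device> system-view
[Device] clock sa-bit sa5 source 1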

Configuring SSM levels for the reference sources
Follow these rules to manually set the SSM level for a clock source:
• For a line clock source, the SSM level configured is that of the clock source.
• For a Bits clock source, if the input signal is a 2048 kbps (E1) signal and the clock forcessm off source
source-number command is executed, the clock source adopts the SSM level derived from the input
signals, and the configured SSM level is ignored.
• For a Bits clock source, if the input signal is a 2048 kHz signal or a 2048 kbps signal and the clock
forcessm on source source-number command is executed, the clock source adopts the SSM level
configured.
To set the SSM levels:

1. Enter system view.
   Command: system-view
2. Set the SSM level of the reference source.
   Command: clock ssm { dnu | lnc | prc | sets | tnc | unknown } source source-number
   Remarks: unknown by default.

NOTE:
• The reference source whose SSM level is DNU cannot be used as a clock source. Therefore, it does not
participate in clock source switching when the clock monitoring module works in auto mode.
• After you set the SSM level of a reference source, the device takes some time to respond. You can check
whether your configuration has taken effect with the display clock ssm-level command and the log
information.

Activating/deactivating SSM
Whether the SSM levels are obtained through clock signals or configured manually, you must activate
them before they can take effect.
• When SSM is activated, the reference source level is decided first by the SSM level and then by the
priority of the reference source in automatic clock source selection.
• When SSM is not activated, you can still set and view the SSM levels, but they are ignored, and the
reference source level is decided only by the priority of the reference source in automatic clock source
selection.
To activate or deactivate SSM:

1. Enter system view.
   Command: system-view
2. Activate or deactivate SSM.
   Command: clock ssmcontrol { on | off }
   Remarks: Optional. By default, SSM is deactivated.
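
For example, the following minimal sketch manually assigns an SSM level to reference source 1 and activates SSM (the source number and level are illustrative):
<Device> system-view
# Use the manually configured SSM level for reference source 1.
[Device] clock forcessm on source 1
# Set the SSM level of reference source 1 to PRC.
[Device] clock ssm prc source 1
# Activate SSM so that SSM levels take part in clock source selection.
[Device] clock ssmcontrol on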

Setting the input port of the line clock (LPU port)


To set the input port of the line clock (LPU port):

1. Enter system view.
   Command: system-view
2. Set the input port of the line clock.
   Command: clock lpuport interface-type interface-number
   Remarks: Optional. By default, the clock input port is the first configurable port by port name in alphabetical order on the interface card.
3. Enter interface view (ATM, POS, CPOS, E1, or T1 interface).
   Command: interface interface-type interface-number
4. Set the input interface to work in Slave mode.
   Command: clock slave
   Remarks: Optional. By default, the input interface works in Slave mode.

NOTE:
• For a POS interface card, if you set the port clock to work in Slave mode, you must use the clock lpuport
command to set the input port of the card clock source.
• For more information about the clock slave command, see Interface Command Reference.

Displaying and maintaining the clock monitoring module
• Display the current configuration of the clock monitoring module:
  display clock config [ | { begin | exclude | include } regular-expression ]
• Display detailed information about the clock monitoring module:
  display clock device [ | { begin | exclude | include } regular-expression ]
• Display the input clock source port of the line card:
  display clock lpuport [ | { begin | exclude | include } regular-expression ]
• Display the lock state of the clock monitoring module:
  display clock phase-lock-state [ | { begin | exclude | include } regular-expression ]
• Display the priority of all reference sources:
  display clock priority [ | { begin | exclude | include } regular-expression ]
• Display the self-test result of the clock monitoring module:
  display clock self-test-result [ | { begin | exclude | include } regular-expression ]
• Display the state of all reference sources:
  display clock source [ | { begin | exclude | include } regular-expression ]
• Display the SSM level of all reference sources:
  display clock ssm-level [ | { begin | exclude | include } regular-expression ]
• Display the SSM level of the output clock signal:
  display clock ssm-output [ | { begin | exclude | include } regular-expression ]
• Display the clock monitoring module version:
  display clock version [ | { begin | exclude | include } regular-expression ]
• Display the working mode of the clock monitoring module of the SRPU:
  display clock work-mode [ | { begin | exclude | include } regular-expression ]
All these commands are available in any view.

Clock monitoring module configuration example


Network requirements
• Device A and Device B are connected through the POS interfaces. Device A is equipped with clock
monitoring module on its SRPU.
• The synchronized clock of Device A is provided by the clock monitoring module on its SRPU.
• Device B adopts the line clock from Device A to synchronize with the SDH line of Device A.
Figure 33 Network diagram

Configuration procedure
1. Configure Device A (master clock):
# Set the interface POS 3/1/1 to work in master clock mode, using local clock signals.
<DeviceA> system-view
[DeviceA] interface pos 3/1/1
[DeviceA-Pos3/1/1] clock master
2. Configure Device B (slave clock):
# Set the LPU port on Device B to POS 3/1/1.
<DeviceB> system-view
[DeviceB] clock lpuport pos 3/1/1
# Set the interface POS 3/1/1 to work in slave clock mode.
[DeviceB] interface pos 3/1/1
[DeviceB-Pos3/1/1] clock slave
[DeviceB-Pos3/1/1] quit

# Enable the clock source from POS 3/1/1, set the clock to work in manual mode and adopt clock
source 8, which corresponds to slot 3 on an SR8808 router (see Figure 32 for the relationship
between reference source and slot).
[DeviceB] clock manual source 8
Through the above configurations, all the other WAN interface cards get the same clock frequency
derived by the clock card from the port 1 line clock of the POS interface card. In this way, all the
service cards on the device can get precise, reliable, synchronized SDH line interface clock
signals.

Configuring IPC

IPC overview
Inter-Process Communication (IPC) is a reliable communication mechanism among different nodes. The
following are the basic concepts in IPC.

Node
An IPC node is an independent IPC-capable processing unit, typically, a CPU.
• Typically, a router has multiple boards, each with one CPU, and some boards have multiple CPUs
(for example, a service CPU and an OAM CPU). Therefore, a router corresponds to multiple nodes.
• An Intelligent Resilient Framework (IRF) virtual device is an interconnection of several centralized
routers, with each member router corresponding to one or more nodes. Therefore, an IRF virtual
device corresponds to multiple nodes.
Therefore, in actual application, IPC provides a reliable transmission mechanism between different
routers and boards.

Link
An IPC link is a connection between any two IPC nodes. There is one and only one link between any two
nodes for sending and receiving packets. All IPC nodes are fully connected.
IPC links are created when the system is initialized. An IPC node, upon startup, sends handshake packets
to other nodes. If the handshake succeeds, a connection is established.
The system uses link status to identify the link connectivity between two nodes. An IPC node can have
multiple links, each having its own status.

Channel
A channel is a communication interface for an upper layer application module of a node to
communicate with an application module of a peer node. Each node assigns a locally unique channel
number to each upper layer application module for identification.
Data of an upper layer application module is sent to the IPC module through a channel, and the IPC
module sends the data to a peer node through the link. The relationship between a node, link and
channel is as shown in Figure 34.

Figure 34 Relationship between a node, link and channel

Packet sending modes


IPC supports three packet sending modes: unicast, multicast (broadcast is a special case of multicast),
and mixcast, each having a corresponding queue. The upper layer application modules can select one
as needed.
• Unicast—One node sends packets to another node.
• Multicast—One node sends packets to multiple nodes. To use multicast mode, an application
module must create a multicast group that includes a set of nodes. Multicasts destined for this group
are sent to all the nodes in the group. An application module can create multiple multicast groups.
Creation and deletion of a multicast group and multicast group members depend on the
application module.
• Mixcast—Supports both unicast and multicast.

Enabling IPC performance statistics


When IPC performance statistics is enabled, the system collects input and output statistics for a node in
a specified time range (for example, in the last 10 seconds, or in the last 1 minute). When IPC
performance statistics is disabled, statistics collection is stopped. If you execute the display command,
the system displays the statistics at the time when IPC performance statistics was disabled.
To enable IPC performance statistics:

• Enable IPC performance statistics (in user view).
  Command: ipc performance enable { node node-id | self-node } [ channel channel-id ]
  Remarks: By default, the function is disabled.

Displaying and maintaining IPC
• Display IPC node information:
  display ipc node [ | { begin | exclude | include } regular-expression ]
• Display channel information for a node:
  display ipc channel { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]
• Display queue information for a node:
  display ipc queue { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]
• Display multicast group information for a node:
  display ipc multicast-group { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]
• Display packet information for a node:
  display ipc packet { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]
• Display link status information for a node:
  display ipc link { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]
• Display IPC performance statistics for a node:
  display ipc performance { node node-id | self-node } [ channel channel-id ] [ | { begin | exclude | include } regular-expression ]
The display commands are available in any view.
• Clear IPC performance statistics for a node (in user view):
  reset ipc performance [ node node-id | self-node ] [ channel channel-id ]

Configuring SNMP

This chapter provides an overview of the Simple Network Management Protocol (SNMP) and guides you
through the configuration procedure.

SNMP overview
SNMP is an Internet standard protocol widely used for a management station to access and operate the
devices on a network, regardless of their vendors, physical characteristics and interconnect technologies.
SNMP enables network administrators to read and set the variables on managed devices for state
monitoring, troubleshooting, statistics collection, and other management purposes.

SNMP framework
The SNMP framework comprises the following elements:
• SNMP manager—Works on a network management system (NMS) to monitor and manage the
SNMP-capable devices in the network.
• SNMP agent—Works on a managed device to receive and handle requests from the NMS, and
send traps to the NMS when some events, such as interface state change, occur.
• Management Information Base (MIB)—Specifies the variables (for example, interface status and
CPU usage) maintained by the SNMP agent for the SNMP manager to read and set.

SNMP operations
SNMP provides the following basic operations:
• Get—The NMS retrieves SNMP object nodes in an agent MIB.
• Set—The NMS modifies the value of an object node in the agent MIB.
• Notifications—Include traps and informs. The SNMP agent sends traps or informs to report events to
the NMS. The difference between these two types of notification is that informs require
acknowledgement but traps do not. The device supports only traps.

SNMP protocol versions


H3C supports SNMPv1, SNMPv2c, and SNMPv3.
• SNMPv1 uses community names for authentication. To access an SNMP agent, an NMS must use
the same community name as set on the SNMP agent. If the community name used by the NMS is
different from the community name set on the agent, the NMS cannot establish an SNMP session to
access the agent or receive traps and notifications from the agent.
• SNMPv2c also uses community names for authentication. SNMPv2c is compatible with SNMPv1,
but supports more operation modes, data types, and error codes.
• SNMPv3 uses a user-based security model (USM) to secure SNMP communication. You can
configure authentication and privacy mechanisms to authenticate and encrypt SNMP packets for
integrity, authenticity, and confidentiality.

IMPORTANT:
An NMS and an SNMP agent must use the same SNMP version to communicate with each other.

MIB and view-based MIB access control


A MIB view represents a set of MIB objects (or MIB object hierarchies) with certain access privilege and
is identified by a view name. The MIB objects included in the MIB view are accessible while those
excluded from the MIB view are inaccessible.
A MIB view can have multiple view records each identified by a view-name oid-tree pair.
You can control access to the MIB by assigning MIB views to SNMP groups or communities. The
relationship between an NMS, agent and MIB is shown in Figure 35.
Figure 35 Relationship between an NMS, agent and MIB

A MIB stores variables called “nodes” or “objects” in a tree hierarchy and identifies each node with a
unique OID. An OID is a string of numbers that describes the path from the root node to a leaf node. For
example, the object B in Figure 36 is uniquely identified by the OID {1.2.1.1}.
Figure 36 MIB tree


Configuring SNMP basic parameters


SNMPv3 differs from SNMPv1 and SNMPv2c in many aspects. Their configuration procedures are
described in separate sections.

Configuring SNMPv3 basic parameters


To configure SNMPv3 basic parameters:

1. Enter system view.
   Command: system-view
2. Enable the SNMP agent.
   Command: snmp-agent
   Remarks: Optional. Disabled by default. You can also enable the SNMP agent by using any command that begins with snmp-agent.
3. Configure system information for the SNMP agent.
   Command: snmp-agent sys-info { contact sys-contact | location sys-location | version { all | { v1 | v2c | v3 }* } }
   Remarks: Optional. The defaults are Hangzhou H3C Technologies Co., Ltd. for contact, Hangzhou China for location, and SNMP v3 for the version.
4. Configure the local engine ID.
   Command: snmp-agent local-engineid engineid
   Remarks: Optional. The default local engine ID is the company ID plus the device ID.
5. Create or update a MIB view.
   Command: snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]
   Remarks: Optional. By default, the MIB view ViewDefault is predefined and its OID is 1.
6. Configure an SNMPv3 group.
   Command: snmp-agent group v3 group-name [ authentication | privacy ] [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ] [ acl acl-number ]
7. Convert a plaintext key to an encrypted key.
   Command: snmp-agent calculate-password plain-password mode { 3desmd5 | 3dessha | md5 | sha } { local-engineid | specified-engineid string }
   Remarks: Optional.
8. Add a user to the SNMPv3 group.
   Command: snmp-agent usm-user v3 user-name group-name [ [ cipher ] authentication-mode { md5 | sha } auth-password [ privacy-mode { 3des | aes128 | des56 } priv-password ] ] [ acl acl-number ]
   Remarks: If the cipher keyword is specified, the auth-password and priv-password arguments are considered as encrypted keys.
9. Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle.
   Command: snmp-agent packet max-size byte-count
   Remarks: Optional. 1,500 bytes by default.

Configuring SNMPv1 or SNMPv2c basic parameters


To configure SNMPv1 or SNMPv2c basic parameters:

1. Enter system view.
   Command: system-view
2. Enable the SNMP agent.
   Command: snmp-agent
   Remarks: Optional. Disabled by default. You can also enable the SNMP agent by using any command that begins with snmp-agent.
3. Configure system information for the SNMP agent.
   Command: snmp-agent sys-info { contact sys-contact | location sys-location | version { { v1 | v2c | v3 }* | all } }
   Remarks: Optional. The defaults are Hangzhou H3C Technologies Co., Ltd. for contact, Hangzhou China for location, and SNMP v3 for the version.
4. Configure the local engine ID.
   Command: snmp-agent local-engineid engineid
   Remarks: Optional. The default local engine ID is the company ID plus the device ID.
5. Create or update a MIB view.
   Command: snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]
   Remarks: Optional. By default, the MIB view ViewDefault is predefined and its OID is 1.
6. Configure SNMP access right, using either approach:
   - Approach 1: Create an SNMP community.
     Command: snmp-agent community { read | write } community-name [ acl acl-number | mib-view view-name ]*
   - Approach 2: Create an SNMP group, and add a user to the SNMP group.
     Commands:
     snmp-agent group { v1 | v2c } group-name [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ] [ acl acl-number ]
     snmp-agent usm-user { v1 | v2c } user-name group-name [ acl acl-number ]
   Remarks: To be compatible with SNMPv3, use the snmp-agent group command. Make sure that the username is the same as the community name configured on the NMS.
7. Configure the maximum size (in bytes) of SNMP packets for the SNMP agent.
   Command: snmp-agent packet max-size byte-count
   Remarks: Optional. 1,500 bytes by default.

NOTE:
Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB
subtree masks multiple times, the last configuration takes effect. Except the four subtrees in the default MIB
view, you can create up to 16 unique MIB view records.
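
For example, a minimal sketch that permits an SNMPv1/v2c NMS read-only access using the community name public (an illustrative value):
<Device> system-view
# Create a read-only community; an NMS using this community name can read objects in the default MIB view.
[Device] snmp-agent community read public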

Configuring SNMP logging
Introduction to SNMP logging
The SNMP logging function logs the Get requests, Set requests, and Set responses that the NMS has
performed on the SNMP agent, but does not log the Get responses.
• For a Get operation, the agent logs the IP address of the NMS, name of the accessed node, and
node OID.
• For a Set operation, the agent logs the IP address of the NMS, name of the accessed node, node
OID, the assigned value and the error code and error index of the Set response.
The SNMP module sends these logs to the information center as informational messages. You can
configure the information center to output these messages to certain destinations, for example, the
console and the log buffer. For more information about the information center, see the chapter
“Configuring the information center.”

Enabling SNMP logging


Step Command Remarks
1. Enter system view. system-view N/A

2. Enable SNMP logging.
   snmp-agent log { all | get-operation | set-operation }
   Disabled by default.

3. Configure SNMP log output rules.
   info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Optional.
   By default, SNMP logs are output to loghost and logfile only. To output SNMP logs to other destinations such as the console or a monitor terminal, set the output destinations with this command.

NOTE:
• Disable SNMP logging in normal cases to prevent a large number of SNMP logs from degrading device performance.
• The total output size for the node field (MIB node name) and the value field (value of the MIB node) in
each log entry is 1024 bytes. If this limit is exceeded, the information center truncates the data in the
fields.

Configuring SNMP traps


The SNMP agent sends traps to inform the NMS of important events, such as a reboot.
Traps fall into generic traps and vendor-specific traps. Available generic traps include authentication,
coldstart, linkdown, linkup and warmstart. All other traps are vendor-defined.
SNMP traps generated by a module are sent to the information center. You can configure the information
center to enable or disable outputting the traps from a module by their severity and set output
destinations. For more information about the information center, see the chapter “Configuring the
information center.”

Enabling SNMP traps
To enable traps:

Step Command Remarks


1. Enter system view. system-view N/A

2. Enable traps globally.
   snmp-agent trap enable [ acfp [ client | policy | rule | server ] | bfd | bgp | configuration | flash | fr | mpls | ospf [ process-id ] [ ifauthfail | ifcfgerror | ifrxbadpkt | ifstatechange | iftxretransmit | lsdbapproachoverflow | lsdboverflow | maxagelsa | nbrstatechange | originatelsa | vifcfgerror | virifauthfail | virifrxbadpkt | virifstatechange | viriftxretransmit | virnbrstatechange ] * | pim [ neighborloss | invalidregister | invalidjoinprune | rpmappingchange | interfaceelection | electedbsrlostelection | candidatebsrwinelection ] * | standard [ authentication | coldstart | linkdown | linkup | warmstart ] * | system | vrrp [ authfailure | newmaster ] ]
   Optional.
   By default, only the trap function of the voice module is disabled, and the trap function of other modules is enabled.

3. Enter interface view.
   • interface interface-type interface-number
   • controller { cpos | e1 | e3 | t1 | t3 } number
   Use either approach.

4. Enable link state traps.
   enable snmp trap updown
   Optional.
   Enabled by default.

NOTE:
• To generate linkUp or linkDown traps when the link state of an interface changes, you must enable the
linkUp or linkDown trap function globally by using the snmp-agent trap enable [ standard [ linkdown
| linkup ] * ] command and on the interface by using the enable snmp trap updown command.
• After you enable a trap function for a module, whether the module generates traps also depends on the
configuration of the module. For more information, see the configuration guide for each module.
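For example, the following commands enable linkUp/linkDown traps both globally and on an interface, as the note describes. GigabitEthernet 3/1/1 is an illustrative interface:

# Enable linkUp/linkDown traps globally and on GigabitEthernet 3/1/1.
<Sysname> system-view
[Sysname] snmp-agent trap enable standard linkdown linkup
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] enable snmp trap updown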

Configuring the SNMP agent to send traps to a host


Configuration prerequisites
• Complete the basic SNMP settings and check that they are the same as on the NMS. If SNMPv1 or
SNMPv2c is used, you must configure a community name. If SNMPv3 is used, you must configure
an SNMPv3 user and MIB view.
• The device and the NMS can reach each other.

Configuration procedure
The SNMP module buffers the traps received from a module in a trap queue. You can set the size of the
queue, the duration that the queue holds a trap, and trap target (destination) hosts, typically the NMS.
To configure the SNMP agent to send traps to a host:

Step Command Remarks
1. Enter system view. system-view N/A

2. Configure a target host.
   snmp-agent target-host trap address udp-domain { ip-address | ipv6 ipv6-address } [ udp-port port-number ] [ vpn-instance vpn-instance-name ] params securityname security-string [ v1 | v2c | v3 [ authentication | privacy ] ]
   Optional.
   The vpn-instance keyword is applicable in an IPv4 network.
   To send the traps to the NMS, this command is required, and you must specify ip-address as the IP address of the NMS.

3. Configure the source address for traps.
   snmp-agent trap source interface-type { interface-number | interface-number.subnumber }
   Optional.

4. Extend the standard linkUp/linkDown traps.
   snmp-agent trap if-mib link extended
   Optional.
   By default, standard linkUp/linkDown traps are used.

5. Configure the trap queue size.
   snmp-agent trap queue-size size
   Optional.
   The default trap queue size is 100.

6. Configure the trap holding time.
   snmp-agent trap life seconds
   Optional.
   120 seconds by default.

NOTE:
• Extended linkUp/linkDown traps add interface description and interface type to standard
linkUp/linkDown traps. If the NMS does not support extended SNMP messages, use standard
linkUp/linkDown traps.
• When the trap queue is full, the oldest traps are automatically deleted for new traps.
• A trap is deleted when its holding time expires.
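The following minimal sketch combines the steps above: it sets an NMS at 1.1.1.2 as the trap destination over SNMPv2c with community name public, and tunes the trap queue. The address, community name, and values are examples only:

# Configure the trap target host, enlarge the trap queue, and shorten the trap holding time.
<Sysname> system-view
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname public v2c
[Sysname] snmp-agent trap queue-size 200
[Sysname] snmp-agent trap life 60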

Displaying and maintaining SNMP


Task Command Remarks
Display SNMP agent system information, including the contact, physical location, and SNMP version.
   display snmp-agent sys-info [ contact | location | version ]* [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display SNMP agent statistics.
   display snmp-agent statistics [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display the local engine ID.
   display snmp-agent local-engineid [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display SNMP group information.
   display snmp-agent group [ group-name ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display basic information about the trap queue.
   display snmp-agent trap queue [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display the modules that can send traps and their trap status (enable or disable).
   display snmp-agent trap-list [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display SNMPv3 user information.
   display snmp-agent usm-user [ engineid engineid | username user-name | group group-name ] * [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display SNMPv1 or SNMPv2c community information.
   display snmp-agent community [ read | write ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display MIB view information.
   display snmp-agent mib-view [ exclude | include | viewname view-name ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

SNMP configuration examples


SNMPv1/SNMPv2c configuration example
Network requirements
As shown in Figure 37, the NMS (1.1.1.2/24) uses SNMPv1 or SNMPv2c to manage the SNMP agent
(1.1.1.1/24), and the agent automatically sends traps to report events to the NMS.
Figure 37 Network diagram

Configuration procedure
1. Configure the SNMP agent:
# Configure the IP address of the agent as 1.1.1.1/24 and make sure that the agent and the NMS
can reach each other. (Details not shown)
# Specify SNMPv1 and SNMPv2c, and create a read-only community public and a read and write
community private.
<Sysname> system-view
[Sysname] snmp-agent sys-info version v1 v2c
[Sysname] snmp-agent community read public
[Sysname] snmp-agent community write private
# Configure contact and physical location information for the device.
[Sysname] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Sysname] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable SNMP traps, set the NMS at IP address 1.1.1.2/24 as an SNMP trap destination, and
use public as the community name. (To make sure that the NMS can receive traps, specify the same
SNMP version in the snmp-agent target-host command as that on the NMS.)

[Sysname] snmp-agent trap enable
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public v1
2. Configure the SNMP NMS:
# Configure the SNMP version for the NMS as v1 or v2c, create a read-only community and name
it public, and create a read and write community and name it private. For how to configure the
NMS, see the manual for the NMS.

NOTE:
The configurations on the agent and the NMS must match.

3. Verify the configuration:


{ After the above configuration, an SNMP connection is established between the NMS and the
agent. The NMS can get and configure the values of some parameters on the agent through
MIB nodes.
{ Execute the shutdown or undo shutdown command on an idle interface on the agent, and the NMS receives the corresponding trap.

SNMPv3 configuration example


Network requirements
As shown in Figure 38, the NMS (1.1.1.2/24) uses SNMPv3 to monitor and manage the interface status
of the agent (1.1.1.1/24), and the agent automatically sends traps to report events to the NMS.
The NMS and the agent perform authentication when they set up an SNMP session. The authentication
algorithm is MD5 and the authentication key is authkey. The NMS and the agent also encrypt the SNMP
packets between them by using the DES algorithm and the privacy key prikey.
Figure 38 Network diagram

Configuration procedure
1. Configure the SNMP agent:
# Configure the IP address of the agent as 1.1.1.1/24 and make sure that the agent and the NMS
can reach each other. (Details not shown)
# Assign the NMS read and write access to the objects under the interfaces node (OID 1.3.6.1.2.1.2), and deny its access to any other MIB object. Set the user name to managev3user, authentication
algorithm to MD5, authentication key to authkey, the encryption algorithm to DES56, and the
privacy key to prikey.
<Sysname> system-view
[Sysname] undo snmp-agent mib-view ViewDefault
[Sysname] snmp-agent mib-view included test interfaces
[Sysname] snmp-agent group v3 managev3group read-view test write-view test
[Sysname] snmp-agent usm-user v3 managev3user managev3group authentication-mode md5
authkey privacy-mode des56 prikey

# Configure the contact person and physical location information for the device.
[Sysname] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Sysname] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable traps, specify the NMS at 1.1.1.2 as a trap destination, and set the username to
managev3user for the traps.
[Sysname] snmp-agent trap enable
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
managev3user v3 privacy
2. Configure the SNMP NMS:
{ Specify the SNMP version for the NMS as v3.
{ Create two SNMP users: managev3user and public.
{ Enable both authentication and privacy functions.
{ Use MD5 for authentication and DES for encryption.
{ Set the authentication key to authkey and the privacy key to prikey.
{ Set the timeout time and maximum number of retries.
For information about configuring the NMS, see the NMS manual.

NOTE:
The SNMP settings on the agent and the NMS must match.

3. Verify the configuration:


{ After the above configuration, an SNMP connection is established between the NMS and the
agent. The NMS can get and configure the values of some parameters on the agent through
MIB nodes.
{ Execute the shutdown or undo shutdown command on an idle interface on the agent, and the NMS receives the corresponding trap.

SNMP logging configuration example


Network requirements
Configure the SNMP agent (1.1.1.1/24) in Figure 39 to log the SNMP operations performed by the NMS.
Figure 39 Network diagram

(The NMS at 1.1.1.2/24 manages the agent at 1.1.1.1/24. A terminal connects to the console port of the agent.)

Configuration procedure

NOTE:
This example assumes that you have configured all required SNMP settings for the NMS and the agent
(see “SNMPv1/SNMPv2c configuration example” or “SNMPv3 configuration example”).

# Enable logging display on the terminal. (This function is enabled by default, so you can omit this configuration.)
<Sysname> terminal monitor
<Sysname> terminal logging

# Enable the information center to output the system information with the severity level equal to or higher
than informational to the console port.
<Sysname> system-view
[Sysname] info-center source snmp channel console log level informational

# Enable SNMP logging on the agent to log the Get and Set operations of the NMS.
[Sysname] snmp-agent log get-operation
[Sysname] snmp-agent log set-operation

# Verify the configuration:


Use the NMS to get a MIB variable from the agent. The following is a sample log message displayed on
the configuration terminal:
%Jan 1 02:49:40:566 2006 Sysname SNMP/6/GET:
seqNO = <10> srcIP = <1.1.1.2> op = <get> node = <sysName(1.3.6.1.2.1.1.5.0)> value=<>

Use the NMS to set a MIB variable on the agent. The following is a sample log message displayed on
the configuration terminal:
%Jan 1 02:59:42:576 2006 Sysname SNMP/6/SET:
seqNO = <11> srcIP = <1.1.1.2> op = <set> errorIndex = <0> errorStatus =<noError> node
= <sysName(1.3.6.1.2.1.1.5.0)> value = <Sysname>

Table 2 SNMP log message field description

Field Description
Jan 1 02:49:40:566 2006 Time when the SNMP log was generated.

seqNO Serial number automatically assigned to the SNMP log, starting from 0.

srcIP IP address of the NMS.

op SNMP operation type (GET or SET).

node MIB node name and OID of the node instance.

errorIndex Error index, with 0 meaning no error.

errorStatus Error status, with noError meaning no error.

value Value set by the SET operation (this field is null for a GET operation). If the value is a character string that has characters beyond the ASCII range 0 to 127 or invisible characters, the string is displayed in hexadecimal format, for example, value = <81-43>[hex].

NOTE:
The information center can output system event messages to several destinations, including the terminal
and the log buffer. In this example, SNMP log messages are output to the terminal. To configure other
message destinations, see the chapter “Configuring the information center.”

Configuring MIB style

Overview
MIBs fall into public MIBs and private MIBs. A private MIB is attached to a sub-node under the enterprises MIB node (1.3.6.1.4.1). The H3C private MIB has two styles: the H3C compatible MIB style and the H3C new MIB style:
• In the H3C compatible MIB style, the device public MIB is under H3C's enterprise ID 25506, and the private MIB is under the enterprise ID 2011.
• In the H3C new MIB style, both the device public MIB and the private MIB are under H3C's enterprise ID 25506.
These two styles of MIBs implement the same management functions. Your device comes with a MIB loaded, but the MIB style depends on the device model. You can change the MIB style as needed, but make sure that the device uses the same MIB style as the NMS.

Setting the MIB style


To set the MIB style:

Step Command Remarks


1. Enter system view. system-view N/A

2. Set the MIB style.
   mib-style [ new | compatible ]
   Optional.
   By default, the H3C new MIB style is used.

NOTE:
After changing the MIB style, reboot the device to validate the change.
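For example, the following sketch switches the device to the H3C compatible MIB style and verifies the setting; per the note above, the change takes effect only after a reboot (not shown):

# Set the MIB style to compatible and display the current setting.
<Sysname> system-view
[Sysname] mib-style compatible
[Sysname] quit
<Sysname> display mib-style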

Displaying and maintaining MIB style


Task Command Remarks
Display the MIB style.
   display mib-style [ | { begin | exclude | include } regular-expression ]
   Available in any view

Configuring RMON

RMON overview
Introduction
Remote Monitoring (RMON) provides functions including statistics collection and alarm generation for
network management stations (NMSs) to monitor and manage devices. The statistics collection function
enables a managed device to periodically or continuously track traffic information, such as the count of
incoming packets and the count of incoming oversized packets, for the networks attached to its ports. The
alarm function enables a managed device to monitor the value of a specified MIB variable, log the event, and send a trap to the management device when the value reaches the threshold, for example, when the port rate reaches a certain value or the proportion of broadcast packets in the total received packets reaches a certain value.
Both RMON and SNMP are for remote network management.
• RMON is implemented on the basis of the SNMP, and is an enhancement to SNMP. RMON sends
traps to the management device to notify the abnormality of the alarm variables by using the SNMP
trap packet sending mechanism. Although trap is also defined in SNMP, it is usually used to notify
the management device whether some functions on managed devices operate normally and the
change of physical status of interfaces. Traps in RMON and those in SNMP have different
monitored targets, triggering conditions, and report contents.
• RMON provides an efficient means of monitoring subnets and allows SNMP to monitor remote
network devices in a more proactive, effective way. The RMON protocol defines that when an
alarm threshold is reached on a managed device, the managed device sends a trap to the
management device automatically, so the management device does not need to get the values of
MIB variables for multiple times and compare them, thus greatly reducing the communication traffic
between the management device and the managed device. In this way, you can manage a large
scale of network easily and effectively.

Working mechanism
RMON allows multiple monitors (management devices). A monitor provides the following methods for
data gathering:
• Using RMON probes. Management devices can obtain management information from RMON
probes directly and control network resources. In this approach, RMON NMSs can obtain all
RMON MIB information.
• Embedding RMON agents in network devices such as routers, switches, and hubs to provide the
RMON probe function. Management devices exchange data with RMON agents using basic
SNMP commands to gather network management information, which, due to system resources
limitation, may not cover all MIB information but four groups of information, alarm, event, history,
and statistics, in most cases.
The SR8800 routers adopt the second method to implement the RMON agent function. With the RMON agent function, the management device can obtain statistics about the traffic that flows among the managed devices on each connected network segment, as well as error statistics and performance statistics for network management.

RMON groups
Among the RMON groups defined by RMON specifications (RFC 2819), the device uses the statistics
group, history group, event group, and alarm group supported by the public MIB. In addition, H3C also
defines and implements a private alarm group, which enhances the functions of the alarm group. This
section describes the five kinds of groups in general.

Ethernet statistics group


The statistics group defines that the system collects statistics of various traffic information on an interface
(at present, only Ethernet interfaces are supported) and saves the statistics in the Ethernet statistics table
(ethernetStatsTable) for query convenience of the management device. It provides statistics about network
collisions, CRC alignment errors, undersize/oversize packets, broadcasts, multicasts, bytes received,
packets received, and so on.
After the creation of a statistics entry on an interface, the statistics group starts to collect traffic statistics
on the interface. The result of the statistics is a cumulative sum.

History group
The history group defines that the system periodically collects statistics of traffic information on an
interface and saves the statistics in the history record table (ethernetHistoryTable) for query convenience
of the management device. The statistics include bandwidth utilization, number of error packets, and
total number of packets.
A history group collects statistics on packets received on the interface during each period, which can be
configured at the command line interface (CLI).

Event group
The event group defines event indexes and controls the generation and notifications of the events
triggered by the alarms defined in the alarm group and the private alarm group. The events can be
handled in one of the following ways:
• Log—Logging event-related information (the events that occurred, the contents of the events, and so on) in the event log table of the RMON MIB of the device, so that the management device can check the logs through the SNMP Get operation.
• Trap—Sending a trap to notify the occurrence of this event to the network management station
(NMS).
• Log-Trap—Logging event information in the event log table and sending a trap to the NMS.
• None—No action.

Alarm group
The RMON alarm group monitors specified alarm variables, such as total number of received packets
(etherStatsPkts) on an interface. After you define an alarm entry, the system gets the value of the
monitored alarm variable at the specified interval. When the value of the monitored variable is greater than or equal to the rising threshold, a rising event is triggered; when the value of the monitored variable is smaller than or equal to the falling threshold, a falling event is triggered. The event is then handled as defined in the event group.
If the value of a sampled alarm variable crosses the same threshold multiple times, only the first crossing triggers an alarm event; that is, rising alarms and falling alarms alternate. As shown in Figure 40, the value of an alarm variable (the black curve in the figure) crosses the threshold value (the blue line in the figure) multiple times and multiple crossing points are generated, but only the crossing points marked with red crosses trigger alarm events.
Figure 40 Rising and falling alarm events

Private alarm group


The private alarm group calculates the values of alarm variables and compares the result with the defined
threshold, thereby realizing a more comprehensive alarming function.
The system handles the prialarm alarm table entry (as defined by the user) in the following ways:
• Periodically samples the prialarm alarm variables defined in the prialarm formula.
• Calculates the sampled values based on the prialarm formula.
• Compares the result with the defined threshold and generates an appropriate event.

NOTE:
If the calculation result of the private alarm group crosses the same threshold multiple times, only the first crossing triggers an alarm event; that is, rising alarms and falling alarms alternate.

Configuring the RMON statistics function


The RMON statistics function can be implemented by either the Ethernet statistics group or the history group, but the statistics objects are different. Configure a statistics group or a history group accordingly.
• A statistics object of the Ethernet statistics group is a variable defined in the Ethernet statistics table,
and the recorded content is a cumulative sum of the variable from the time the statistics entry is
created to the current time. For detailed configuration, see “Configuring the RMON Ethernet
statistics function.”
• A statistics object of the history group is the variable defined in the history record table, and the
recorded content is a cumulative sum of the variable in each period. For detailed configuration, see
“Configuring the RMON history statistics function.”

Configuring the RMON Ethernet statistics function


To configure the RMON Ethernet statistics function:

Step Command
1. Enter system view. system-view
2. Enter Ethernet interface view. interface interface-type interface-number
3. Create an entry in the RMON statistics table. rmon statistics entry-number [ owner text ]

NOTE:
• Only one statistics entry can be created on one interface.
• You can create up to 100 statistics entries for the device.

Configuring the RMON history statistics function


To configure the RMON history statistics function:

Step Command
1. Enter system view. system-view
2. Enter Ethernet interface view. interface interface-type interface-number
3. Create an entry in the RMON history rmon history entry-number buckets number interval
control table. sampling-interval [ owner text ]

NOTE:
• The entry-number must be globally unique and cannot be used on another interface; otherwise, the
operation fails.
• You can configure multiple history entries on one interface, but the values of the entry-number
arguments must be different, and the values of the sampling-interval arguments must be different too;
otherwise, the operation fails.
• You can create up to 100 history entries for the device.
• When you create an entry in the history table, if the specified buckets number argument exceeds the history table size supported by the device, the entry is still created, but the effective value of the buckets number argument is the history table size supported by the device.

Configuring the RMON alarm function


Configuration prerequisites
• To enable the managed devices to send traps to the NMS when the NMS triggers an alarm event,
configure the SNMP agent as described in the chapter “SNMP configuration” before configuring
the RMON alarm function.
• If the alarm variable is the MIB variable defined in the history group or the Ethernet statistics group,
make sure that the RMON Ethernet statistics function or the RMON history statistics function is
configured on the monitored Ethernet interface; otherwise, the creation of the alarm entry fails, and
no alarm event is triggered.

Configuration procedure
To configure the RMON alarm function:

Step Command Remarks


1. Enter system view. system-view N/A
2. Create an entry in the event table.
   rmon event entry-number [ description string ] { log | log-trap log-trapcommunity | none | trap trap-community } [ owner text ]
   N/A

3. Create an entry in the alarm table or private alarm table.
   • Create an entry in the alarm table:
     rmon alarm entry-number alarm-variable sampling-interval { absolute | delta } rising-threshold threshold-value1 event-entry1 falling-threshold threshold-value2 event-entry2 [ owner text ]
   • Create an entry in the private alarm table:
     rmon prialarm entry-number prialarm-formula prialarm-des sampling-interval { absolute | changeratio | delta } rising-threshold threshold-value1 event-entry1 falling-threshold threshold-value2 event-entry2 entrytype { forever | cycle cycle-period } [ owner text ]
   Use at least one command.

NOTE:
• A new entry cannot be created if its parameters are identical with the corresponding parameters of an
existing entry. See Table 3 for the parameters to be compared for different entries.
• The system limits the total number of entries of each type (see Table 3 for the limits). When the number of entries of a type reaches the maximum number that can be created, the creation fails.

Table 3 Restrictions on the configuration of RMON

Event entries: the parameters to be compared are the event description (description string), event type (log, trap, logtrap, or none), and community name (trap-community or log-trapcommunity). Up to 60 event entries can be created.

Alarm entries: the parameters to be compared are the alarm variable (alarm-variable), sampling interval (sampling-interval), sampling type (absolute or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2). Up to 60 alarm entries can be created.

Prialarm entries: the parameters to be compared are the alarm variable formula (alarm-variable), sampling interval (sampling-interval), sampling type (absolute, changeratio, or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2). Up to 50 prialarm entries can be created.

Displaying and maintaining RMON

Task Command Remarks
Display RMON statistics.
   display rmon statistics [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display RMON history control entry and history sampling information.
   display rmon history [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display RMON alarm configuration information.
   display rmon alarm [ entry-number ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display RMON prialarm configuration information.
   display rmon prialarm [ entry-number ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display RMON events configuration information.
   display rmon event [ entry-number ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Display log information for the specified or all event entries.
   display rmon eventlog [ event-number ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

RMON configuration examples


Ethernet statistics group configuration example
Network requirements
As shown in Figure 41, Agent is connected to a configuration terminal through its console port and to
Server through Ethernet cables.
Gather performance statistics on packets received on GigabitEthernet 3/1/1 through the RMON Ethernet statistics table, so that the administrator can view the statistics on packets received on the interface at any time.
Figure 41 Network diagram

Configuration procedure
# Configure RMON to gather statistics for interface GigabitEthernet 3/1/1.
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] rmon statistics 1 owner user1-rmon

After the above configuration, the system gathers statistics on packets received on GigabitEthernet
3/1/1.
To view the statistics of the interface:
# Execute the display command.
<Sysname> display rmon statistics GigabitEthernet 3/1/1
EtherStatsEntry 1 owned by user1-rmon is VALID.
Interface : GigabitEthernet3/1/1<ifIndex.3>
etherStatsOctets : 21657 , etherStatsPkts : 307
etherStatsBroadcastPkts : 56 , etherStatsMulticastPkts : 34
etherStatsUndersizePkts : 0 , etherStatsOversizePkts : 0
etherStatsFragments : 0 , etherStatsJabbers : 0
etherStatsCRCAlignErrors : 0 , etherStatsCollisions : 0
etherStatsDropEvents (insufficient resources): 0
Packets received according to length:
64 : 235 , 65-127 : 67 , 128-255 : 4
256-511: 1 , 512-1023: 0 , 1024-1518: 0

# Obtain the value of the MIB node directly by executing the SNMP Get operation on the NMS through
software. (Details not shown)

History group configuration example


Network requirements
As shown in Figure 42, Agent is connected to a configuration terminal through its console port and to
Server through Ethernet cables.
Gather statistics on packets received on GigabitEthernet 3/1/1 every minute through the RMON history statistics table, so that the administrator can determine whether data bursts occur on the interface within a short time.
Figure 42 Network diagram

Configuration procedure
# Configure RMON to periodically gather statistics for interface GigabitEthernet 3/1/1.
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/1
[Sysname-GigabitEthernet3/1/1] rmon history 1 buckets 8 interval 60 owner user1

After the above configuration, the system periodically gathers statistics on packets received on GigabitEthernet 3/1/1: the sampling interval is 1 minute, and the statistics of the latest 8 samplings are saved in the history statistics table.

To view the statistics of the interface:
# Execute the display command.
[Sysname-GigabitEthernet3/1/1] display rmon history
HistoryControlEntry 2 owned by null is VALID
Samples interface : GigabitEthernet3/1/1<ifIndex.3>
Sampled values of record 1 :
dropevents : 0 , octets : 834
packets : 8 , broadcast packets : 1
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampled values of record 2 :
dropevents : 0 , octets : 962
packets : 10 , broadcast packets : 3
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampled values of record 3 :
dropevents : 0 , octets : 830
packets : 8 , broadcast packets : 0
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampled values of record 4 :
dropevents : 0 , octets : 933
packets : 8 , broadcast packets : 0
multicast packets : 7 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampled values of record 5 :
dropevents : 0 , octets : 898
packets : 9 , broadcast packets : 2
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampled values of record 6 :
dropevents : 0 , octets : 898
packets : 9 , broadcast packets : 2
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampled values of record 7 :

dropevents : 0 , octets : 766
packets : 7 , broadcast packets : 0
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampled values of record 8 :
dropevents : 0 , octets : 1154
packets : 13 , broadcast packets : 1
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0

# Obtain the value of the MIB node directly by executing the SNMP Get operation on the NMS through
software. (Details not shown)

Alarm group configuration example


Network requirements
As shown in Figure 43, Agent is connected to a console terminal through its console port and to an NMS
across Ethernet.
Do the following:
• Connect GigabitEthernet 3/1/10 to the FTP server. Gather statistics on the traffic of the server on GigabitEthernet 3/1/10 with a sampling interval of five seconds. When the traffic is above or below the thresholds, Agent sends the corresponding traps to the NMS.
• Execute the display rmon statistics command on Agent to display the statistics, and query the
statistics on the NMS.
Figure 43 Network diagram

Configuration procedure
# Configure the SNMP agent. (The parameter values configured on the agent must match those configured on the NMS. This example assumes that SNMPv1 is enabled on the NMS, the read community name is public, the write community name is private, the IP address of the NMS is 1.1.1.2, the authentication protocol is MD5, the authentication key is authkey, the privacy protocol is DES56, and the privacy key is prikey.)
<Sysname> system-view
[Sysname] snmp-agent

[Sysname] snmp-agent community read public
[Sysname] snmp-agent community write private
[Sysname] snmp-agent sys-info version v1
[Sysname] snmp-agent trap enable
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public

# Configure RMON to gather statistics on interface GigabitEthernet 3/1/10.


[Sysname] interface GigabitEthernet 3/1/10
[Sysname-GigabitEthernet3/1/10] rmon statistics 1 owner user1
[Sysname-GigabitEthernet3/1/10] quit

# Create an RMON alarm entry so that when the delta sampling value of node 1.3.6.1.2.1.16.1.1.1.4.1 exceeds 100 or falls below 50, event 1 is triggered to send traps.
[Sysname] rmon event 1 trap public owner user1
[Sysname] rmon alarm 1 1.3.6.1.2.1.16.1.1.1.4.1 5 delta rising-threshold 100 1
falling-threshold 50 1

# Display the RMON alarm entry configuration.


<Sysname> display rmon alarm 1
AlarmEntry 1 owned by null is Valid.
Samples type : delta
Variable formula : 1.3.6.1.2.1.16.1.1.1.4.1<etherStatsOctets.1>
Sampling interval : 5(sec)
Rising threshold : 100(linked with event 1)
Falling threshold : 50(linked with event 1)
When startup enables : risingOrFallingAlarm
Latest value : 0

# Display statistics for interface GigabitEthernet 3/1/10.


<Sysname> display rmon statistics GigabitEthernet 3/1/10
EtherStatsEntry 1 owned by user1 is VALID.
Interface : GigabitEthernet3/1/10<ifIndex.3>
etherStatsOctets : 57329 , etherStatsPkts : 455
etherStatsBroadcastPkts : 53 , etherStatsMulticastPkts : 353
etherStatsUndersizePkts : 0 , etherStatsOversizePkts : 0
etherStatsFragments : 0 , etherStatsJabbers : 0
etherStatsCRCAlignErrors : 0 , etherStatsCollisions : 0
etherStatsDropEvents (insufficient resources): 0
Packets received according to length:
64 : 7 , 65-127 : 413 , 128-255 : 35
256-511: 0 , 512-1023: 0 , 1024-1518: 0

After completing the configuration, you may query alarm events on the NMS. On the monitored device,
alarm event messages are displayed when events occur. The following is a sample output:
[Sysname]
#Aug 27 16:31:34:12 2005 Sysname RMON/2/ALARMFALL:Trap 1.3.6.1.2.1.16.0.2 Alarm table 1
monitors 1.3.6.1.2.1.16.1.1.1.4.1 with sample type 2,has sampled alarm value 0 less than(or
=) 50.

Configuring a sampler

NOTE:
In this documentation, SPC cards refer to the cards prefixed with SPC, for example, SPC-GT48L. SPE cards
refer to the cards prefixed with SPE, for example, SPE-1020-E-II.

Sampler overview
A sampler provides the packet sampling function. A sampler selects a packet out of sequential packets,
and sends it to the service module for processing.
The following sampling modes are available:
• Fixed mode—The first packet is selected out of a number of sequential packets in each sampling.
• Random mode—Any packet might be selected out of a number of sequential packets in each
sampling.
A sampler can be used to sample packets for NetStream, local mirroring, or traffic mirroring. Only the
sampled packets are sent and processed by the traffic monitoring module. Sampling is useful if you have
too much traffic and want to limit the traffic of interest to be analyzed. The sampled data is statistically
accurate and decreases the impact on forwarding capacity of the device.

NOTE:
• SPC cards support only the fixed mode.
• For more information about NetStream, see the chapter “Configuring NetStream.”
• For more information about local mirroring and traffic mirroring, see the chapters “Configuring port
mirroring” and “Configuring traffic mirroring”.

Creating a sampler
To create a sampler:

Step Command Remarks


1. Enter system view. system-view N/A

2. Create a sampler.
   sampler sampler-name mode { fixed | random } packet-interval rate
   The rate argument specifies the sampling rate: one packet is sampled out of 2 to the power of rate sequential packets. For example, if the rate is 8, one packet out of 256 packets (2 to the power of 8) is sampled in each sampling; if the rate is 10, one packet out of 1024 packets (2 to the power of 10) is sampled.
   SPC cards support only the fixed mode.
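For comparison with the fixed-mode example later in this chapter, the following sketch creates a random-mode sampler, which is not supported on SPC cards. The sampler name rand32 is an example; with a rate of 5, one random packet out of every 32 (2 to the power of 5) packets is selected:

# Create sampler rand32 in random sampling mode with a sampling rate of 5.
<Sysname> system-view
[Sysname] sampler rand32 mode random packet-interval 5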

Displaying and maintaining sampler
Task Command Remarks
Display configuration and running information about the sampler.
   display sampler [ sampler-name ] [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
   Available in any view

Clear running information about the sampler.
   reset sampler statistics [ sampler-name ]
   Available in user view

Sampler configuration examples


Using the sampler for NetStream
NOTE:
This feature is supported by SPC cards only.

Network requirements
As shown in Figure 44, configure IPv4 NetStream on Router A to collect statistics on incoming traffic on GigabitEthernet 3/0/1. The NetStream data is sent to port 5000 on the NSC at 12.110.2.2/16. Configure fixed sampling in the inbound direction to select the first packet out of every 256 packets.
Figure 44 Network diagram

Configuration procedure
# Create sampler 256 in fixed sampling mode and set the sampling rate to 8. The first packet of 256 (2
to the power of 8) packets is selected.
<RouterA> system-view
[RouterA] sampler 256 mode fixed packet-interval 8

# Configure GigabitEthernet 3/0/1, enable IPv4 NetStream to collect statistics on the incoming traffic,
and configure the interface to use sampler 256.
[RouterA] interface GigabitEthernet3/0/1
[RouterA-GigabitEthernet3/0/1] ip address 11.110.2.1 255.255.0.0
[RouterA-GigabitEthernet3/0/1] ip netstream mirror-to interface Net-Stream2/0/1 inbound
[RouterA-GigabitEthernet3/0/1] ip netstream sampler 256 inbound

[RouterA-GigabitEthernet3/0/1] quit

# Configure the address and port number of NSC as the destination host for NetStream data export,
leaving the default for source interface.
[RouterA] ip netstream export host 12.110.2.2 5000

Verifying the configuration


# Execute the display sampler command on Router A to view the configuration and running information
about sampler 256. The output shows that Router A received and processed 256 packets, which reached
the number of packets for one sampling, and Router A selected the first packet out of the 256 packets
received on GigabitEthernet 3/0/1.
<RouterA> display sampler 256
Sampler name: 256
Index: 1, Mode: Fixed, Packet-interval: 8

Configuring port mirroring

NOTE:
In this document, SPC cards refer to the cards prefixed with SPC, for example, SPC-GT48L. SPE cards refer
to the cards prefixed with SPE, for example, SPE-1020-E-II.

Introduction
Port mirroring refers to the process of copying the packets that pass through a specified port to the
monitor port connecting to a monitoring device for packet analysis.

Terminologies of port mirroring


Mirroring source
The mirroring source can be one or more monitored ports. Packets (called mirrored packets) passing
through them are copied to a port connecting to a monitoring device for packet analysis. Such a port is
called a source port and the device where the port resides is called a source device.

Mirroring destination
The mirroring destination is the destination port (also known as the monitor port) of mirrored packets and
connects to the data monitoring device. The device where the monitor port resides is called the
destination device. The monitor port forwards mirrored packets to its connecting monitoring device.

NOTE:
A monitor port may receive multiple duplicates of a packet in some cases because it can monitor multiple
mirroring sources. For example, assume that Port 1 is monitoring bidirectional traffic on Port 2 and Port 3
on the same device. If a packet travels from Port 2 to Port 3, two duplicates of the packet will be received
on Port 1.

Mirroring direction
The mirroring direction indicates that the inbound, outbound, or bidirectional traffic can be copied on a
mirroring source.
• Inbound—Copies packets received on a mirroring source.
• Outbound—Copies packets sent out a mirroring source.
• Bidirectional—Copies packets both received and sent on a mirroring source.

Mirroring group
Port mirroring is implemented through mirroring groups, which fall into local, remote source, and remote
destination mirroring groups. For more information about the mirroring groups, see “Port mirroring
classification and implementation.”

Reflector port, egress port, and remote probe VLAN
A reflector port, remote probe VLAN, and an egress port are used for Layer 2 remote port mirroring. The
remote probe VLAN specially transmits mirrored packets to the destination device. Both the reflector port
and egress port reside on a source device and send mirrored packets to the remote probe VLAN. The
egress port must belong to the remote probe VLAN while the reflector port may not. For more information
about the source device, destination device, reflector port, egress port, and remote probe VLAN, see
“Port mirroring classification and implementation.”

Port mirroring classification and implementation


According to the locations of the mirroring source and the mirroring destination, port mirroring falls into
local port mirroring and remote port mirroring.

Local port mirroring


In local port mirroring, the mirroring source and the mirroring destination are on the same device. A
mirroring group that contains the mirroring source and the mirroring destination on the device is called
a local mirroring group.
Figure 45 Local port mirroring implementation

As shown in Figure 45, the source port GigabitEthernet 3/1/1 and monitor port GigabitEthernet 3/1/2
reside on the same device. Packets of GigabitEthernet 3/1/1 are copied to GigabitEthernet 3/1/2,
which then forwards the packets to the data monitoring device for analysis.

Remote port mirroring


In remote port mirroring, the mirroring source and the mirroring destination reside on different devices
and in different mirroring groups. The mirroring group that contains the mirroring source or the mirroring
destination is called a remote source/destination group. The devices between the source devices and
destination device are intermediate devices.
Remote port mirroring falls into Layer 2 and Layer 3 remote port mirroring.
• Layer 2 remote port mirroring—In Layer 2 remote port mirroring, the mirroring source and the
mirroring destination are located on different devices on a same Layer 2 network.
• Layer 3 remote port mirroring—In Layer 3 remote port mirroring, the mirroring source and the
mirroring destination are separated by IP networks. The router does not support Layer 3 remote port
mirroring.

NOTE:
• Layer 2 remote port mirroring can be implemented when a fixed reflector port, configurable reflector
port, or configurable egress port is available on the source device. The configuration method when
either of the first two ports is available on the source device is called reflector port method. You must
configure a reflector port on the source device that has a configurable reflector port. The router only
supports configurable reflector ports.
• Layer 2 remote port mirroring cannot be implemented on an SPE card when a configurable reflector
port is available on the source device.

Figure 46 Layer 2 remote port mirroring implementation (with a reflector port)


Mirroring process in the device

GE3/1/1 GE3/1/3 GE3/1/2

Destinati
GE3/1/3
GE3/1/2 GE3/1/1 GE3/1/2 GE3/1/1 on device
Source Remote Intermediate Remote
GE3/1/1 GE3/1/2
device prone VLAN device probe VLAN

Data monitoring
Host
device

Original packets Source port Reflector port


Mirroring packets Monitor port Common port

On the network shown in Figure 46, the source device copies the packets received on the source port
GigabitEthernet 3/1/1 to the reflector port GigabitEthernet 3/1/3. GigabitEthernet 3/1/3 broadcasts
the packets in the remote probe VLAN and the intermediate device in the VLAN transmits the packets to
the destination device. When receiving these mirrored packets, the destination device compares their
VLAN IDs to the ID of the remote probe VLAN configured in the remote destination group. If the VLAN
IDs of these mirrored packets match the remote probe VLAN ID, the device forwards them to the data
monitoring device through the monitor port.

Configuring local port mirroring


Configuring local mirroring is to configure local mirroring groups.
A local mirroring group comprises one or multiple mirroring ports and one monitor port that are located
on the same router.
To configure local port mirroring:

Step Command Remarks


1. Enter system view. system-view N/A
2. Create a local mirroring group. mirroring-group group-id local N/A

3. Add ports to the port mirroring group as mirroring ports.
   • (Approach I) In system view:
     mirroring-group group-id mirroring-port mirroring-port-list { both | inbound | outbound }
   • (Approach II) In interface view:
     a. interface interface-type interface-number
     b. [ mirroring-group group-id ] mirroring-port { both | inbound | outbound }
     c. quit
   Use either approach. You can add ports to a mirroring group as mirroring ports in either system view or interface view. In system view, you can add multiple ports to a port mirroring group at a time, while in interface view, you can add only the current port to a port mirroring group.

4. Add a port to the mirroring group as the monitor port.
   • (Approach I) In system view:
     mirroring-group group-id monitor-port monitor-port-id
   • (Approach II) In interface view:
     a. interface interface-type interface-number
     b. [ mirroring-group group-id ] monitor-port
   Use either approach. You can add a monitor port to a port mirroring group in either system view or interface view. They achieve the same purpose.

NOTE:
• A local mirroring group is effective only when it has both mirroring ports and the monitor port
configured.
• To make sure that the mirroring function works properly, do not enable STP on the monitor port of a
mirroring group.
• A port mirroring group can contain multiple mirroring ports and only one monitor port.
• A port can belong to only one port mirroring group.

Configuring remote port mirroring


NOTE:
• If the GARP VLAN Registration Protocol (GVRP) is enabled, GVRP may register the remote probe VLAN
to unexpected ports, resulting in undesired duplicates. For more information about GVRP, see Layer 2
—LAN Switching Configuration Guide.
• The router supports remote mirroring only when the system working mode is SPC or SPE.

Configuration prerequisites
Before configuring remote port mirroring, create a static VLAN to be configured as the remote probe
VLAN later.

CAUTION:
• The remote source mirroring group on the source device and the remote destination mirroring group on
the destination device must use the same remote probe VLAN.
• To make sure that the mirroring function works properly, do not connect a network cable to the reflector
port, and disable these functions on the port: STP, IGMP snooping, static ARP, and MAC address
learning.

Configuring a remote source mirroring group


To configure a remote source mirroring group:

Step Command Remarks


1. Enter system view. system-view N/A
2. Create a remote source mirroring group.
   mirroring-group group-id remote-source
   N/A

3. Add ports to the mirroring group as mirroring ports.
   • (Approach I) In system view:
     mirroring-group group-id mirroring-port mirroring-port-list { both | inbound | outbound }
   • (Approach II) In interface view:
     a. interface interface-type interface-number
     b. [ mirroring-group group-id ] mirroring-port { both | inbound | outbound }
     c. quit
   Use either approach. You can add ports to a source port mirroring group in either system view or interface view. They achieve the same purpose.

4. Add a reflector port for the mirroring group.
   • (Approach I) In system view:
     mirroring-group group-id reflector-port reflector-port-id
   • (Approach II) In Ethernet interface view:
     a. interface interface-type interface-number
     b. mirroring-group group-id reflector-port
     c. quit
   Use either approach.

5. Configure the remote probe VLAN for the mirroring group.
   mirroring-group group-id remote-probe vlan rprobe-vlan-id
   N/A

NOTE:
• To avoid router performance degradation, do not add mirroring ports to the remote probe VLAN.
• Only existing static VLANs can be configured as remote probe VLANs. To remove a VLAN that is operating as a remote probe VLAN, first restore it to a normal VLAN. A remote port mirroring group becomes invalid if the corresponding remote probe VLAN is removed.
• Use a remote probe VLAN for remote port mirroring only.
• A port can belong to only one port mirroring group. A VLAN can be the remote probe VLAN of only
one port mirroring group.
• To make sure that the mirroring function works properly, do not enable STP on the monitor port.
• You can configure a reflector port only when the router works in SPC mode.

Configuring a remote destination mirroring group


To configure a remote destination mirroring group:

Step Command Remarks


1. Enter system view. system-view N/A
2. Create a remote destination mirroring group.
   mirroring-group group-id remote-destination
   N/A

3. Configure the remote probe VLAN for the mirroring group.
   mirroring-group group-id remote-probe vlan rprobe-vlan-id
   N/A

4. Add a port to the port mirroring group as the monitor port.
   • (Approach I) In system view:
     mirroring-group group-id monitor-port monitor-port-id
   • (Approach II) In interface view:
     a. interface interface-type interface-number
     b. [ mirroring-group group-id ] monitor-port
     c. quit
   Use either approach, which leads to the same result.

5. Enter destination interface view.
   interface interface-type interface-number
   N/A

6. Add the port to the remote probe VLAN.
   • The port is an access port:
     port access vlan rprobe-vlan-id
   • The port is a trunk port:
     port trunk permit vlan rprobe-vlan-id
   • The port is a hybrid port:
     port hybrid vlan rprobe-vlan-id { tagged | untagged }
   Use any of these three approaches according to the port type.

Displaying and maintaining port mirroring


Task Command Remarks
Display the configuration of a port mirroring group.
   display mirroring-group { group-id | all | local | remote-destination | remote-source } [ | { begin | exclude | include } regular-expression ]
   Available in any view

Port mirroring configuration examples


Local port mirroring configuration example (in source port
mode)
Network requirements
On a network shown in Figure 47:
• Device A connects to the marketing department through GigabitEthernet 3/1/1 and to the
technical department through GigabitEthernet 3/1/2. It connects to the server through
GigabitEthernet 3/1/3.
• Configure local port mirroring in source port mode to enable the server to monitor the bidirectional
traffic of the marketing department and the technical department.
Figure 47 Network diagram


Configuration procedure
1. Create a local mirroring group:
# Create local mirroring group 1.
<DeviceA> system-view
[DeviceA] mirroring-group 1 local
# Configure GigabitEthernet 3/1/1 and GigabitEthernet 3/1/2 as source ports and port
GigabitEthernet 3/1/3 as the monitor port.
[DeviceA] mirroring-group 1 mirroring-port GigabitEthernet3/1/1
GigabitEthernet3/1/2 both
[DeviceA] mirroring-group 1 monitor-port GigabitEthernet3/1/3

# Disable the spanning tree feature on the monitor port GigabitEthernet 3/1/3.
[DeviceA] interface GigabitEthernet3/1/3
[DeviceA-GigabitEthernet3/1/3] undo stp enable
[DeviceA-GigabitEthernet3/1/3] quit
2. Verify the configuration:
# Display the configuration of all mirroring groups.
[DeviceA] display mirroring-group all
mirroring-group 1:
type: local
status: active
mirroring port:
GigabitEthernet3/1/1 both
GigabitEthernet3/1/2 both
monitor port: GigabitEthernet3/1/3

After the configurations are completed, you can monitor all the packets received and sent by the
marketing department and the technical department on the server.

Layer 2 remote port mirroring configuration example (reflector


port configurable)
NOTE:
This configuration example is available only on SPC cards.

Network requirements
On the Layer 2 network shown in Figure 48:
• Device A connects to the marketing department through GigabitEthernet 3/0/1, and to the trunk
port GigabitEthernet 3/0/1 of Device B through the trunk port GigabitEthernet 3/0/2; Device C
connects to the server through GigabitEthernet 3/0/2, and to the trunk port GigabitEthernet
3/0/2 of Device B through the trunk port GigabitEthernet 3/0/1. Device A supports configurable
reflector port configuration.
• Configure Layer 2 remote port mirroring to enable the server to monitor the bidirectional traffic of
the marketing department.

Figure 48 Network diagram

Configuration procedure
1. Configure Device A (the source device):
# Create a remote source group.
<DeviceA> system-view
[DeviceA] mirroring-group 1 remote-source
# Create VLAN 2.
[DeviceA] vlan 2
[DeviceA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN, GigabitEthernet 3/0/1 as a source port, and
GigabitEthernet 3/0/3 as the reflector port in the mirroring group.
[DeviceA] mirroring-group 1 remote-probe vlan 2
[DeviceA] mirroring-group 1 mirroring-port GigabitEthernet3/0/1 both
[DeviceA] mirroring-group 1 reflector-port GigabitEthernet3/0/3
# Configure GigabitEthernet 3/0/2 as a trunk port that permits the packets of VLAN 2 to pass
through.
[DeviceA] interface GigabitEthernet3/0/2
[DeviceA-GigabitEthernet3/0/2] port link-type trunk
[DeviceA-GigabitEthernet3/0/2] port trunk permit vlan 2
[DeviceA-GigabitEthernet3/0/2] quit
# Disable the spanning tree feature on reflector port GigabitEthernet 3/0/3.
[DeviceA] interface GigabitEthernet3/0/3
[DeviceA-GigabitEthernet3/0/3] undo stp enable
[DeviceA-GigabitEthernet3/0/3] quit
2. Configure Device B (the intermediate device):
# Create VLAN 2.
<DeviceB> system-view
[DeviceB] vlan 2
[DeviceB-vlan2] quit
# Configure GigabitEthernet 3/0/1 as a trunk port that permits the packets of VLAN 2 to pass
through.
[DeviceB] interface GigabitEthernet3/0/1
[DeviceB-GigabitEthernet3/0/1] port link-type trunk

[DeviceB-GigabitEthernet3/0/1] port trunk permit vlan 2
[DeviceB-GigabitEthernet3/0/1] quit
# Configure GigabitEthernet 3/0/2 as a trunk port that permits the packets of VLAN 2 to pass
through.
[DeviceB] interface GigabitEthernet3/0/2
[DeviceB-GigabitEthernet3/0/2] port link-type trunk
[DeviceB-GigabitEthernet3/0/2] port trunk permit vlan 2
[DeviceB-GigabitEthernet3/0/2] quit
3. Configure Device C (the destination device):
# Configure GigabitEthernet 3/0/1 as a trunk port that permits the packets of VLAN 2 to pass
through.
<DeviceC> system-view
[DeviceC] interface GigabitEthernet3/0/1
[DeviceC-GigabitEthernet3/0/1] port link-type trunk
[DeviceC-GigabitEthernet3/0/1] port trunk permit vlan 2
[DeviceC-GigabitEthernet3/0/1] quit
# Create a remote destination group.
[DeviceC] mirroring-group 1 remote-destination
# Create VLAN 2.
[DeviceC] vlan 2
[DeviceC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN of the mirroring group and GigabitEthernet
3/0/2 as the monitor port of the mirroring group, disable the spanning tree feature on
GigabitEthernet 3/0/2, and assign the port to VLAN 2.
[DeviceC] mirroring-group 1 remote-probe vlan 2
[DeviceC] interface GigabitEthernet3/0/2
[DeviceC-GigabitEthernet3/0/2] mirroring-group 1 monitor-port
[DeviceC-GigabitEthernet3/0/2] undo stp enable
[DeviceC-GigabitEthernet3/0/2] port access vlan 2
[DeviceC-GigabitEthernet3/0/2] quit
4. Verify the configuration:
After the configuration is completed, you can monitor, on the server, all the packets received and sent by the marketing department.

Configuring traffic mirroring

Traffic mirroring overview


Traffic mirroring refers to the process of copying the specified packets to the specified destination for
packet analysis and monitoring.
You can configure mirroring traffic to an interface, to the CPU, or to a VLAN.
• Mirroring traffic to an interface copies the matching packets on an interface to a destination
interface.
• Mirroring traffic to the CPU copies the matching packets on an interface to a CPU (the CPU of the
board where the traffic mirroring-enabled interface resides).
• Mirroring traffic to a VLAN copies the matching packets on an interface to a VLAN. In this case, all
the ports in the VLAN can receive the mirrored packets. Even if the VLAN does not exist, you can
pre-define the action of mirroring traffic to the VLAN. After the VLAN is created and some ports join
the VLAN, the action of mirroring traffic to the VLAN takes effect automatically.

NOTE:
• For more information about QoS policies, traffic classes, and traffic behaviors, see ACL and QoS
Configuration Guide.
• SPE cards support mirroring both inbound and outbound traffic; SPC cards support mirroring inbound traffic only.

Configuring traffic mirroring


To configure traffic mirroring, you must enter the view of an existing traffic behavior.

NOTE:
In a traffic behavior, the action of mirroring traffic to an interface, the action of mirroring traffic to a CPU,
and the action of mirroring traffic to a VLAN are mutually exclusive.

Mirroring traffic to an interface


To mirror traffic to an interface:

1. Enter system view: system-view
2. Enter traffic behavior view: traffic behavior behavior-name
3. Specify the destination interface for traffic mirroring: mirror-to interface interface-type interface-number

NOTE:
If you configure the mirror-to interface command multiple times in the same traffic behavior, the new configuration overwrites the previous one.

Mirroring traffic to the CPU
To mirror traffic to the CPU:

1. Enter system view: system-view
2. Enter traffic behavior view: traffic behavior behavior-name
3. Mirror traffic to the CPU: mirror-to cpu (The CPU refers to the CPU of the board where the interface resides.)

Mirroring traffic to a VLAN


To mirror traffic to a VLAN:

Step Command
1. Enter system view. system-view
2. Enter traffic behavior view. traffic behavior behavior-name
3. Mirror traffic to a VLAN. mirror-to vlan vlan-id

NOTE:
• You can mirror traffic to a nonexistent VLAN. When the VLAN is created and ports join it, traffic mirroring to the VLAN takes effect automatically.
• If the mirror-to vlan command is configured multiple times for the same traffic behavior, the new configuration overwrites the previous one.
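
For example, the following minimal sketch (the behavior name and VLAN ID are illustrative) mirrors matching traffic to VLAN 2:
<Sysname> system-view
[Sysname] traffic behavior tb1
[Sysname-behavior-tb1] mirror-to vlan 2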

Displaying and maintaining traffic mirroring


• Display traffic behavior configuration information (available in any view): display traffic behavior user-defined [ behavior-name ] [ | { begin | exclude | include } regular-expression ]
• Display QoS policy configuration information (available in any view): display qos policy user-defined [ policy-name [ classifier tcl-name ] ] [ | { begin | exclude | include } regular-expression ]

Traffic mirroring configuration example


Network requirements
As shown in Figure 49, configure the device to enable the server to analyze and monitor all the packets
sent from Host.

Figure 49 Network diagram

Configuring the device


# Enter system view.
<Device> system-view
# Configure ACL 2000 to match all packets.
[Device] acl number 2000
[Device-acl-basic-2000] rule 1 permit
[Device-acl-basic-2000] quit
# Create traffic class classifier 1 and use ACL 2000 as the match criterion.
[Device] traffic classifier 1
[Device-classifier-1] if-match acl 2000
[Device-classifier-1] quit
# Create traffic behavior behavior 1 and configure the action of mirroring traffic to
GigabitEthernet 3/0/2 for the traffic behavior.
[Device] traffic behavior 1
[Device-behavior-1] mirror-to interface GigabitEthernet 3/0/2
[Device-behavior-1] quit
# Create QoS policy policy 1, and associate traffic behavior behavior 1 with traffic class classifier
1.
[Device] qos policy 1
[Device-qospolicy-1] classifier 1 behavior 1
[Device-qospolicy-1] quit
# Apply the QoS policy in the inbound direction of GigabitEthernet 3/0/1.
[Device] interface GigabitEthernet 3/0/1
[Device-GigabitEthernet3/0/1] qos apply policy 1 inbound

After the above configuration is completed, you can analyze and monitor, on the server, all the packets that Host sends.

Configuring NetStream

NOTE:
In this documentation, SPC cards refer to the cards prefixed with SPC, for example, SPC-GT48L. SPE cards
refer to the cards prefixed with SPE, for example, SPE-1020-E-II. NAM cards refer to the cards prefixed
with IM-NAM, for example, IM-NAM-II.

NetStream overview
Conventional traffic statistics collection methods, like SNMP and port mirroring, cannot provide precise
network management because of inflexible statistical methods or high cost (dedicated servers are
required). This calls for a new technology to collect traffic statistics.
NetStream provides statistics on network traffic flows and can be deployed on access, distribution, and
core layers.
The NetStream technology implements the following features:
• Accounting and billing—NetStream provides fine-grained data about the network usage based on
the resources such as lines, bandwidth, and time periods. The Internet service providers (ISPs) can
use the data for billing based on time period, bandwidth usage, application usage, and quality of
service (QoS). The enterprise customers can use this information for department chargeback or cost
allocation.
• Network planning—NetStream data provides key information, for example the autonomous system
(AS) traffic information, for optimizing the network design and planning. This helps maximize the
network performance and reliability when minimizing the network operation cost.
• Network monitoring—Configured on the Internet interface, NetStream allows traffic and bandwidth utilization to be monitored in real time. Based on this, administrators can understand how the network is used and where the bottlenecks are, and can plan resource allocation accordingly.
• User monitoring and analysis—The NetStream data provides detailed information about network
applications and resources. This information helps network administrators efficiently plan and
allocate network resources, and guarantee network security.

NetStream basic concepts


Flow
NetStream is an accounting technology to provide statistics on a per-flow basis. An IPv4 flow is defined
by the 7-tuple elements: destination IP address, source IP address, destination port number, source port
number, protocol number, type of service (ToS), and inbound or outbound interface. The 7-tuple elements
define a unique flow.

NetStream operation
A typical NetStream system comprises the following parts: NetStream data exporter (NDE), NetStream
collector (NSC), and NetStream data analyzer (NDA).
• NDE
The NDE analyzes traffic flows that pass through it, collects necessary data from the target flows,
and exports the data to the NSC. Before exporting data, the NDE may process the data like
aggregation. A device with NetStream configured acts as an NDE.
• NSC
The NSC is usually a program running on UNIX or Windows. It parses the packets sent from the
NDE and stores the statistics in a database for the NDA. The NSC gathers data from multiple
NDEs, and then filters and aggregates the received data.
• NDA
The NDA is a network traffic analysis tool. It collects statistics from the NSC, performs further
processing, and generates various types of reports for traffic billing, network planning, and attack
detection and monitoring applications. Typically, the NDA features a Web-based system for users
to easily obtain, view, and gather the data.
Figure 50 NetStream system

As shown in Figure 50, the following procedure of NetStream data collection and analysis occurs:
1. The NDE, that is the device configured with NetStream, periodically delivers the collected statistics
to the NSC.
2. The NSC processes the statistics, and then sends the results to the NDA.
3. The NDA analyzes the statistics for accounting, network planning, and the like.

NOTE:
• This document focuses on the description and configuration of NDE.
• NSC and NDA are usually integrated into a NetStream server.

NetStream key technologies


Flow aging
The flow aging in NetStream enables the NDE to export NetStream data to the NetStream server.
NetStream creates a NetStream entry for each flow in the cache and each entry stores the flow statistics.

When the timer of the entry expires, the NDE exports the summarized data to the NetStream server in
a specific NetStream version export format.
The following types of NetStream flow aging are available:
• Periodical aging
• Forced aging
• TCP FIN- and RST-triggered aging (it is automatically triggered when a TCP connection is
terminated)

Periodical aging
Periodical aging uses these approaches: inactive flow aging and active flow aging.
• Inactive flow aging
A flow is considered inactive if its statistics have not been changed, that is, no packet for this
NetStream entry arrives in the time specified by the ip netstream timeout inactive command. The
inactive flow entry remains in the cache until the inactive timer expires. Then the inactive flow is
aged out and its statistics, which can no longer be displayed by the display ip netstream cache
command, are sent to the NetStream server. The inactive flow aging guarantees the cache is big
enough for new flow entries.
• Active flow aging
An active flow is aged out when the time specified by the ip netstream timeout active command is
reached, and its statistics are exported to the NetStream server. The device continues to count the
active flow statistics, which can be displayed by the display ip netstream cache command. The
active flow aging exports the statistics of active flows to the NetStream server.

Forced aging
The reset ip netstream statistics command ages out all NetStream entries in the cache and clears the
statistics. This is forced aging. Alternatively, use the ip netstream max-entry command to configure aging
out of entries in the cache when the maximum number of entries is reached.

TCP FIN- and RST-triggered aging


For a TCP connection, when a packet with a FIN or RST flag is sent out, it means that a session is finished.
When a packet with a FIN or RST flag is recorded for a flow with the NetStream entry already created,
the flow is aged out immediately. This type of aging is enabled by default, and cannot be disabled.

NetStream data export


NetStream common data export
NetStream collects statistics of each flow and, when the entry timer expires, exports the data of each
entry to the NetStream server.
Though the data includes the statistics of each flow, this method consumes more bandwidth and CPU, and
requires a large cache size. In most cases, not all the statistics are necessary for analysis.

NetStream aggregation data export


The NetStream aggregation merges the flow statistics according to the aggregation criteria of an
aggregation mode, and sends the summarized data to the NetStream server. This process is the
NetStream aggregation data export, which decreases the bandwidth usage compared to common data
export.

For example, the aggregation mode configured on the NDE is protocol-port, which means to aggregate
statistics of flow entries by protocol number, source port and destination port. Four NetStream entries
record four TCP flows with the same destination address, source port and destination port but different
source addresses. According to the aggregation mode, only one NetStream aggregation flow is created
and sent to the NetStream server.
Table 4 lists the 12 aggregation modes. In each mode, the system merges flows into one aggregation
flow if the aggregation criteria are of the same value. These 12 aggregation modes work independently
and can be configured on the same interface.
Table 4 NetStream aggregation modes

• AS aggregation: source AS number, destination AS number, inbound interface index, outbound interface index.
• Protocol-port aggregation: protocol number, source port, destination port.
• Source-prefix aggregation: source AS number, source address mask length, source prefix, inbound interface index.
• Destination-prefix aggregation: destination AS number, destination address mask length, destination prefix, outbound interface index.
• Prefix aggregation: source AS number, destination AS number, source address mask length, destination address mask length, source prefix, destination prefix, inbound interface index, outbound interface index.
• Prefix-port aggregation: source prefix, destination prefix, source address mask length, destination address mask length, ToS, protocol number, source port, destination port, inbound interface index, outbound interface index.
• ToS-AS aggregation: ToS, source AS number, destination AS number, inbound interface index, outbound interface index.
• ToS-source-prefix aggregation: ToS, source AS number, source prefix, source address mask length, inbound interface index.
• ToS-destination-prefix aggregation: ToS, destination AS number, destination address mask length, destination prefix, outbound interface index.
• ToS-prefix aggregation: ToS, source AS number, source prefix, source address mask length, destination AS number, destination address mask length, destination prefix, inbound interface index, outbound interface index.
• ToS-protocol-port aggregation: ToS, protocol type, source port, destination port, inbound interface index, outbound interface index.
• ToS-BGP-nexthop: ToS, Border Gateway Protocol (BGP) next hop, outbound interface index.

NOTE:
• In an aggregation mode with AS, if the packets are not forwarded according to the BGP routing table,
the statistics on the AS number cannot be obtained.
• In the aggregation mode of ToS-BGP-nexthop, if the packets are not forwarded according to the BGP
routing table, the statistics on the BGP next hop cannot be obtained.

NetStream export formats


NetStream exports data in UDP datagrams in one of the following formats:

• Version 5—Exports original statistics collected based on the 7-tuple elements. The packet format is
fixed and cannot be extended flexibly.
• Version 8—Supports NetStream aggregation data export. The packet formats are fixed and cannot
be extended flexibly.
• Version 9—The most flexible format. It allows users to define templates with different statistics fields.
The template-based feature provides support of different statistics, such as BGP next hop and MPLS
information.

NetStream sampling
NetStream sampling reflects the overall network traffic by collecting statistics on a subset of packets. Transferring fewer statistics also reduces the impact on router performance.
For more information about sampling, see the chapter “Configuring a sampler.”

NetStream configuration task list


Before you configure NetStream, plan the following configurations as needed:
• Determine the router on which to enable NetStream.
• If multiple service flows are passing the NDE, use an ACL or QoS policy to select the target data.
• If enormous traffic flows are on the network, configure NetStream sampling.
• Determine which export format is used for NetStream data export.
• Configure the timer for NetStream flow aging.
• To reduce the bandwidth consumption of NetStream data export, configure NetStream
aggregation.

Figure 51 NetStream configuration flow

Complete these tasks to configure NetStream:

• Configuring NetStream (required)
• Configuring NetStream sampling (optional)
• Configuring attributes of NetStream export data (optional)
• Configuring NetStream flow aging (optional)
• Configuring NetStream data export (required; use at least one approach):
  Configuring NetStream common data export
  Configuring NetStream aggregation data export

Configuring NetStream
Configuring QoS-based NetStream
NOTE:
NetStream-capable cards include NAM cards, SPC cards, and SPE cards. A QoS policy cannot be
applied to the outgoing traffic of interfaces on SPC cards. For information about configuring NetStream
on an SPC card, see “Configuring NetStream on an SPC card.”

Before you configure NetStream, configure QoS first to identify the traffic matching the classification
criteria and configure a traffic behavior with the action of mirroring the identified traffic to a NetStream
interface of a NetStream-capable card (SPC card, SPE card, or NAM card). For more information about
class, traffic behavior, and QoS policy, see ACL and QoS Configuration Guide.
To configure QoS-based NetStream:

1. Enter system view: system-view
2. Enable NetStream: ip netstream (By default, NetStream is disabled.)
3. Define a class and enter its view: traffic classifier classifier-name [ operator { and | or } ]
4. Define a match criterion: if-match match-criteria
5. Return to system view: quit
6. Create a traffic behavior and enter traffic behavior view: traffic behavior behavior-name
7. Configure the action of mirroring traffic to the interface on a NetStream card: mirror-to interface Net-Stream interface-number (By default, no traffic mirroring action is configured.)
8. Return to system view: quit
9. Create a policy and enter its view: qos policy policy-name
10. Specify a behavior for a class in the policy: classifier classifier-name behavior behavior-name
11. Return to system view: quit
12. Enter interface view: interface interface-type interface-number
13. Apply a QoS policy: qos apply policy policy-name { inbound | outbound }

Configuring NetStream on an SPC card


NOTE:
• This feature is supported by SPC cards only.
• To enable NetStream on an interface of an SPC card, NetStream sampling is required.

To configure NetStream on an SPC card:

1. Enter system view: system-view
2. Create a sampler: sampler sampler-name mode fixed packet-interval rate (The rate argument specifies the sampling rate: one packet out of every 2 to the power of rate packets is sampled. For example, if rate is 8, one packet out of 256 packets is sampled in each sampling; if rate is 10, one packet out of 1024 packets is sampled.)
3. Enter interface view: interface interface-type interface-number
4. Enable NetStream sampling: ip netstream sampler sampler-name { inbound | outbound } (By default, NetStream sampling is disabled.)
5. Mirror the sampled traffic to a NetStream interface for statistics collection: ip netstream mirror-to interface Net-Stream interface-number [ backup-interface Net-Stream interface-number ] { inbound | outbound }
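
For example, the following minimal sketch (the sampler name and interface numbers are illustrative) samples one packet out of every 256 (2 to the power of 8) in the inbound direction and mirrors the sampled packets to a NetStream interface:
<Sysname> system-view
[Sysname] sampler ns256 mode fixed packet-interval 8
[Sysname] interface GigabitEthernet3/0/1
[Sysname-GigabitEthernet3/0/1] ip netstream sampler ns256 inbound
[Sysname-GigabitEthernet3/0/1] ip netstream mirror-to interface Net-Stream 2/0/1 inbound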

Configuring NetStream sampling


A sampler must be created by using the sampler command before it can be referenced by NetStream
sampling. For more information about samplers, see the chapter “Configuring a sampler.”

Configuring NetStream sampling


To configure NetStream sampling:

1. Enter system view: system-view
2. Create a traffic behavior and enter traffic behavior view: traffic behavior behavior-name
3. Mirror the traffic to a NetStream interface for statistics collection: mirror-to interface Net-Stream interface-number [ backup-interface Net-Stream interface-number ] sampler sampler-name (Optional. By default, traffic mirroring is disabled.)

NOTE:
• For more information about NetStream sampling configuration on an SPC card, see “Configuring
NetStream on an SPC card.”
• A sampler that is referenced by NetStream sampling cannot be deleted.
• Statistics collected with different sampling modes or rates cannot be aggregated.
• Statistics collected with different sampling modes or rates cannot be exported in the same version 5 or version 8 format.
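
For example, the following minimal sketch (the sampler name, behavior name, and interface number are illustrative) creates a sampler and references it in a traffic behavior so that only sampled packets are mirrored to the NetStream interface:
<Sysname> system-view
[Sysname] sampler ns1 mode fixed packet-interval 8
[Sysname] traffic behavior ns_sample
[Sysname-behavior-ns_sample] mirror-to interface Net-Stream 2/0/1 sampler ns1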

Configuring attributes of NetStream export data
Configuring NetStream export format
NetStream data can be exported in version 5 or version 9 format, and the data fields can be expanded to carry more information, such as:
• Statistics about source AS, destination AS, and peer AS numbers in version 5 or version 9 export format.
• Statistics about the BGP next hop in version 9 format only.
To configure the NetStream export format:

1. Enter system view: system-view
2. Configure the version for the NetStream export format, and specify whether to record AS and BGP next hop information: ip netstream export version 5 [ origin-as | peer-as ], or ip netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ] (Optional. By default, NetStream common data export uses version 5, IPv4 NetStream aggregation data export uses version 8, MPLS flow data is not exported, the peer AS numbers are exported, and the BGP next hop is not exported.)

NOTE:
For more information about AS and BGP, see Layer 3—IP Routing Configuration Guide.

A NetStream entry for a flow records the source IP address and destination IP address, each with two
AS numbers. The source AS from which the flow originates and the peer AS from which the flow travels
to the NetStream-enabled device are for the source IP address; the destination AS to which the flow is
destined and the peer AS to which the NetStream-enabled device passes the flow are for the destination
IP address.
To specify which AS numbers are recorded for the source and destination IP addresses, include the
keyword peer-as or origin-as. For example, as shown in Figure 52, a flow starts from AS 20, passes AS
21 through AS 23, and reaches AS 24. NetStream is enabled on the device in AS 22. If keyword
peer-as is provided, the command records AS 21 as the source AS, and AS 23 as the destination AS.
If keyword origin-as is provided, the command records AS 20 as the source AS and AS 24 as the
destination AS.

Figure 52 Recorded AS information varies with different keywords

(Diagram summary: a flow from AS 20 to AS 24 traverses AS 21, AS 22, and AS 23; NetStream is enabled in AS 22. With peer-as, AS 21 is recorded as the source AS and AS 23 as the destination AS. With origin-as, AS 20 is recorded as the source AS and AS 24 as the destination AS.)
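
For example, to export NetStream common data in version 5 format carrying the origin AS numbers (a minimal sketch):
<Sysname> system-view
[Sysname] ip netstream export version 5 origin-as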

Configuring refresh rate for NetStream version 9 templates


Version 9 is template-based and supports user-defined formats, so the NetStream-enabled router needs
to resend a new template to the NetStream server for an update. If the version 9 format is changed on
the router and not updated on the NetStream server, the server is unable to associate the received
statistics with the proper fields. To avoid this situation, configure the refresh frequency and interval for version
9 templates so that the NetStream server can refresh the templates on time.
To configure the refresh rate for NetStream version 9 templates:

1. Enter system view: system-view
2. Configure the refresh frequency for NetStream version 9 templates: ip netstream export v9-template refresh-rate packet packets (Optional. By default, the version 9 templates are sent every 20 packets.)
3. Configure the refresh interval for NetStream version 9 templates: ip netstream export v9-template refresh-rate time minutes (Optional. By default, the version 9 templates are sent every 30 minutes.)

NOTE:
Both the refresh frequency and the refresh interval can be configured; the template is resent when either condition is met.
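
For example, the following minimal sketch (the values are illustrative) resends the templates every 10 packets or every 60 minutes, whichever comes first:
<Sysname> system-view
[Sysname] ip netstream export v9-template refresh-rate packet 10
[Sysname] ip netstream export v9-template refresh-rate time 60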

Configuring MPLS-aware NetStream
Introduction to MPLS-aware NetStream
When you configure MPLS-aware NetStream, you can set whether to collect and export statistics about
the three labels of MPLS packets. For MPLS packets that carry non-IPv4 data, NetStream records only
statistics on labels (up to three). For MPLS packets that carry IPv4 data, NetStream records statistics on
labels (up to three) in the label stack, forwarding equivalent class (FEC) corresponding to the top label,
and the traditional 7-tuple elements data. For more information about MPLS, see MPLS Configuration
Guide.

Configuring MPLS-aware NetStream


To configure MPLS-aware NetStream:

1. Enter system view: system-view
2. Enable counting and exporting of statistics on MPLS packets: ip netstream mpls [ label-positions { label-position1 [ label-position2 ] [ label-position3 ] } ] [ no-ip-fields ] (By default, no statistics about MPLS packets are counted or exported.)

NOTE:
• Collection of NetStream MPLS packet statistics is supported only when NetStream is enabled.
• To collect statistics about MPLS packets carrying IPv4 data, enable IPv4 NetStream.
• To collect statistics about MPLS packets carrying IPv6 data, enable IPv6 NetStream. For more
information about IPv6 NetStream, see the chapter “IPv6 NetStream configuration.”
• To collect statistics about MPLS packets carrying neither IPv4 nor IPv6 data, enable IPv4 NetStream.
• On a NAM card, to collect statistics about MPLS packets carrying non-IPv4 data, enable IPv4
NetStream. Only the statistics about labels are collected.
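
For example, to record statistics on the labels at stack positions 1 through 3 without the IP fields (a minimal sketch; the label positions are illustrative):
<Sysname> system-view
[Sysname] ip netstream
[Sysname] ip netstream mpls label-positions 1 2 3 no-ip-fields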

Configuring NetStream flow aging


To configure flow aging:

1. Enter system view: system-view
2. Configure periodical aging (optional):
   • Set the aging timer for active flows: ip netstream timeout active minutes (30 minutes by default.)
   • Set the aging timer for inactive flows: ip netstream timeout inactive seconds (30 seconds by default.)
3. Configure forced aging of the NetStream entries (optional):
   a. Set the maximum number of entries that the cache can accommodate, and the processing method when the upper limit is reached: ip netstream max-entry { max-entries | aging | disable-caching } (By default, the maximum number of entries that the NetStream cache can accommodate is 409600.)
   b. Configure forced aging: reset ip netstream statistics (This command also clears the cache.)

NOTE:
You can configure both the aging timer for active flows and the aging timer for inactive flows. When either
timer for a flow expires, the flow is aged out.
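
For example, to age out active flows every 10 minutes and inactive flows after 60 seconds (a minimal sketch; the values are illustrative):
<Sysname> system-view
[Sysname] ip netstream timeout active 10
[Sysname] ip netstream timeout inactive 60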

Configuring NetStream data export


To allow the NDE to export collected statistics to the NetStream server, configure the source interface out
of which the data is sent and the destination address to which the data is sent.

Configuring NetStream common data export


To configure NetStream common data export:

1. Enter system view: system-view
2. Configure the destination address for the NetStream common data export: ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ] (By default, no destination address is configured, in which case the NetStream common data is not exported.)
3. Configure the source interface for NetStream common data export: ip netstream export source interface interface-type interface-number (Optional. By default, the interface where the NetStream data is sent out, that is, the interface connecting to the NetStream server, is used as the source interface. H3C recommends that you connect the network management interface to the NetStream server and configure it as the source interface.)
4. Limit the data export rate: ip netstream export rate rate (Optional. By default, the data export rate is not limited.)
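
For example, the following minimal sketch (the server address, UDP port, and source interface are illustrative) exports common data to a NetStream server:
<Sysname> system-view
[Sysname] ip netstream export host 192.168.1.100 5000
[Sysname] ip netstream export source interface M-Ethernet6/0/0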

Configuring NetStream aggregation data export
NetStream aggregation can be implemented in software or hardware. Generally, the term NetStream
aggregation refers to the software implementation, unless otherwise noted.
The NetStream hardware aggregation directly merges the statistics of data flows at the hardware layer
according to the aggregation criteria of a specific aggregation mode, and stores the NetStream
hardware aggregation data in the cache. When the timer for the hardware aggregation entry expires,
the data is exported. This greatly reduces the resource consumption by NetStream aggregation.
Without hardware aggregation configured, the NetStream data aggregation is implemented by
software.
To configure NetStream aggregation data export:

1. Enter system view: system-view
2. Configure NetStream hardware aggregation: ip netstream aggregation advanced (Optional. By default, NetStream hardware aggregation is disabled.)
3. Set a NetStream aggregation mode and enter its view: ip netstream aggregation { as | destination-prefix | prefix | prefix-port | protocol-port | source-prefix | tos-as | tos-destination-prefix | tos-prefix | tos-protocol-port | tos-source-prefix | tos-bgp-nexthop }
4. Configure the destination address for the NetStream aggregation data export: ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ] (By default, no destination address is configured in NetStream aggregation view; the destination address configured in system view is used. If you expect to export only NetStream aggregation data, configure the destination address in the related aggregation view only.)
5. Configure the source interface for NetStream aggregation data export: ip netstream export source interface interface-type interface-number (Optional. By default, the interface connecting to the NetStream server is used as the source interface. Source interfaces in different aggregation views can be different. If no source interface is configured in aggregation view, the source interface configured in system view is used. H3C recommends that you connect the network management interface to the NetStream server.)
6. Enable the current NetStream aggregation configuration: enable (By default, the current NetStream aggregation configuration is disabled.)

NOTE:
• Configurations in NetStream aggregation view apply to aggregation data export only, and those in
system view apply to common data export. If configurations in NetStream aggregation view are not
provided, the configurations in system view apply to the aggregation data export.
• The aging of NetStream hardware aggregation entries is exactly the same as the aging of NetStream
common data entries.
• NetStream hardware aggregation is supported by NAM cards.
• NetStream hardware aggregation data export and NetStream common data export are mutually
exclusive. Hardware aggregation does not take effect if the destination address for NetStream common
data export is configured (by using the ip netstream export host command in system view).

Displaying and maintaining NetStream


• Display the NetStream entry information in the cache (available in any view): display ip netstream cache [ verbose ] [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
• Display information about NetStream data export (available in any view): display ip netstream export [ | { begin | exclude | include } regular-expression ]
• Display the configuration and status of the NetStream flow record templates (available in any view): display ip netstream template [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
• Clear the cache, and age out and export all NetStream data (available in user view): reset ip netstream statistics

NetStream configuration examples


QoS-based NetStream configuration example
Network requirements
As shown in Figure 53, configure NetStream on Router A to collect statistics on packets passing through
it. Enable NetStream in the inbound direction on GigabitEthernet 6/1/7, and export NetStream data to
UDP port 5000 of the NetStream server at 12.110.2.2/16.
Figure 53 Network diagram

Configuration procedure
# Configure a QoS policy to mirror traffic entering GigabitEthernet 6/1/7 to NetStream interface
2/0/1.
[RouterA] acl number 2005
[RouterA-acl-basic-2005] rule 1 permit source any
[RouterA-acl-basic-2005] quit
[RouterA] traffic classifier ns_ipv4
[RouterA-classifier-ns_ipv4] if-match acl 2005
[RouterA-classifier-ns_ipv4] quit
[RouterA] traffic behavior ns_ipv4
[RouterA-behavior-ns_ipv4] mirror-to interface Net-Stream 2/0/1
[RouterA-behavior-ns_ipv4] quit
[RouterA] qos policy ns_ipv4
[RouterA-qospolicy-ns_ipv4] classifier ns_ipv4 behavior ns_ipv4
[RouterA-qospolicy-ns_ipv4] quit

# Apply the QoS policy to the inbound direction of GigabitEthernet 6/1/7.


[RouterA] interface GigabitEthernet6/1/7
[RouterA-GigabitEthernet6/1/7] qos apply policy ns_ipv4 inbound
[RouterA-GigabitEthernet6/1/7] quit

# Enable NetStream on Router A. Specify to export NetStream data to IP address 12.110.2.2 and port
5000 (the NetStream server), leaving the default for source address.
[RouterA] ip netstream
[RouterA] ip netstream export host 12.110.2.2 5000

NetStream aggregation data export configuration example


Network requirements
As shown in Figure 54, configure NetStream on Router A so that:
• Router A exports NetStream traditional data in version 5 format to port 5000 of the NetStream
server at 4.1.1.1/16.
• Router A performs NetStream aggregation in the modes of AS, protocol-port, source-prefix,
destination-prefix and prefix. Use version 8 export format to send the aggregation data of different
modes to the destination address at 4.1.1.1, with UDP port 2000, 3000, 4000, 6000, and 7000
respectively.

NOTE:
All the routers in the network are running EBGP. For more information about BGP, see Layer 3—IP Routing
Configuration Guide.

Figure 54 Network diagram

Configuration procedure
# Configure network management interface M-Ethernet 6/0/0.
<RouterA> system-view
[RouterA] interface M-Ethernet6/0/0
[RouterA-M-Ethernet6/0/0] ip address 4.1.1.2 255.255.0.0
[RouterA-M-Ethernet6/0/0] quit

# Configure a QoS policy to mirror traffic entering GigabitEthernet 3/1/1 to NetStream interface
2/0/1.
[RouterA] acl number 2005
[RouterA-acl-basic-2005] rule 1 permit source any
[RouterA-acl-basic-2005] quit
[RouterA] traffic classifier ns_ipv4
[RouterA-classifier-ns_ipv4] if-match acl 2005
[RouterA-classifier-ns_ipv4] quit
[RouterA] traffic behavior ns_ipv4
[RouterA-behavior-ns_ipv4] mirror-to interface Net-Stream 2/0/1
[RouterA-behavior-ns_ipv4] quit
[RouterA] qos policy ns_ipv4
[RouterA-qospolicy-ns_ipv4] classifier ns_ipv4 behavior ns_ipv4
[RouterA-qospolicy-ns_ipv4] quit

# Apply the QoS policy to GigabitEthernet 3/1/1.


[RouterA] interface GigabitEthernet 3/1/1
[RouterA-GigabitEthernet3/1/1] qos apply policy ns_ipv4 inbound
[RouterA-GigabitEthernet3/1/1] quit

# Enable NetStream on Router A.


[RouterA] ip netstream

# Configure to export NetStream data in version 5 format and specify the data to include the source AS
and destination AS.
[RouterA] ip netstream export version 5 origin-as

# In system view, configure the destination address for the NetStream traditional data export with the IP
address 4.1.1.1 and port 5000.
[RouterA] ip netstream export host 4.1.1.1 5000
[RouterA] ip netstream export source interface M-Ethernet6/0/0

# Configure the aggregation mode as AS, and in aggregation view configure the destination address for
the NetStream AS aggregation data export.
[RouterA] ip netstream aggregation as
[RouterA-ns-aggregation-as] enable
[RouterA-ns-aggregation-as] ip netstream export host 4.1.1.1 2000
[RouterA-ns-aggregation-as] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-as] quit

# Configure the aggregation mode as protocol-port, and in aggregation view configure the destination
address for the NetStream protocol-port aggregation data export.
[RouterA] ip netstream aggregation protocol-port
[RouterA-ns-aggregation-protport] enable
[RouterA-ns-aggregation-protport] ip netstream export host 4.1.1.1 3000
[RouterA-ns-aggregation-protport] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-protport] quit

# Configure the aggregation mode as source-prefix, and in aggregation view configure the destination
address for the NetStream source-prefix aggregation data export.
[RouterA] ip netstream aggregation source-prefix
[RouterA-ns-aggregation-srcpre] enable
[RouterA-ns-aggregation-srcpre] ip netstream export host 4.1.1.1 4000
[RouterA-ns-aggregation-srcpre] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-srcpre] quit

# Configure the aggregation mode as destination-prefix, and in aggregation view configure the
destination address for the NetStream destination-prefix aggregation data export.
[RouterA] ip netstream aggregation destination-prefix
[RouterA-ns-aggregation-dstpre] enable
[RouterA-ns-aggregation-dstpre] ip netstream export host 4.1.1.1 6000
[RouterA-ns-aggregation-dstpre] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-dstpre] quit

# Configure the aggregation mode as prefix, and in aggregation view configure the destination address
for the NetStream prefix aggregation data export.
[RouterA] ip netstream aggregation prefix
[RouterA-ns-aggregation-prefix] enable
[RouterA-ns-aggregation-prefix] ip netstream export host 4.1.1.1 7000
[RouterA-ns-aggregation-prefix] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-prefix] quit

NetStream aggregation data export on an SPC card configuration example
Network requirements
As shown in Figure 55, configure NetStream on Router A as follows:
• Configure NetStream sampling to reduce the number of packets to be counted, reducing the impact on the router's forwarding performance.
• Configure Router A to export NetStream traditional data in version 5 format to port 5000 of the
NetStream server at 4.1.1.1/16.

• Configure Router A to perform NetStream aggregation in the modes of AS, protocol-port,
source-prefix, destination-prefix and prefix. Use version 8 export format to send the aggregation
data of different modes to the destination address at 4.1.1.1, with UDP port 2000, 3000, 4000,
6000, and 7000 respectively.

NOTE:
All the routers in the network are running EBGP. For more information about BGP, see Layer 3—IP Routing
Configuration Guide.

Figure 55 Network diagram

Configuration procedure
# Configure network management interface M-Ethernet 6/0/0.
<RouterA> system-view
[RouterA] interface M-Ethernet6/0/0
[RouterA-M-Ethernet6/0/0] ip address 4.1.1.2 255.255.0.0
[RouterA-M-Ethernet6/0/0] quit

# Create a sampler named ns1 in fixed sampling mode. Set the sampling rate to 4, which means in each
sampling, one packet out of 16 packets (2 to the power of 4) is sampled.
[RouterA] sampler ns1 mode fixed packet-interval 4

# Enable NetStream sampling in the outbound direction of GigabitEthernet 3/0/2 and mirror the
sampled packets to the NetStream interface.
[RouterA] interface GigabitEthernet 3/0/2
[RouterA-GigabitEthernet3/0/2] ip netstream sampler ns1 outbound
[RouterA-GigabitEthernet3/0/2] ip netstream mirror-to interface Net-Stream 3/0/1 outbound
[RouterA-GigabitEthernet3/0/2] quit

# Enable NetStream on Router A.


[RouterA] ip netstream

# Configure to export NetStream data in version 5 format and specify the data to include the source AS
and destination AS.
[RouterA] ip netstream export version 5 origin-as

# In system view, configure the destination address for the NetStream traditional data export with the IP
address 4.1.1.1 and port 5000.
[RouterA] ip netstream export host 4.1.1.1 5000
[RouterA] ip netstream export source interface M-Ethernet6/0/0

# Configure the aggregation mode as AS, and in aggregation view configure the destination address for
the NetStream AS aggregation data export.
[RouterA] ip netstream aggregation as
[RouterA-ns-aggregation-as] enable
[RouterA-ns-aggregation-as] ip netstream export host 4.1.1.1 2000
[RouterA-ns-aggregation-as] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-as] quit

# Configure the aggregation mode as protocol-port, and in aggregation view configure the destination
address for the NetStream protocol-port aggregation data export.
[RouterA] ip netstream aggregation protocol-port
[RouterA-ns-aggregation-protport] enable
[RouterA-ns-aggregation-protport] ip netstream export host 4.1.1.1 3000
[RouterA-ns-aggregation-protport] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-protport] quit

# Configure the aggregation mode as source-prefix, and in aggregation view configure the destination
address for the NetStream source-prefix aggregation data export.
[RouterA] ip netstream aggregation source-prefix
[RouterA-ns-aggregation-srcpre] enable
[RouterA-ns-aggregation-srcpre] ip netstream export host 4.1.1.1 4000
[RouterA-ns-aggregation-srcpre] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-srcpre] quit

# Configure the aggregation mode as destination-prefix, and in aggregation view configure the
destination address for the NetStream destination-prefix aggregation data export.
[RouterA] ip netstream aggregation destination-prefix
[RouterA-ns-aggregation-dstpre] enable
[RouterA-ns-aggregation-dstpre] ip netstream export host 4.1.1.1 6000
[RouterA-ns-aggregation-dstpre] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-dstpre] quit

# Configure the aggregation mode as prefix, and in aggregation view configure the destination address
for the NetStream prefix aggregation data export.
[RouterA] ip netstream aggregation prefix
[RouterA-ns-aggregation-prefix] enable
[RouterA-ns-aggregation-prefix] ip netstream export host 4.1.1.1 7000
[RouterA-ns-aggregation-prefix] ip netstream export source interface M-Ethernet6/0/0
[RouterA-ns-aggregation-prefix] quit

Configuring IPv6 NetStream

NOTE:
In this documentation, SPC cards refer to the cards prefixed with SPC, for example, SPC-GT48L. SPE cards
refer to the cards prefixed with SPE, for example, SPE-1020-E-II. NAM cards refer to the cards prefixed
with IM-NAM, for example, IM-NAM-II.

IPv6 NetStream overview


Conventional traffic statistics collection methods, like SNMP and port mirroring, cannot provide precise
network management because of inflexible statistical methods or high cost (dedicated servers are
required). This calls for a new technology to collect traffic statistics.
IPv6 NetStream provides statistics on network traffic flows and can be deployed on access, distribution,
and core layers.
The IPv6 NetStream technology implements the following features:
• Accounting and billing—IPv6 NetStream provides fine-grained data about the network usage based
on the resources such as lines, bandwidth, and time periods. The Internet service providers (ISPs)
can use the data for billing based on time period, bandwidth usage, application usage, and quality
of service (QoS). The enterprise customers can use this information for department chargeback or
cost allocation.
• Network planning—IPv6 NetStream data provides key information, for example the autonomous
system (AS) traffic information, for optimizing the network design and planning. This helps
maximize the network performance and reliability when minimizing the network operation cost.
• Network monitoring—Configured on the Internet interface, IPv6 NetStream allows traffic and bandwidth utilization to be monitored in real time. Based on this, administrators can understand how the network is used and where the bottlenecks are, and can plan resource allocation accordingly.
• User monitoring and analysis—The IPv6 NetStream data provides detailed information about
network applications and resources. This information helps network administrators efficiently plan
and allocate network resources, and guarantee network security.

IPv6 NetStream basic concepts


IPv6 flow
IPv6 NetStream is an accounting technology to provide statistics on a per-flow basis. An IPv6 flow is
defined by the 7-tuple elements: destination IPv6 address, source IPv6 address, destination port number,
source port number, protocol number, type of service (ToS), and inbound or outbound interface. The 7-tuple
elements define a unique flow.

IPv6 NetStream operation
A typical IPv6 NetStream system comprises three parts: NetStream data exporter (NDE), NetStream
collector (NSC), and NetStream data analyzer (NDA).
• NDE
The NDE analyzes traffic flows that pass through it, collects necessary data from the target flows,
and exports the data to the NSC. Before exporting data, the NDE may process the data like
aggregation. A device with IPv6 NetStream configured typically acts as an NDE.
• NSC
The NSC is usually a program running on UNIX or Windows. It parses the packets sent from the
NDE and stores the statistics in a database for the NDA. The NSC gathers data from multiple
NDEs.
• NDA
The NDA is a network traffic analysis tool. It collects statistics from the NSC, performs further
processing, and generates various types of reports for traffic billing, network planning, and attack
detection and monitoring applications. Typically, the NDA features a Web-based system for users
to easily obtain, view, and gather the data.
Figure 56 IPv6 NetStream system


As shown in Figure 56, the following procedure of IPv6 NetStream data collection and analysis occurs:
1. The NDE, that is the device configured with IPv6 NetStream, periodically delivers the collected
statistics to the NSC.
2. The NSC processes the statistics, and then sends the results to the NDA.
3. The NDA analyzes the statistics for accounting, network planning, and the like.

NOTE:
• This document focuses on the description and configuration of NDE.
• NSC and NDA are usually integrated into a NetStream server.

IPv6 NetStream key technologies


Flow aging
The flow aging in IPv6 NetStream is a means used by the NDE to export IPv6 NetStream data to the
NetStream server. IPv6 NetStream creates an IPv6 NetStream entry for each active flow in the cache of

the NDE and each entry stores the flow statistics, which will be later exported to the NetStream server.
When the timer of an entry expires, the NDE exports the summarized data to the NetStream server in a
specific IPv6 NetStream version export format. For more information about flow aging types and
configuration, see “Configuring IPv6 NetStream flow aging.”

IPv6 NetStream data export


IPv6 NetStream traditional data export
IPv6 NetStream collects statistics of each flow and, when the entry timer expires, exports the data of
each entry to the NetStream server.
Though the data includes the statistics of each flow, this method consumes more bandwidth and CPU, and
requires a large cache size. In most cases, not all the statistics are necessary for analysis.

IPv6 NetStream aggregation data export


The IPv6 NetStream aggregation merges the flow statistics according to the aggregation criteria of an
aggregation mode, and sends the summarized data to the NetStream server. This process is the IPv6
NetStream aggregation data export, which decreases the bandwidth usage compared to traditional
data export.
Six IPv6 NetStream aggregation modes are supported as listed in Table 5. In each mode, the system
merges flows into one aggregation flow if the aggregation criteria are of the same value. The six
aggregation modes work independently and can be configured on the same interface.
Table 5 IPv6 NetStream aggregation modes

• AS aggregation: source AS number, destination AS number, inbound interface index, outbound interface index.
• Protocol-port aggregation: protocol number, source port, destination port.
• Source-prefix aggregation: source AS number, source address mask length, source prefix, inbound interface index.
• Destination-prefix aggregation: destination AS number, destination address mask length, destination prefix, outbound interface index.
• Prefix aggregation: source AS number, destination AS number, source address mask length, destination address mask length, source prefix, destination prefix, inbound interface index, outbound interface index.
• BGP-nexthop: BGP next hop, outbound interface index.

NOTE:
• In an aggregation mode with AS, if the packets are not forwarded according to the BGP routing table,
the statistics on the AS number cannot be obtained.
• In the aggregation mode of BGP-nexthop, if the packets are not forwarded according to the BGP routing
table, the statistics on the BGP next hop cannot be obtained.

IPv6 NetStream export format


IPv6 NetStream exports data in UDP datagrams in version 9 format.
Version 9 is the most flexible format. Its template-based feature provides support for different
statistics, such as BGP next hop and MPLS information.

IPv6 NetStream configuration task list


Before configuring IPv6 NetStream, plan the following configurations as needed:
• Determine the router on which to enable IPv6 NetStream, that is, the NDE.
• If multiple service flows are passing the NDE, use an ACL or QoS policy to select the target data.
• Determine which export format is used for IPv6 NetStream data export.
• Configure the timer for IPv6 NetStream flow aging.
• To reduce the bandwidth consumption used by IPv6 NetStream data export, configure IPv6
NetStream aggregation.
Complete these tasks to configure IPv6 NetStream:

• Configuring IPv6 NetStream (required)
• Configuring attributes of IPv6 NetStream data export (optional)
• Configuring IPv6 NetStream flow aging (optional)
• Configuring IPv6 NetStream data export (required; use either approach):
  Configuring IPv6 NetStream traditional data export
  Configuring IPv6 NetStream aggregation data export

Configuring IPv6 NetStream
Configuring QoS-based IPv6 NetStream
NOTE:
NetStream-capable cards include NAM cards, SPC cards, and SPE cards. A QoS policy cannot be
applied to the outgoing traffic of interfaces on SPC cards. For information about configuring NetStream
on an SPC card, see “Configuring IPv6 NetStream on an SPC card.”

Before configuring IPv6 NetStream, configure QoS first to identify the traffic matching the classification
criteria and configure a traffic behavior with the action of mirroring the identified traffic to a NetStream
interface of a NetStream-capable card (SPC card, SPE card, or NAM card). For more information about
class, traffic behavior, and QoS policy, see ACL and QoS Configuration Guide.
To configure QoS-based IPv6 NetStream:

1. Enter system view: system-view
2. Enable IPv6 NetStream: ipv6 netstream (By default, IPv6 NetStream is disabled.)
3. Define a class and enter its view: traffic classifier classifier-name [ operator { and | or } ]
4. Define a match criterion: if-match match-criteria
5. Return to system view: quit
6. Create a traffic behavior and enter traffic behavior view: traffic behavior behavior-name
7. Configure the action of mirroring traffic to the NetStream interface: mirror-to interface Net-Stream interface-number (By default, no traffic mirroring action is configured.)
8. Return to system view: quit
9. Create a policy and enter its view: qos policy policy-name
10. Specify a behavior for a class in the policy: classifier classifier-name behavior behavior-name
11. Return to system view: quit
12. Enter interface view: interface interface-type interface-number
13. Apply a QoS policy: qos apply policy policy-name { inbound | outbound }

Configuring IPv6 NetStream on an SPC card

NOTE:
• This feature is supported by SPC cards only.
• To enable IPv6 NetStream on an SPC card, NetStream sampling is required.

To configure IPv6 NetStream on an SPC card:

1. Enter system view: system-view
2. Create a sampler: sampler sampler-name mode fixed packet-interval rate (The rate argument specifies the sampling rate: one packet out of every 2 to the power of rate packets is sampled. For example, if rate is 8, one packet out of 256 packets is sampled in each sampling; if rate is 10, one packet out of 1024 packets is sampled.)
3. Enter interface view: interface interface-type interface-number
4. Enable NetStream sampling: ip netstream sampler sampler-name { inbound | outbound } (By default, NetStream sampling is disabled.)
5. Mirror the sampled traffic to a NetStream interface for statistics collection: ip netstream mirror-to interface Net-Stream interface-number [ backup-interface Net-Stream interface-number ] { inbound | outbound }

Configuring attributes of IPv6 NetStream data export
Configuring IPv6 NetStream export format
IPv6 NetStream data is exported in version 9 format, and the data fields can be expanded to carry more information, such as:
• Statistics about source AS, destination AS, and peer AS numbers in version 9 format.
• Statistics about the BGP next hop in version 9 format.
To configure the IPv6 NetStream export format:

1. Enter system view: system-view
2. Configure the version for the IPv6 NetStream export format, and specify whether to record AS and BGP next hop information: ipv6 netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ] (Optional. By default, version 9 format is used to export IPv6 NetStream traditional data, IPv6 NetStream aggregation data, and MPLS flow data with IPv6 fields; the peer AS numbers are recorded; the BGP next hop is not recorded.)
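
For example, to record the origin AS numbers and the BGP next hop in the exported version 9 records (a minimal sketch):
<Sysname> system-view
[Sysname] ipv6 netstream export version 9 origin-as bgp-nexthop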

Configuring refresh rate for IPv6 NetStream version 9 templates
Version 9 is template-based and supports user-defined formats, so the NetStream-enabled router needs
to resend the new template to the NetStream server for an update. If the version 9 format is changed on
the router and not updated on the NetStream server, the server is unable to associate the received
statistics with the proper fields. To avoid this situation, configure the refresh frequency and interval for version
9 templates so that the NetStream server can refresh the templates on time.
To configure the refresh rate for IPv6 NetStream version 9 templates:

1. Enter system view: system-view
2. Configure the refresh frequency for NetStream version 9 templates: ipv6 netstream export v9-template refresh-rate packet packets (Optional. By default, the version 9 templates are sent every 20 packets.)
3. Configure the refresh interval for NetStream version 9 templates: ipv6 netstream export v9-template refresh-rate time minutes (Optional. By default, the version 9 templates are sent every 30 minutes.)

NOTE:
The refresh frequency and the refresh interval can both be configured; the template is resent when either condition is met.
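For example, the following is a minimal sketch (the values are illustrative): resend the version 9 templates every 100 exported packets and every 10 minutes, whichever comes first.
[Sysname] ipv6 netstream export v9-template refresh-rate packet 100
[Sysname] ipv6 netstream export v9-template refresh-rate time 10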

Configuring MPLS-aware NetStream


Introduction to MPLS-aware NetStream
When you configure MPLS-aware NetStream, you can set whether to collect and export statistics about the labels of MPLS packets. For MPLS packets that carry IP data, NetStream records statistics on the labels (up to three) in the label stack, the forwarding equivalence class (FEC) corresponding to the top label, and the traditional 7-tuple elements.

Configuring MPLS-aware NetStream


To configure MPLS-aware NetStream:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Configure the router to count and export statistics on MPLS packets.
   Command: ip netstream mpls [ label-positions { label-position1 [ label-position2 ] [ label-position3 ] } ] [ no-ip-fields ]
   Remarks: By default, no statistics about MPLS packets are counted or exported.

NOTE:
• Collection of NetStream MPLS packet statistics is supported only when NetStream is enabled.
• To collect statistics about MPLS packets carrying IPv4 data, enable IPv4 NetStream.
• To collect statistics about MPLS packets carrying IPv6 data, enable IPv6 NetStream.
• To collect statistics about MPLS packets carrying neither IPv4 nor IPv6 data, enable IPv4 NetStream.
• On a NAM card, to collect statistics about MPLS packets carrying non-IPv4 data, enable IPv4
NetStream. Only the statistics about labels are collected.
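For example, the following is a minimal sketch (the label positions are illustrative): record the first, second, and third labels in the stack without the IP fields.
[Sysname] ip netstream mpls label-positions 1 2 3 no-ip-fields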

Configuring IPv6 NetStream flow aging


Flow aging approaches
Three types of IPv6 NetStream flow aging are available:
• Periodical aging
• Forced aging
• TCP FIN- and RST-triggered aging (it is automatically triggered when a TCP connection is
terminated)

Periodical aging
Periodical aging uses two approaches:
• Inactive flow aging
A flow is considered inactive if no packet for its IPv6 NetStream entry arrives within the time specified by the ipv6 netstream timeout inactive command. The inactive flow entry remains in the cache until the inactive timer expires. Then the inactive flow is aged out and its statistics, which can no longer be displayed by the display ipv6 netstream cache command, are sent to the NetStream server. Inactive flow aging guarantees that the cache is big enough for new flow entries.
• Active flow aging
An active flow is aged out when the time specified by the ipv6 netstream timeout active command
is reached, and its statistics are exported to the NetStream server. The device continues to count
the active flow statistics, which can be displayed by the display ipv6 netstream cache command.
The active flow aging exports the statistics of active flows to the NetStream server.

Forced aging
The reset ipv6 netstream statistics command ages out all IPv6 NetStream entries in the cache and clears
the statistics. This is forced aging. Alternatively, use the ipv6 netstream max-entry command to configure
aging out of entries in the cache when the maximum number of entries is reached.

TCP FIN- and RST-triggered aging
For a TCP connection, when a packet with a FIN or RST flag is sent out, it means that a session is finished.
Therefore, when a packet with a FIN or RST flag is recorded for a flow with the IPv6 NetStream entry
already created, the flow is aged out immediately. However, if the packet with a FIN or RST flag is the first
packet of a flow, a new IPv6 NetStream entry is created instead of being aged out. This type of aging is
enabled by default, and cannot be disabled.

Configuring IPv6 NetStream flow aging


To configure flow aging:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Configure periodical aging.
   Commands:
   • Set the aging timer for active flows: ipv6 netstream timeout active minutes
   • Set the aging timer for inactive flows: ipv6 netstream timeout inactive seconds
   Remarks: Optional. By default, the aging timer for active flows is 30 minutes, and the aging timer for inactive flows is 30 seconds.
3. Configure forced aging of the IPv6 NetStream entries.
   Commands:
   a. Set the maximum number of entries that the cache can accommodate, and the processing method when the upper limit is reached: ipv6 netstream max-entry { max-entries | aging | disable-caching }
   b. Exit to user view: quit
   c. Configure forced aging: reset ipv6 netstream statistics
   Remarks: Optional. By default, the maximum number of entries is 409600. The reset ipv6 netstream statistics command also clears the cache.
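For example, the following is a minimal sketch (the timer values are illustrative): age out active flows every 60 minutes, age out inactive flows after 60 seconds, and keep the cache at its default upper limit of 409600 entries.
[Sysname] ipv6 netstream timeout active 60
[Sysname] ipv6 netstream timeout inactive 60
[Sysname] ipv6 netstream max-entry 409600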

Configuring IPv6 NetStream data export


To allow the NDE to export collected statistics to the NetStream server, configure the source interface out
of which the data is sent and the destination address to which the data is sent.

Configuring IPv6 NetStream traditional data export


Before configuring the IPv6 NetStream traditional data export, enable IPv6 NetStream first.

To configure IPv6 NetStream traditional data export:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Configure the destination address for the IPv6 NetStream traditional data export.
   Command: ipv6 netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
   Remarks: By default, no destination address is configured, in which case the IPv6 NetStream traditional data is not exported.
3. Configure the source interface for IPv6 NetStream traditional data export.
   Command: ipv6 netstream export source interface interface-type interface-number
   Remarks: Optional. By default, the interface connecting to the NetStream server is used as the source interface. H3C recommends that you connect the network management interface to the NetStream server and configure it as the source interface.
4. Limit the data export rate.
   Command: ipv6 netstream export rate rate
   Remarks: Optional. By default, the data export rate is not limited.
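For example, the following is a minimal sketch (the address and interface are illustrative): export traditional data to UDP port 5000 on the NetStream server 1.1.1.2, sourcing the export packets from GigabitEthernet 1/1/2.
[Sysname] ipv6 netstream export host 1.1.1.2 5000
[Sysname] ipv6 netstream export source interface GigabitEthernet 1/1/2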

Configuring IPv6 NetStream aggregation data export


Before configuring the IPv6 NetStream aggregation data export, enable IPv6 NetStream first.
To configure IPv6 NetStream aggregation data export:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Set an IPv6 NetStream aggregation mode and enter its view.
   Command: ipv6 netstream aggregation { as | destination-prefix | prefix | protocol-port | source-prefix | bgp-nexthop }
   Remarks: N/A
3. Configure the destination address for the IPv6 NetStream aggregation data export.
   Command: ipv6 netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
   Remarks: By default, no destination address is configured in IPv6 NetStream aggregation view; the default destination address is the one configured in system view, if any. If you expect to export only IPv6 NetStream aggregation data, configure the destination address in the related aggregation view only.
4. Configure the source interface for IPv6 NetStream aggregation data export.
   Command: ipv6 netstream export source interface interface-type interface-number
   Remarks: Optional. By default, the interface connecting to the NetStream server is used as the source interface. Source interfaces in different aggregation views can be different. If no source interface is configured in aggregation view, the source interface configured in system view, if any, is used. H3C recommends that you connect the network management interface to the NetStream server.
5. Enable the current IPv6 NetStream aggregation configuration.
   Command: enable
   Remarks: By default, the current IPv6 NetStream aggregation configuration is disabled.

NOTE:
Configurations in IPv6 NetStream aggregation view apply to aggregation data export only, and those in
system view apply to traditional data export. If configurations in IPv6 NetStream aggregation view are not
provided, the configurations in system view apply to the aggregation data export.

Displaying and maintaining IPv6 NetStream


Display the IPv6 NetStream entry information in the cache (available in any view):
  display ipv6 netstream cache [ verbose ] [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
Display information about IPv6 NetStream data export (available in any view):
  display ipv6 netstream export [ | { begin | exclude | include } regular-expression ]
Display the configuration and status of the NetStream flow record templates (available in any view):
  display ipv6 netstream export template [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
Clear the cache, and age out and export all IPv6 NetStream data (available in user view):
  reset ipv6 netstream statistics

IPv6 NetStream configuration examples


IPv6 NetStream traditional data export configuration example
Network requirements
As shown in Figure 57, configure IPv6 NetStream on Router A to collect statistics on packets passing through it. Configure IPv6 NetStream in the inbound direction on GigabitEthernet 1/1/1, and export IPv6 NetStream traditional data to UDP port 5000 of the NetStream server at 1.1.1.2/16.

Figure 57 Network diagram (Router A reaches the NetStream server at 1.1.1.2/16 across the network through GigabitEthernet 1/1/2 at 1.1.1.1/16; 15::1/64 is the IPv6 address shown on the Router A side)

Configuration procedure
# Configure interface GigabitEthernet 1/1/2.
<RouterA> system-view
[RouterA] ipv6
[RouterA] interface GigabitEthernet1/1/2
[RouterA-GigabitEthernet1/1/2] ip address 1.1.1.1 255.255.0.0
[RouterA-GigabitEthernet1/1/2] quit

# Configure a QoS policy to mirror traffic entering GigabitEthernet 1/1/1 to NetStream interface
2/0/1.
[RouterA] acl ipv6 number 2000
[RouterA-acl6-basic-2000] rule 1 permit source any
[RouterA-acl6-basic-2000] quit
[RouterA] traffic classifier ns_ipv6
[RouterA-classifier-ns_ipv6] if-match acl ipv6 2000
[RouterA-classifier-ns_ipv6] quit
[RouterA] traffic behavior ns_ipv6
[RouterA-behavior-ns_ipv6] mirror-to interface Net-Stream 2/0/1
[RouterA-behavior-ns_ipv6] quit
[RouterA] qos policy ns_ipv6
[RouterA-qospolicy-ns_ipv6] classifier ns_ipv6 behavior ns_ipv6
[RouterA-qospolicy-ns_ipv6] quit

# Configure interface GigabitEthernet 1/1/1, and apply the QoS policy to the inbound direction of
GigabitEthernet 1/1/1.
[RouterA] int GigabitEthernet1/1/1
[RouterA-GigabitEthernet1/1/1] ipv6 address 10::1/64
[RouterA-GigabitEthernet1/1/1] qos apply policy ns_ipv6 inbound
[RouterA-GigabitEthernet1/1/1] quit

# Enable IPv6 NetStream on Router A. Specify to export NetStream data to IP address 1.1.1.2 and port
5000 (the NetStream server), leaving the default for source address.
[RouterA] ipv6 netstream
[RouterA] ipv6 netstream export host 1.1.1.2 5000

IPv6 NetStream aggregation data export configuration example
Network requirements
As shown in Figure 58, configure IPv6 NetStream on Router A so that:
• Router A exports IPv6 NetStream traditional data in version 9 format to UDP port 5000 of the NetStream server at 3.1.1.2/16.
• Router A performs IPv6 NetStream aggregation in the AS, protocol-port, source-prefix, destination-prefix, and prefix modes, and sends the aggregation data to the same destination address with UDP ports 2000, 3000, 4000, 6000, and 7000 for the respective modes.

NOTE:
All the routers in the network are running IPv6 EBGP. For more information about IPv6 BGP, see Layer
3—IP Routing Configuration Guide.

Figure 58 Network diagram

Configuration procedure
# Configure interfaces GigabitEthernet 1/1/1 and GigabitEthernet 1/1/2.
<RouterA> system-view
[RouterA] ipv6
[RouterA] interface GigabitEthernet 1/1/1
[RouterA-GigabitEthernet1/1/1] ipv6 address 15::2 64
[RouterA-GigabitEthernet1/1/1] quit
[RouterA] interface GigabitEthernet 1/1/2
[RouterA-GigabitEthernet1/1/2] ipv6 address 20::1 64
[RouterA-GigabitEthernet1/1/2] quit
[RouterA] interface GigabitEthernet 1/1/3
[RouterA-GigabitEthernet1/1/3] ip address 3.1.1.1 255.255.0.0
[RouterA-GigabitEthernet1/1/3] quit

# Configure a QoS policy to mirror traffic entering GigabitEthernet 1/1/1 to NetStream interface
2/0/1.
[RouterA] acl ipv6 number 2000
[RouterA-acl6-basic-2000] rule 1 permit source any
[RouterA-acl6-basic-2000] quit

[RouterA] traffic classifier ns_ipv6
[RouterA-classifier-ns_ipv6] if-match acl ipv6 2000
[RouterA-classifier-ns_ipv6] quit
[RouterA] traffic behavior ns_ipv6
[RouterA-behavior-ns_ipv6] mirror-to interface Net-Stream 2/0/1
[RouterA-behavior-ns_ipv6] quit
[RouterA] qos policy ns_ipv6
[RouterA-qospolicy-ns_ipv6] classifier ns_ipv6 behavior ns_ipv6
[RouterA-qospolicy-ns_ipv6] quit

# Apply the QoS policy to the inbound direction of GigabitEthernet 1/1/1.


[RouterA] int GigabitEthernet1/1/1
[RouterA-GigabitEthernet1/1/1] qos apply policy ns_ipv6 inbound
[RouterA-GigabitEthernet1/1/1] quit

# Enable IPv6 NetStream.


[RouterA] ipv6 netstream

# Configure to export IPv6 NetStream data in version 9 format and specify the data to include the source
AS, destination AS, and the BGP next hop.
[RouterA] ipv6 netstream export version 9 origin-as bgp-nexthop

# Set the maximum number of entries that the cache can accommodate to 409600.
[RouterA] ipv6 netstream max-entry 409600

# In system view, configure the destination address for the IPv6 NetStream traditional data export with the IP address 3.1.1.2 and UDP port 5000, and specify the source interface.
[RouterA] ipv6 netstream export host 3.1.1.2 5000
[RouterA] ipv6 netstream export source interface GigabitEthernet 1/1/2

# Configure the aggregation mode as AS, and in aggregation view configure the destination address for
the IPv6 NetStream AS aggregation data export.
[RouterA] ipv6 netstream aggregation as
[RouterA-ns6-aggregation-as] enable
[RouterA-ns6-aggregation-as] ipv6 netstream export host 3.1.1.2 2000
[RouterA-ns6-aggregation-as] quit

# Configure the aggregation mode as protocol-port, and in aggregation view configure the destination
address for the IPv6 NetStream protocol-port aggregation data export.
[RouterA] ipv6 netstream aggregation protocol-port
[RouterA-ns6-aggregation-protport] enable
[RouterA-ns6-aggregation-protport] ipv6 netstream export host 3.1.1.2 3000
[RouterA-ns6-aggregation-protport] quit

# Configure the aggregation mode as source-prefix, and in aggregation view configure the destination
address for the IPv6 NetStream source-prefix aggregation data export.
[RouterA] ipv6 netstream aggregation source-prefix
[RouterA-ns6-aggregation-srcpre] enable
[RouterA-ns6-aggregation-srcpre] ipv6 netstream export host 3.1.1.2 4000
[RouterA-ns6-aggregation-srcpre] quit

# Configure the aggregation mode as destination-prefix, and in aggregation view configure the
destination address for the IPv6 NetStream destination-prefix aggregation data export.
[RouterA] ipv6 netstream aggregation destination-prefix

[RouterA-ns6-aggregation-dstpre] enable
[RouterA-ns6-aggregation-dstpre] ipv6 netstream export host 3.1.1.2 6000
[RouterA-ns6-aggregation-dstpre] quit

# Configure the aggregation mode as prefix, and in aggregation view configure the destination address
for the IPv6 NetStream prefix aggregation data export.
[RouterA] ipv6 netstream aggregation prefix
[RouterA-ns6-aggregation-prefix] enable
[RouterA-ns6-aggregation-prefix] ipv6 netstream export host 3.1.1.2 7000
[RouterA-ns6-aggregation-prefix] quit

Configuration example for IPv6 NetStream aggregation data export on an SPC card
Network requirements
As shown in Figure 59, configure IPv6 NetStream on Router A as follows:
• Configure NetStream sampling to reduce the number of packets to be counted, reducing the impact on the router's forwarding performance.
• Configure Router A to export IPv6 NetStream traditional data in version 9 format to UDP port 5000 of the NetStream server at 3.1.1.2/16.
• Configure Router A to perform IPv6 NetStream aggregation in the AS, protocol-port, source-prefix, destination-prefix, and prefix modes, and to send the aggregation data to the same destination address with UDP ports 2000, 3000, 4000, 6000, and 7000 for the respective modes.

NOTE:
All the routers in the network are running IPv6 EBGP. For more information about IPv6 BGP, see Layer
3—IP Routing Configuration Guide.

Figure 59 Network diagram

Configuration procedure
# Configure interfaces GigabitEthernet 3/0/1 and GigabitEthernet 1/1/2.
<RouterA> system-view
[RouterA] ipv6
[RouterA] interface GigabitEthernet 3/0/1
[RouterA-GigabitEthernet3/0/1] ipv6 address 15::2 64

[RouterA-GigabitEthernet3/0/1] quit
[RouterA] interface GigabitEthernet 1/1/2
[RouterA-GigabitEthernet1/1/2] ipv6 address 20::1 64
[RouterA-GigabitEthernet1/1/2] quit
[RouterA] interface GigabitEthernet 1/1/3
[RouterA-GigabitEthernet1/1/3] ip address 3.1.1.1 255.255.0.0
[RouterA-GigabitEthernet1/1/3] quit

# Create a sampler named ns1 in fixed sampling mode. Set the sampling rate to 4, which means in each
sampling, one packet out of 16 packets (2 to the power of 4) is sampled.
[RouterA] sampler ns1 mode fixed packet-interval 4

# Enable NetStream sampling in the outbound direction of GigabitEthernet 3/0/1 and mirror the
sampled packets to the NetStream interface.
[RouterA] interface GigabitEthernet 3/0/1
[RouterA-GigabitEthernet3/0/1] ip netstream sampler ns1 outbound
[RouterA-GigabitEthernet3/0/1] ip netstream mirror-to interface Net-Stream 3/0/1 outbound
[RouterA-GigabitEthernet3/0/1] quit

# Enable IPv6 NetStream on Router A.


[RouterA] ipv6 netstream

# Configure to export NetStream data in version 9 format and specify the data to include the source AS,
destination AS, and BGP next hop.
[RouterA] ipv6 netstream export version 9 origin-as bgp-nexthop

# Configure the maximum entries that the cache can accommodate as 409600.
[RouterA] ipv6 netstream max-entry 409600

# In system view, configure the destination address for the NetStream traditional data export and the
source IP address.
[RouterA] ipv6 netstream export host 3.1.1.2 5000
[RouterA] ipv6 netstream export source interface GigabitEthernet 1/1/3

# Configure the aggregation mode as AS, and in aggregation view configure the destination address for
the NetStream AS aggregation data export.
[RouterA] ipv6 netstream aggregation as
[RouterA-ns6-aggregation-as] enable
[RouterA-ns6-aggregation-as] ipv6 netstream export host 3.1.1.2 2000
[RouterA-ns6-aggregation-as] quit

# Configure the aggregation mode as protocol-port, and in aggregation view configure the destination
address for the NetStream protocol-port aggregation data export.
[RouterA] ipv6 netstream aggregation protocol-port
[RouterA-ns6-aggregation-protport] enable
[RouterA-ns6-aggregation-protport] ipv6 netstream export host 3.1.1.2 3000
[RouterA-ns6-aggregation-protport] quit

# Configure the aggregation mode as source-prefix, and in aggregation view configure the destination
address for the NetStream source-prefix aggregation data export.
[RouterA] ipv6 netstream aggregation source-prefix
[RouterA-ns6-aggregation-srcpre] enable
[RouterA-ns6-aggregation-srcpre] ipv6 netstream export host 3.1.1.2 4000
[RouterA-ns6-aggregation-srcpre] quit

# Configure the aggregation mode as destination-prefix, and in aggregation view configure the
destination address for the NetStream destination-prefix aggregation data export.
[RouterA] ipv6 netstream aggregation destination-prefix
[RouterA-ns6-aggregation-dstpre] enable
[RouterA-ns6-aggregation-dstpre] ipv6 netstream export host 3.1.1.2 6000
[RouterA-ns6-aggregation-dstpre] quit

# Configure the aggregation mode as prefix, and in aggregation view configure the destination address
for the NetStream prefix aggregation data export.
[RouterA] ipv6 netstream aggregation prefix
[RouterA-ns6-aggregation-prefix] enable
[RouterA-ns6-aggregation-prefix] ipv6 netstream export host 3.1.1.2 7000
[RouterA-ns6-aggregation-prefix] quit

Configuring protocol packet statistics

Protocol packet statistics overview


The protocol packet statistics function collects statistics for protocol packets delivered to the CPU. You can query and analyze the statistics to determine whether the router is under attack and, if so, take proper measures. This helps enhance the attack prevention capability of the router.

Displaying and maintaining protocol packet statistics

Display protocol packet statistics (available in any view):
  display to-cpu-packet statistics { interface interface-type interface-number | [ protocol protocol-type ] slot slot-number } [ | { begin | exclude | include } regular-expression ]
Clear protocol packet statistics (available in user view):
  reset to-cpu-packet statistics slot slot-number

Configuring the information center

Information center overview


Introduction to information center
Acting as the system information hub, the information center classifies and manages system information, offering powerful support for network administrators and developers to monitor network performance and diagnose network problems.
The information center works as follows:
• Receives the log, trap, and debugging information generated by each module.
• Outputs the information to different information channels according to user-defined output rules.
• Outputs the information to different destinations based on the channel-to-destination associations.
To sum up, the information center assigns the log, trap, and debugging information to the ten information channels according to the eight severity levels, and then outputs the information to different destinations. The following describes the working process in detail.
Figure 60 Information center diagram (default): log, trap, and debugging information flows from the generating modules into information channels 0 through 9, and each channel outputs to its associated destination: channel 0 (console) to the console, channel 1 (monitor) to the monitor terminal, channel 2 (loghost) to the log host, channel 3 (trapbuffer) to the trap buffer, channel 4 (logbuffer) to the log buffer, channel 5 (snmpagent) to the SNMP module, and channel 9 (channel9) to the log file; channels 6 through 8 (channel6 through channel8) have no destination shown in the default diagram.
NOTE:
By default, the information center is enabled. An enabled information center affects system performance to some degree because of information classification and output. The impact becomes more obvious when a large amount of information is waiting to be processed.

Classification of system information


The system information of the information center falls into the following types:
• Log information
• Trap information
• Debugging information

Eight levels of system information


The information is classified into eight levels by severity. The severity levels, in descending order, are emergency, alert, critical, error, warning, notice, informational, and debug. When system information is output by level, information with a severity level higher than or equal to the specified level is output. For example, if you configure an output rule with the severity level informational, information with severity levels emergency through informational is output.
Table 6 Severity description (severity / severity value / description / corresponding keyword in commands)

Emergency      0   The system is unusable.              emergencies
Alert          1   Action must be taken immediately.    alerts
Critical       2   Critical conditions.                 critical
Error          3   Error conditions.                    errors
Warning        4   Warning conditions.                  warnings
Notice         5   Normal but significant condition.    notifications
Informational  6   Informational messages.              informational
Debug          7   Debug-level messages.                debugging

Eight output destinations and ten channels of system information
The system supports eight information output destinations: the console, monitor terminal (monitor), log buffer, log host, trap buffer, SNMP module, web interface (syslog), and log file.
The system supports ten channels. Channels 0 through 5 and channel 9 are configured with channel names and output rules and are associated with output destinations by default. The channel names, output rules, and channel-to-destination associations can be changed through commands. In addition, you can configure channels 6, 7, and 8 without changing the default configuration of the other channels.

Table 7 Information channels and output destinations (channel number / default channel name / default output destination / description)

0  console     Console           Receives log, trap, and debugging information.
1  monitor     Monitor terminal  Receives log, trap, and debugging information, facilitating remote maintenance.
2  loghost     Log host          Receives log, trap, and debugging information, which is stored in files for future retrieval.
3  trapbuffer  Trap buffer       Receives trap information; a buffer inside the router for recording information.
4  logbuffer   Log buffer        Receives log and debugging information; a buffer inside the router for recording information.
5  snmpagent   SNMP module       Receives trap information.
6  channel6    Web interface     Receives log, trap, and debugging information.
7  channel7    Not specified     Receives log, trap, and debugging information.
8  channel8    Not specified     Receives log, trap, and debugging information.
9  channel9    Log file          Receives log, trap, and debugging information.

NOTE:
Configurations for the eight output destinations function independently and take effect only when the
information center is enabled.

Outputting system information by source module


The system is composed of a variety of protocol modules, board drivers, and configuration modules. The
system information can be classified, filtered, and output according to source modules. You can use the
info-center source ? command to view the supported information source modules.

Default output rules of system information


The default output rules define, for each output destination, the source modules allowed to output information, the output information type, and the output information level, as shown in Table 8. By default, for all modules:
• All log information is allowed to be output to the web interface and log file; log information with
severity level equal to or higher than informational is allowed to be output to the log host; log
information with severity level equal to or higher than informational is allowed to be output to the
console, monitor terminal, and log buffer; log information is not allowed to be output to the trap
buffer and the SNMP module.
• All trap information is allowed to be output to the console, monitor terminal, log host, web interface,
and log file; trap information with severity level equal to or higher than informational is allowed to
be output to the trap buffer and SNMP module; trap information is not allowed to be output to the
log buffer.

• All debugging information is allowed to be output to the console and monitor terminal; debugging
information is not allowed to be output to the log host, trap buffer, log buffer, the SNMP module,
and log file.
Table 8 Default output rules for different output destinations (modules allowed: default, that is, all modules, for every destination; each cell gives enabled/disabled status and severity)

Output destination  LOG                       TRAP                     DEBUG
Console             Enabled / Informational   Enabled / Debug          Enabled / Debug
Monitor terminal    Enabled / Informational   Enabled / Debug          Enabled / Debug
Log host            Enabled / Informational   Enabled / Debug          Disabled / Debug
Trap buffer         Disabled / Informational  Enabled / Informational  Disabled / Debug
Log buffer          Enabled / Informational   Disabled / Debug         Disabled / Debug
SNMP module         Disabled / Debug          Enabled / Informational  Disabled / Debug
Web interface       Enabled / Debug           Enabled / Debug          Disabled / Debug
Log file            Enabled / Debug           Enabled / Debug          Disabled / Debug

System information format


The format of system information varies with the output destinations.
1. If the output destination is not the log host (such as console, monitor terminal, logbuffer, trapbuffer,
SNMP, or log file), the system information is in the following format:
timestamp sysname module/level/digest:content
For example, a monitor terminal connects to the router. When a terminal logs in to the router, the
log information in the following format is displayed on the monitor terminal:
%Jun 26 17:08:35:809 2008 Sysname SHELL/4/LOGIN: VTY login from 1.1.1.1
2. If the output destination is the log host, the system information is in the following formats:
{ H3C format
<PRI>timestamp sysname %%vvmodule/level/digest: source content
For example, if a log host is connected to the device, when a terminal logs in to the device, the
following log information is displayed on the log host:
<189>Oct 9 14:59:04 2009 MyDevice %%10SHELL/5/SHELL_LOGIN(l):VTY logged in from
192.168.1.21.
{ UNICOM format
<PRI>timestamp sysname vvmodule/level/serial_number: content
For example, if a log host is connected to the device, when a port of the device goes down, the
following log information is displayed on the log host:

174
<186>Oct 13 16:48:08 2000 H3C 10IFNET/2/210231a64jx073000020:
log_type=port;content=Vlan-interface1 link status is DOWN.
<186>Oct 13 16:48:08 2000 H3C 10IFNET/2/210231a64jx073000020:
log_type=port;content=Line protocol on the interface Vlan-interface1 is DOWN.

NOTE:
• The angle brackets (< >), the space, the forward slash (/), and the colon (:) are all required in the above formats.
• The formats above are the original formats of the system information; what you see may differ, because the displayed format depends on the log resolution tools you use.

What follows is a detailed explanation of the fields involved:

PRI (priority)
The priority is calculated using the following formula: facility*8+severity, in which facility represents the logging facility name and can be configured when you set the log host parameters. The facility ranges from local0 to local7 (16 to 23 in decimal) and defaults to local7. The facility is mainly used to mark different log sources on the log host and to query and filter the logs of each log source.
Severity ranges from 0 to 7. Table 6 details the value and meaning associated with each severity.
Note that the priority field takes effect only when the information is sent to the log host.
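As a worked example, the H3C-format log shown earlier carries the priority <189>, which decomposes as 189 = 23 x 8 + 5: the default facility local7 (23) and severity 5 (notice, matching the level field in SHELL/5/SHELL_LOGIN).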

timestamp
The timestamp records the time when the system information was generated, allowing you to check and identify system events. The timestamp of system information sent from the information center to the log host has a precision of seconds. The timestamp format of system information sent to the log host is configured with the info-center timestamp loghost command, and that of system information sent to the other destinations is configured with the info-center timestamp command. The following table describes the timestamp parameters:
Table 9 Description of the timestamp parameters

boot: System up time (the duration since the router booted), in the format xxxxxx.yyyyyy, where xxxxxx represents the higher 32 bits and yyyyyy represents the lower 32 bits. System information sent to all destinations except the log host supports this parameter.
  Example: %0.16406399 Sysname IFNET/3/LINK_UPDOWN: GigabitEthernet3/1/6 link status is DOWN.
  (0.16406399 is a timestamp in the boot format.)

date: Current date and time of the system, in the format Mmm dd hh:mm:ss:sss yyyy. System information sent to all destinations supports this parameter.
  Example: %Aug 19 16:11:03:288 2009 Sysname IFNET/3/LINK_UPDOWN: GigabitEthernet3/1/6 link status is UP.
  (Aug 19 16:11:03:288 2009 is a timestamp in the date format.)

iso: Timestamp format stipulated in ISO 8601. Only system information sent to a log host supports this parameter.
  Example: <187>2009-09-21T15:32:55 Sysname %%10IFNET/3/LINK_UPDOWN(l): GigabitEthernet3/1/6 link status is DOWN.
  (2009-09-21T15:32:55 is a timestamp in the iso format.)

none: No timestamp is included. System information sent to all destinations supports this parameter.
  Example: % Sysname IFNET/3/LINK_UPDOWN: GigabitEthernet3/1/6 link status is DOWN.
  (No timestamp is included.)

no-year-date: Current date and time of the system, with year information excluded. Only system information sent to a log host supports this parameter.
  Example: <187>Aug 19 16:12:38 Sysname %%10IFNET/3/LINK_UPDOWN(l): GigabitEthernet3/1/6 link status is DOWN.
  (Aug 19 16:12:38 is a timestamp in the no-year-date format.)

Sysname (host name or host IP address)


• If the system information is sent to a log host in the format of UNICOM, and the info-center loghost
source command is configured, or vpn-instance vpn-instance-name is provided in the info-center
loghost command, the field is displayed as the IP address of the device that generates the system
information.
• In other cases (when the system information is sent to a log host in the format of H3C, or sent to
other destinations), the field is displayed as the name of the device that generates the system
information, namely, the system name of the device. You can use the sysname command to modify
the system name. For more information, see Fundamentals Command Reference.

%% (vendor ID)
This field indicates that the information is generated by an H3C device. It is displayed only when the
system information is sent to a log host in the format of H3C.

vv
This field is a version identifier of syslog, with a value of 10. It is displayed only when the output
destination is log host.

module
The module field represents the name of the module that generates system information. You can enter the
info-center source ? command in system view to view the module list.

level (severity)
System information is divided into eight levels based on severity, from 0 to 7. See Table 6 for the definition and description of these severity levels. The levels of the system information generated by modules are predefined by developers and cannot be changed. However, with the info-center source command, you can configure the system to output information at or above a specified severity level and filter out lower-level information.

digest
The digest field is a string of up to 32 characters, outlining the system information.
For system information destined to the log host:
• If the character string ends with (l), the information is log information
• If the character string ends with (t), the information is trap information
• If the character string ends with (d), the information is debugging information
For system information destined to other destinations:
• If the timestamp starts with a percent sign (%), the information is log information
• If the timestamp starts with a pound sign (#), the information is trap information
• If the timestamp starts with an asterisk (*), the information is debugging information

serial number
This field indicates the serial number of the device that generates the system information. It is displayed
only when the system information is sent to a log host in the format of UNICOM.

source
This field indicates the source of the information, such as the slot number of a board or the source IP
address of the log sender. This field is optional and is displayed only when the system information is sent
to a log host in the format of H3C.

content
This field provides the content of the system information.
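Putting the fields together, the H3C-format example shown earlier parses as follows: <189> is the PRI; Oct 9 14:59:04 2009 is the timestamp; MyDevice is the sysname; %% is the vendor ID; 10 is the vv field; SHELL is the module; 5 is the level; SHELL_LOGIN(l) is the digest (the (l) suffix marks log information); and VTY logged in from 192.168.1.21 is the content.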

Configuring information center


Information center configuration task list
Complete the following tasks to configure information center:

Task Remarks
Outputting system information to the console Optional

Outputting system information to a monitor terminal Optional

Outputting system information to a log host Optional

Outputting system information to the trap buffer Optional

Outputting system information to the log buffer Optional

Outputting system information to the SNMP module Optional

Outputting system information to the web interface Optional

Saving system information to a log file Optional

Configuring synchronous information output Optional

Disabling a port from generating link up/down logging information Optional

Outputting system information to the console

To output system information to the console:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure the channel through which system information can be output to the console.
   Command: info-center console channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the console through channel 0 (known as console).
5. Configure the output rules of system information.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the format of the time stamp.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The time stamp format for log, trap, and debugging information is date by default.

Enabling the display of system information on the console

After configuring the output of system information to the console, enable the associated display functions to display the output information on the console.

Follow these steps in user view to enable the display of system information on the console:

1. Enable the monitoring of system information on the console.
   Command: terminal monitor
   Remarks: Optional. Enabled on the console and disabled on the monitor terminal by default.
2. Enable the display of debugging information on the console.
   Command: terminal debugging
   Remarks: Disabled by default.
3. Enable the display of log information on the console.
   Command: terminal logging
   Remarks: Optional. Enabled by default.
4. Enable the display of trap information on the console.
   Command: terminal trapping
   Remarks: Optional. Enabled by default.
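For example, the following is a minimal sketch: output log information of severity informational or higher from all modules to the console through the default channel, and turn on monitoring and log display in user view.
<Sysname> system-view
[Sysname] info-center enable
[Sysname] info-center source default channel console log level informational state on
[Sysname] quit
<Sysname> terminal monitor
<Sysname> terminal logging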

Outputting system information to a monitor terminal
System information can also be output to a monitor terminal, which is a user terminal that has login
connections through the AUX, VTY, or TTY user interface.

Outputting system information to a monitor terminal

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure the channel through which system information can be output to a monitor terminal.
   Command: info-center monitor channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the monitor terminal through channel 1 (known as monitor).
5. Configure the output rules of the system information.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the format of the time stamp.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. By default, the time stamp format for log, trap, and debugging information is date.

Enabling the display of system information on a monitor terminal

After configuring the output of system information to a monitor terminal, enable the associated display functions to display the output information on the monitor terminal.

To enable the display of system information on a monitor terminal:

1. Enable the monitoring of system information on a monitor terminal.
   Command: terminal monitor
   Remarks: Enabled on the console and disabled on the monitor terminal by default.
2. Enable the display of debugging information on a monitor terminal.
   Command: terminal debugging
   Remarks: Disabled by default.
3. Enable the display of log information on a monitor terminal.
   Command: terminal logging
   Remarks: Optional. Enabled by default.
4. Enable the display of trap information on a monitor terminal.
   Command: terminal trapping
   Remarks: Optional. Enabled by default.

Outputting system information to a log host

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure the output rules of the system information.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
5. Specify the source IP address for the log information.
   Command: info-center loghost source interface-type interface-number
   Remarks: Optional. By default, the source interface is determined by the matched route, and the primary IP address of this interface is the source IP address of the log information.
6. Configure the format of the time stamp for system information output to the log host.
   Command: info-center timestamp loghost { date | iso | no-year-date | none }
   Remarks: Optional. date by default.
7. Set the format of the system information sent to a log host to UNICOM.
   Command: info-center format unicom
   Remarks: Optional. H3C by default.
8. Specify a log host and configure the related output parameters.
   Command: info-center loghost [ vpn-instance vpn-instance-name ] { host-ipv4-address | ipv6 host-ipv6-address } [ port port-number ] [ channel { channel-number | channel-name } | facility local-number ] *
   Remarks: By default, the system does not output information to a log host. If you specify to output system information to a log host, the system uses channel 2 (loghost) by default. The value of the port-number argument must be the same as the value configured on the log host; otherwise, the log host cannot receive system information.

Outputting system information to the trap buffer

NOTE:
The trap buffer receives trap information only, and discards log and debugging information even if you have configured them to be output to the trap buffer.

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure the channel through which system information can be output to the trap buffer, and specify the buffer size.
   Command: info-center trapbuffer [ channel { channel-number | channel-name } | size buffersize ] *
   Remarks: Optional. By default, system information is output to the trap buffer through channel 3 (known as trapbuffer), and the default buffer size is 256.
5. Configure the output rules of the system information.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the format of the time stamp.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The time stamp format for log, trap, and debugging information is date by default.

Outputting system information to the log buffer

NOTE:
You can configure log, trap, and debugging information to be output to the log buffer, but the log buffer receives log and debugging information only, and discards trap information.

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure the channel through which system information can be output to the log buffer, and specify the buffer size.
   Command: info-center logbuffer [ channel { channel-number | channel-name } | size buffersize ] *
   Remarks: Optional. By default, system information is output to the log buffer through channel 4 (known as logbuffer), and the default buffer size is 512.
5. Configure the output rules of the system information.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the format of the timestamp.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The time stamp format for log, trap, and debugging information is date by default.

Outputting system information to the SNMP module

NOTE:
The SNMP module receives trap information only, and discards log and debugging information even if you have configured them to be output to the SNMP module.

To monitor the device running status, trap information is usually sent to the SNMP network management station (NMS). In this case, configure the system to send traps to the SNMP module, and then set the trap sending parameters for the SNMP module to further process the traps. For more information, see the chapter "SNMP configuration."
To configure the output of system information to the SNMP module:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure the channel through which system information can be output to the SNMP module.
   Command: info-center snmp channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the SNMP module through channel 5 (known as snmpagent).
5. Configure the output rules of the system information.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the format of the timestamp.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The time stamp format for log, trap, and debugging information is date by default.

Outputting system information to the web interface

This feature allows you to control whether to output system information to the web interface and which system information can be output there. The web interface provides abundant search and sorting functions, so if you output system information to the web interface, you can view it by clicking the corresponding tabs after logging in to the device through the web interface.
To configure the output of system information to the web interface:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure the channel through which system information can be output to the web interface.
   Command: info-center syslog channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the web interface through channel 6.
5. Configure the output rules of the system information.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the format of the time stamp.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The time stamp format for log, trap, and debugging information is date by default.

NOTE:
You can configure log, trap, and debugging information to be output to a channel. However, when that channel is bound to the web interface output destination, you can view only specific types of log information after logging in through the web interface; other types of information are filtered out.

Saving system information to a log file
With the log file feature enabled, the log information generated by the system can be saved to a specified directory at a predefined frequency. This allows you to check the operation history at any time to make sure that the router functions properly.
Logs are saved in the logfile buffer before they are saved to a log file. The system writes the logs in the logfile buffer to the log file at a specified frequency, which is usually set to 24 hours so that the write happens at a relatively idle time, such as early morning. You can also save the logs manually. After the logs in the logfile buffer are successfully saved to the log file, the system clears the logfile buffer.
A log file has capacity limitations. When the size of a log file reaches the maximum value:
• If the router supports only a single log file, the system deletes the earliest messages and writes new messages into the log file. The directory of a log file varies with router models. Typically, a log file is saved in the directory /logfile/logfile.log.
• If the device supports multiple log files, the system creates new log files to save new messages. The new log files are named logfile1.log, logfile2.log, and so on. When the number of log files reaches the upper limit, or the storage medium has no space available, the system deletes the earliest log file and creates a new one.
To save system information to a log file:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Enable the log file feature.
   Command: info-center logfile enable
   Remarks: Optional. Enabled by default.
4. Configure the frequency with which the log file is saved.
   Command: info-center logfile frequency freq-sec
   Remarks: Optional. The default value is 60 seconds.
5. Configure the maximum storage space reserved for a log file.
   Command: info-center logfile size-quota size
   Remarks: Optional. The default value is 10 MB.
6. Configure the directory to save the log file.
   Command: info-center logfile switch-directory dir-name
   Remarks: Optional. By default, it is the logfile directory under the root directory of the storage device, which varies with routers.
7. Manually save the log buffer content to the log file.
   Command: logfile save
   Remarks: Optional. Available in any view. By default, the system saves the log file at the frequency defined by the info-center logfile frequency command.

NOTE:
• To make sure that the router works normally, use the info-center logfile size-quota command to set the log file size to no smaller than 1 MB and no larger than 10 MB.
• Use the info-center logfile switch-directory command to manually configure the directory to which a log file is saved. The configuration becomes invalid at a system reboot or an active/standby switchover.
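For example, the following is a minimal sketch (the values are illustrative and within the 1 MB to 10 MB limit noted above): save the logfile buffer to the log file every 600 seconds, cap the log file at 5 MB, and trigger one manual save.
<Sysname> system-view
[Sysname] info-center logfile enable
[Sysname] info-center logfile frequency 600
[Sysname] info-center logfile size-quota 5
[Sysname] quit
<Sysname> logfile save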

Configuring synchronous information output

With synchronous information output, if your input is interrupted by system output such as log, trap, or debugging information, the system displays a command line prompt (a prompt in command editing mode, or a [Y/N] string in interaction mode) and your input so far after the system output is complete.
This feature is useful when your input is interrupted by a large amount of system output. With it enabled, you can continue your operations from where you were stopped.
To enable synchronous information output:

Step Command Remarks


1. Enter system view. system-view N/A
2. Enable synchronous
info-center synchronous Disabled by default.
information output.

NOTE:
• If system information, such as log information, is output before you input any information at the current command line prompt, the system does not display the command line prompt after the system information output.
• If system information is output while you are inputting interactive information (other than Y/N confirmation information), the system does not display the command line prompt after the output, but displays your previous input on a new line.

Disabling a port from generating link up/down logging information

By default, all ports of the router generate link up/down logging information when the port state changes. You may need to disable this function in some cases, for example:
• You are concerned with the states of only some ports. In this case, you can disable the other ports from generating link up/down logging information.
• The state of a port is unstable, so redundant logging information would be generated. In this case, you can disable the port from generating link up/down logging information.
To disable a port from generating link up/down logging information:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enter interface view.
   Command: interface interface-type interface-number
   Remarks: N/A
3. Disable the port from generating link up/down logging information.
   Command: undo enable log updown
   Remarks: By default, all ports are allowed to generate link up/down logging information when the port state changes.

NOTE:
With this feature applied to a port, the system does not generate link up/down logging information when the state of the port changes, so you cannot conveniently monitor the port state changes. H3C recommends that you use the default configuration in normal cases.
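For example, the following is a minimal sketch (the interface number is illustrative): stop GigabitEthernet 3/1/6 from generating link up/down logs.
<Sysname> system-view
[Sysname] interface GigabitEthernet 3/1/6
[Sysname-GigabitEthernet3/1/6] undo enable log updown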

Displaying and maintaining information center


Display information about information channels (available in any view):
  display channel [ channel-number | channel-name ] [ | { begin | exclude | include } regular-expression ]
Display the information of each output destination (available in any view):
  display info-center [ | { begin | exclude | include } regular-expression ]
Display the state of the log buffer and the recorded log information (available in any view):
  display logbuffer [ reverse ] [ level severity | size buffersize | slot slot-number ] * [ | { begin | exclude | include } regular-expression ]
Display a summary of the log buffer (available in any view):
  display logbuffer summary [ level severity | slot slot-number ] * [ | { begin | exclude | include } regular-expression ]
Display the content of the log file buffer (available in any view):
  display logfile buffer [ | { begin | exclude | include } regular-expression ]
Display the configuration of the log file (available in any view):
  display logfile summary [ | { begin | exclude | include } regular-expression ]
Display the state of the trap buffer and the recorded trap information (available in any view):
  display trapbuffer [ reverse ] [ size buffersize ] [ | { begin | exclude | include } regular-expression ]
Reset the log buffer (available in user view):
  reset logbuffer
Reset the trap buffer (available in user view):
  reset trapbuffer

Information center configuration examples


Outputting log information to a Unix log host
Network requirements
• Send log information to a Unix log host at 1.2.0.1/16.
• Output log information with a severity level equal to or higher than informational to the log host.
• The source modules are ARP and IP.
Figure 61 Network diagram

Configuration procedure
Before the configuration, make sure that there is a route between Device and PC.
1. Configure Device
# Enable information center.
<Sysname> system-view
[Sysname] info-center enable
# Specify the host with IP address 1.2.0.1/16 as the log host, use channel loghost to output log
information (optional, loghost by default), and use local4 as the logging facility.
[Sysname] info-center loghost 1.2.0.1 channel loghost facility local4
# Disable the output of log, trap, and debugging information of all modules to the log host.
[Sysname] info-center source default channel loghost debug state off log state off
trap state off

CAUTION:
As the default system configurations for different channels are different, you need to disable the output of
log, trap, and debugging information of all modules on the specified channel (loghost in this example)
first and then configure the output rule as needed so that unnecessary information will not be output.

# Configure the information output rule: allow log information of ARP and IP modules with severity
equal to or higher than informational to be output to the log host. (Note that the source modules
allowed to output information depend on the router model.)
[Sysname] info-center source arp channel loghost log level informational state on
[Sysname] info-center source ip channel loghost log level informational state on
2. Configure the log host
The following configuration was performed on SunOS 4.0, and the configuration on Unix operating systems from other vendors is similar.
a. Log in to the log host as a root user.
b. Create a subdirectory named Device under directory /var/log/, and create file info.log under
the Device directory to save logs of Device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit file /etc/syslog.conf and add the following contents.
# MyDevice configuration messages
local4.info /var/log/Device/info.log
In the above configuration, local4 is the name of the logging facility used by the log host to
receive logs. info is the information level. The Unix system will record the log information with
severity level equal to or higher than informational to file /var/log/Device/info.log.

NOTE:
Be aware of the following issues while editing the /etc/syslog.conf file:
• Comments must be on a separate line and must begin with the # sign.
• No redundant spaces are allowed in the file name.
• The logging facility name and the information level specified in the /etc/syslog.conf file must be
identical to those configured on the device using the info-center loghost and info-center source
commands; otherwise the log information may not be output properly to the log host.

d. After log file info.log is created and file /etc/syslog.conf is modified, issue the following
commands to display the process ID of syslogd, kill the syslogd process, and then restart
syslogd using the -r option so that the modified configuration takes effect.
# ps -ae | grep syslogd
147
# kill -HUP 147
# syslogd -r &

After the above configuration, the system saves the log information to the specified file.

Outputting log information to a Linux log host


Network requirements
• Send log information to a Linux log host with an IP address of 1.2.0.1/16.
• Log information with a severity equal to or higher than informational will be output to the log host.
• All modules can output log information.
Figure 62 Network diagram

Configuration procedure
Before the configuration, make sure that there is a route between Device and PC.
1. Configure Device
# Enable information center.
<Sysname> system-view
[Sysname] info-center enable
# Specify the host with IP address 1.2.0.1/16 as the log host, use channel loghost to output log
information (optional, loghost by default), and use local5 as the logging facility.
[Sysname] info-center loghost 1.2.0.1 channel loghost facility local5
# Disable the output of log, trap, and debugging information of all modules on channel loghost.
[Sysname] info-center source default channel loghost debug state off log state off
trap state off

CAUTION:
Because the default output rules vary with channels, first disable the output of log, trap, and debugging information of all modules on the specified channel (loghost in this example), and then configure the output rule as needed so that unnecessary information is not output.

# Configure the information output rule: allow log information of all modules with severity equal to
or higher than informational to be output to the log host.
[Sysname] info-center source default channel loghost log level informational state
on
2. Configure the log host
a. Log in to the log host as a root user.
b. Create a subdirectory named Device under directory /var/log/, and create file info.log under
the Device directory to save logs of Device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit file /etc/syslog.conf and add the following contents.
# Device configuration messages
local5.info /var/log/Device/info.log
In the above configuration, local5 is the name of the logging facility used by the log host to
receive logs. info is the information level. The Linux system will record the log information with
severity level equal to or higher than informational to file /var/log/Device/info.log.

NOTE:
Be aware of the following issues while editing the /etc/syslog.conf file:
• Comments must be on a separate line and must begin with the # sign.
• No redundant spaces are allowed in the file name.
• The logging facility name and the information level specified in the /etc/syslog.conf file must be
identical to those configured on the router using the info-center loghost and info-center source
commands; otherwise, the log information may not be output properly to the log host.

d. After log file info.log is created and file /etc/syslog.conf is modified, you need to issue the
following commands to display the process ID of syslogd, kill the syslogd process, and restart
syslogd using the -r option to make the modified configuration take effect.
# ps -ae | grep syslogd
147
# kill -9 147
# syslogd -r &

NOTE:
Make sure that the syslogd process is started with the -r option on a Linux log host.

After the above configuration, the system saves the log information to the specified file.

Outputting log information to the console
Network requirements
• Log information with a severity equal to or higher than informational will be output to the console.
• The source modules are ARP and IP.
Figure 63 Network diagram

Configuration procedure
# Enable information center.
<Sysname> system-view
[Sysname] info-center enable

# Use channel console to output log information to the console (optional, console by default).
[Sysname] info-center console channel console

# Disable the output of log, trap, and debugging information of all modules on channel console.
[Sysname] info-center source default channel console debug state off log state off trap
state off

CAUTION:
Because the default output rules vary with channels, first disable the output of log, trap, and debugging information of all modules on the specified channel (console in this example), and then configure the output rule as needed so that unnecessary information is not output.

# Configure the information output rule: allow log information of ARP and IP modules with severity equal
to or higher than informational to be output to the console. (Note that the source modules allowed to
output information depend on the router model.)
[Sysname] info-center source arp channel console log level informational state on
[Sysname] info-center source ip channel console log level informational state on
[Sysname] quit

# Enable the display of log information on a terminal. (Optional, this function is enabled by default.)
<Sysname> terminal monitor
Info: Current terminal monitor is on.
<Sysname> terminal logging
Info: Current terminal logging is on.

After the above configuration takes effect, if the specified module generates log information, the
information center automatically sends the log information to the console, which then displays the
information.

Flow logging configuration

Flow logging overview


Introduction to flow logging
Flow logging records users' access to the external network. The device classifies packets into flows based on the 5-tuple information, which includes source IP address, destination IP address, source port, destination port, and protocol number, counts each flow, and generates user flow logs. A flow log records the 5-tuple information of the packets and the numbers of bytes received and sent. With flow logs, administrators can track and audit access to the network, which helps improve network availability and security.
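The following minimal sketch, written in Python for this guide, illustrates the idea of 5-tuple flow classification and accounting. It is an illustration only, not the device's implementation, and all names in it are hypothetical.

# Illustrative 5-tuple flow accounting; not the device's implementation.
from collections import defaultdict

# Packets that share the same 5-tuple belong to the same flow.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def account(src_ip, dst_ip, src_port, dst_port, protocol, length):
    key = (src_ip, dst_ip, src_port, dst_port, protocol)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += length

# Two packets of one TCP flow (protocol 6) and one DNS packet (protocol 17).
account("10.0.0.1", "1.2.0.1", 1024, 80, 6, 1500)
account("10.0.0.1", "1.2.0.1", 1024, 80, 6, 400)
account("10.0.0.1", "1.2.0.1", 5000, 53, 17, 80)

for key, stats in flows.items():
    print(key, stats)  # two flows: one with 2 packets/1900 bytes, one with 1/80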

Flow logging versions


Flow logging has two versions: version 1.0 and version 3.0. Their packet formats differ slightly, as described in the following two tables.
Table 10 UDP packet format in flow logging version 1.0

Field Description
SIP Source IP address

DIP Destination IP address

SPORT TCP/UDP source port number

DPORT TCP/UDP destination port number

STIME Start time of a flow, in seconds, counted from 1970/01/01 00:00

ETIME End time of a flow, in seconds, counted from 1970/01/01 00:00

PROT Protocol carried over IP

OPERATOR Indicates the reason why a flow ended

RESERVED For future applications

Table 11 Packet format in flow logging version 3.0

Field Description
Prot Protocol carried over IP

Operator Indicates the reason why a flow ended

IpVersion IP packet version

TosIPv4 ToS field of the IPv4 packet

SourceIP Source IP address

SrcNatIP Source IP address after Network Address Translation (NAT)

DestIP Destination IP address

DestNatIP Destination IP address after NAT

SrcPort TCP/UDP source port number

SrcNatPort TCP/UDP source port number after NAT

DestPort TCP/UDP destination port number

DestNatPort TCP/UDP destination port number after NAT

StartTime Start time of a flow, in seconds, counted from 1970/01/01 00:00

EndTime End time of a flow, in seconds, counted from 1970/01/01 00:00

InTotalPkg Number of packets received

InTotalByte Number of bytes received

OutTotalPkg Number of packets sent

OutTotalByte Number of bytes sent

Reserved1 Reserved in version 0x02 (FirewallV200R001). In version 0x03 (FirewallV200R005), the first byte is the source VPN ID, the second byte is the destination VPN ID, and the third and fourth bytes are reserved

Reserved2 For future applications

Reserved3 For future applications
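The tables above define the order of the fields but not their widths, so a receiver must know the exact record layout to decode flow logs. The following Python sketch decodes a version 1.0 record under assumed field widths (4-byte IP addresses and timestamps, 2-byte ports, 1-byte PROT and OPERATOR, 2-byte RESERVED, network byte order); these widths are assumptions for illustration and must be verified against the actual packet specification.

# Hypothetical decoder for a version 1.0 flow log record. The field
# order follows Table 10; the field widths and byte order are assumed.
import socket
import struct
from datetime import datetime, timezone

V1_FORMAT = ">4s4sHHIIBB2s"           # SIP DIP SPORT DPORT STIME ETIME PROT OPERATOR RESERVED
V1_SIZE = struct.calcsize(V1_FORMAT)  # 24 bytes under these assumptions

def decode_v1(record):
    (sip, dip, sport, dport, stime, etime,
     prot, operator, _reserved) = struct.unpack(V1_FORMAT, record[:V1_SIZE])
    return {
        "sip": socket.inet_ntoa(sip),
        "dip": socket.inet_ntoa(dip),
        "sport": sport,
        "dport": dport,
        # STIME/ETIME are seconds counted from 1970/01/01 00:00.
        "start": datetime.fromtimestamp(stime, tz=timezone.utc),
        "end": datetime.fromtimestamp(etime, tz=timezone.utc),
        "prot": prot,
        "operator": operator,
    }

A version 3.0 decoder would follow the same pattern, with the additional NAT, packet-count, and byte-count fields of Table 11.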

Flow logging configuration task list


Complete the following tasks to configure flow logging:

Task    Remarks

Configuring flow logging version    Optional

Configuring the source address for flow logging packets    Optional

Exporting flow logs: Exporting flow logs to log server, or Exporting flow logs to information center    Required (use either approach)

Configuring flow logging version


Configure the flow logging version according to the capability of the receiver. A receiver cannot parse flow logs correctly if it does not support the configured flow logging version.
To configure flow logging version:

Step    Command    Remarks

1. Enter system view.    system-view    N/A

2. Configure the flow logging version.    userlog flow export version version-number    Optional. The default flow logging version is 1.0.

NOTE:
Although the router supports both versions, only one can be active at a time. If you configure the flow logging version multiple times, the most recent configuration takes effect.

Configuring the source address for flow logging packets
A source IP address is usually used to uniquely identify the sender of a packet. If a source IP address is specified, when Device A, for example, sends flow logs to Device B, it uses the specified IP address instead of the address of the actual egress interface as the source IP address of the packets. Therefore, even if Device A sends packets to Device B through different ports, Device B can determine whether the packets come from Device A by checking their source IP addresses. This function also simplifies ACL and security policy configuration: if you specify this address as the source or destination address in the rule command of an ACL, the rule matches flow logging packets regardless of which egress interface is used or whether the interface status changes.
To configure the source address for flow logging packets:

Step    Command    Remarks

1. Enter system view.    system-view    N/A

2. Specify the source IP address of flow logging packets.    userlog flow export source-ip ip-address    Optional. By default, the source IP address of flow logging packets is the IP address of the egress interface of the packets.

Exporting flow logs


Flow logs can be exported in either of the following ways:
• Flow logs are encapsulated into UDP packets and sent to a log server on the network, as shown
in Figure 64. The log server analyzes the flow logs and displays them by class, enabling remote
monitoring.
• Flow logs are exported, in the format of system information, to the information center of the router.
You can set the output destinations of the flow logs by setting the output parameters of the system
information. For more information about the information center, see the chapter “Information center
configuration.”

NOTE:
The two export approaches are mutually exclusive. If you configure both approaches simultaneously, the system exports the flow logs only to the information center.
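On the receiving side, a log server only needs to listen on the UDP port configured on the router. The following Python sketch is a minimal receiver, not a complete log server; the port number (2000) and the expected source address (2.2.2.2) are taken from the configuration example later in this chapter and must be adjusted to your own configuration. It also shows how a fixed source IP address (see “Configuring the source address for flow logging packets”) lets the server filter flow log packets regardless of the egress interface the router uses.

# Minimal flow log receiver sketch; port and source address are examples.
import socket

LISTEN_PORT = 2000            # must match the udp-port configured on the router
EXPECTED_SOURCE = "2.2.2.2"   # matches userlog flow export source-ip, if configured

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", LISTEN_PORT))

while True:
    data, (addr, port) = sock.recvfrom(65535)
    if addr != EXPECTED_SOURCE:
        continue              # ignore datagrams from unexpected senders
    print("received %d bytes of flow log data from %s:%d" % (len(data), addr, port))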

Exporting flow logs to log server


Exporting flow logs to an IPv4 log server
To export flow logs to an IPv4 log server:

Step    Command    Remarks

1. Enter system view.    system-view    N/A

2. Configure the IPv4 address and UDP port number of the log server.    userlog flow export slot slot-number [ vpn-instance vpn-instance-name ] host ipv4-address udp-port    Not configured by default.

Exporting flow logs to an IPv6 log server


To export flow logs to an IPv6 log server:

Step    Command    Remarks

1. Enter system view.    system-view    N/A

2. Configure the IPv6 address and UDP port number of the log server.    userlog flow export slot slot-number host ipv6 ipv6-address udp-port    Not configured by default.

NOTE:
You must configure the flow logging server for each card separately. For each card, you can specify at most two log servers out of the three types of log servers (flow logging server in a VPN, IPv4 flow logging server, and IPv6 flow logging server) to receive flow logs. If you specify two log servers, they can be of the same type or of different types. If two servers are already specified for a card, you must delete an existing one before specifying a new one. If a new configuration specifies the same IP address as the currently effective configuration but differs in other parameters, the new configuration overwrites the previous one.

Exporting flow logs to information center


To export flow logs to information center:

Step    Command    Remarks

1. Enter system view.    system-view    N/A

2. Export flow logs to the information center.    userlog flow syslog    By default, flow logs are exported to the log server.

NOTE:
• Exporting flow logs to the information center takes up storage space on the router, so adopt this export
approach only when the amount of logs is small.
• When flow logs are exported to the information center, the severity level of the logs is informational;
that is, they are general messages of the router.

Displaying and maintaining flow logging

Task    Command    Remarks

Display the configuration and statistics about flow logging.    display userlog export slot slot-number [ | { begin | exclude | include } regular-expression ]    Available in any view

Clear statistics of all logs.    reset userlog flow export slot slot-number    Available in user view

Clear flow logs in the cache.    reset userlog flow logbuffer slot slot-number    Available in user view

CAUTION:
Clearing flow logs in the cache causes the loss of log information. H3C recommends that you do not clear the cache unless it is necessary.

Flow logging configuration example


Network requirements
As shown in Figure 64, Log server is used to monitor User’s access to the network.
Figure 64 Network diagram

Configuration procedure
Configure Device:
# Set the flow logging version to 3.0.
<Sysname> system-view
[Sysname] userlog flow export version 3

# Export flow logs of the interface board in slot 2 to the log server at IP address 1.2.3.6, UDP port 2000.
[Sysname] userlog flow export slot 2 host 1.2.3.6 2000

# Configure the source IP address of UDP packets carrying flow logs as 2.2.2.2.
[Sysname] userlog flow export source-ip 2.2.2.2

Configuration verification
# Display the configuration and statistics about flow logs of the board in slot 2.
<Sysname> display userlog export slot 2
nat:

No userlog export is enabled

flow:
Export Version 3 logs to log server : enabled
Source address of exported logs : 2.2.2.2
Address of log server : 1.2.3.6 (port: 2000)
total Logs/UDP packets exported : 128/91
Logs in buffer : 10

Troubleshooting flow logging


Symptom 1: No flow logs are exported
• Analysis: Neither export approach is configured.
• Solution: Configure the device to export flow logs to the information center or to the log server.

Symptom 2: Flow logs cannot be exported to the log server

• Analysis: Both export approaches are configured. When both are configured, the system exports flow
logs only to the information center.
• Solution: Restore the default, and then configure the IP address and UDP port number of the log
server.

Index

C
Clock monitoring module configuration example,89
Clock monitoring module configuration task list,85
Clock monitoring module overview,83
Configuring access-control rights,62
Configuring an NQA test group,13
Configuring attributes of IPv6 NetStream data export,158
Configuring attributes of NetStream export data,142
Configuring flow logging version,192
Configuring information center,177
Configuring IPv6 NetStream,157
Configuring IPv6 NetStream data export,161
Configuring IPv6 NetStream flow aging,160
Configuring local port mirroring,122
Configuring NetStream,139
Configuring NetStream data export,145
Configuring NetStream flow aging,144
Configuring NetStream sampling,141
Configuring NTP authentication,63
Configuring optional parameters for an NQA test group,29
Configuring optional parameters of NTP,61
Configuring reference source priority,86
Configuring remote port mirroring,123
Configuring SNMP basic parameters,95
Configuring SNMP logging,98
Configuring SNMP traps,98
Configuring SSM for reference sources,86
Configuring the collaboration function,26
Configuring the history records saving function,29
Configuring the local clock as a reference source,61
Configuring the NQA server,12
Configuring the NQA statistics collection function,28
Configuring the operation modes of NTP,57
Configuring the RMON alarm function,110
Configuring the RMON statistics function,109
Configuring the source address for flow logging packets,193
Configuring threshold monitoring,26
Configuring traffic mirroring,130
Configuring working mode of the clock monitoring module of the SRPU,85
Creating a sampler,117
Creating an NQA test group,13

D
Displaying and maintaining flow logging,194
Displaying and maintaining information center,186
Displaying and maintaining IPC,93
Displaying and maintaining IPv6 NetStream,163
Displaying and maintaining MIB style,106
Displaying and maintaining NetStream,147
Displaying and maintaining NQA,31
Displaying and maintaining NTP,68
Displaying and maintaining port mirroring,125
Displaying and maintaining protocol packet statistics,170
Displaying and maintaining RMON,111
Displaying and maintaining sampler,118
Displaying and maintaining SNMP,100
Displaying and maintaining the clock monitoring module,88
Displaying and maintaining traffic mirroring,131

E
Enabling IPC performance statistics,92
Enabling the NQA client,12
Exporting flow logs,193

F
Flow logging configuration example,195
Flow logging configuration task list,192
Flow logging overview,191

I
Information center configuration examples,186
Information center overview,171
Introduction,120
IPC overview,91
IPv6 NetStream basic concepts,153
IPv6 NetStream configuration examples,163
IPv6 NetStream configuration task list,156
IPv6 NetStream key technologies,154
IPv6 NetStream overview,153

N
NetStream basic concepts,133
NetStream configuration examples,147
NetStream configuration task list,138
NetStream key technologies,134
NetStream overview,133
NetStream sampling,138
NQA configuration examples,32
NQA configuration task list,11
NQA overview,8
NTP configuration examples,69
NTP configuration task list,57
NTP overview,52

O
Overview,106

P
Ping,1
Ping and tracert configuration example,6
Port mirroring configuration examples,126
Protocol packet statistics overview,170

R
RMON configuration examples,112
RMON overview,107

S
Sampler configuration examples,118
Sampler overview,117
Scheduling an NQA test group,30
Setting the input port of the line clock (LPU port),87
Setting the MIB style,106
SNMP configuration examples,101
SNMP overview,94
System debugging,5

T
Tracert,3
Traffic mirroring configuration example,131
Traffic mirroring overview,130
Troubleshooting flow logging,196
