
Notice

Legal notices
Publication Date
June 2023

Copyright
Copyright © 2013-2023, Nozomi Networks. All rights reserved.
Nozomi Networks believes the information it furnishes to be
accurate and reliable. However, Nozomi Networks assumes no
responsibility for the use of this information, nor any infringement of
patents or other rights of third parties which may result from its use.
No license is granted by implication or otherwise under any patent,
copyright, or other intellectual property right of Nozomi Networks
except as specifically described by applicable user licenses. Nozomi
Networks reserves the right to change specifications at any time
without notice.

Third Party Software


Nozomi Networks uses third-party software whose usage is governed by the applicable license agreements from each of the software vendors. Additional details about the third-party software used can be found at https://security.nozominetworks.com/licenses.

Table of Contents

Legal notices.......................................................................................... iii

Chapter 1: Preliminaries.........................................................................9
Prepare a Safe and Secure Environment...................................................................................10

Chapter 2: Installation.......................................................................... 11
Installing a physical sensor......................................................................................................... 12
Installing on a Virtual Machine (VM)...........................................................................................12
Installing the container................................................................................................................ 15
Set up phase 1 (basic configuration).......................................................................................... 18
Set up phase 2 (web interface configuration)............................................................................. 20
Additional settings....................................................................................................................... 23

Chapter 3: Users................................................................................... 31
Introduction.................................................................................................................................. 32
Managing users........................................................................................................................... 34
Managing user groups................................................................................................................ 37
Password management and policies.......................................................................................... 39
Active Directory users................................................................................................................. 44
LDAP users................................................................................................................................. 45
SAML integration......................................................................................................................... 48
OpenAPI keys..............................................................................................................................50

Chapter 4: Basics..................................................................................53
Environment................................................................................................................................. 54
Asset............................................................................................................................................ 54
Node............................................................................................................................................ 54
Session........................................................................................................................................ 55
Link.............................................................................................................................................. 55
Variable........................................................................................................................................ 56
Vulnerability................................................................................................................................. 56
Query........................................................................................................................................... 56
Protocol........................................................................................................................................ 57
Incident & alert............................................................................................................................ 58
Trace............................................................................................................................................ 59
Charts.......................................................................................................................................... 60
Tables.......................................................................................................................................... 61
Navigation through objects..........................................................................................................61

Chapter 5: User Interface Reference...................................................63


Supported web browsers.............................................................................................................64
Navigation bar............................................................................................................................. 64
Dashboards..................................................................................................................................67
Alerts............................................................................................................................................ 72
Assets.......................................................................................................................................... 78
Network........................................................................................................................................ 86
Process...................................................................................................................................... 111
Queries...................................................................................................................................... 118

Reports...................................................................................................................................... 125
Time machine............................................................................................................................ 137
Vulnerabilities.............................................................................................................................142
Settings...................................................................................................................................... 145
System....................................................................................................................................... 185
Continuous trace and other trace actions................................................................................. 209

Chapter 6: Security features.............................................................. 213


Security Control Panel.............................................................................................................. 214
Security Configurations............................................................................................................. 214
Manage Network Learning........................................................................................................ 220
Alerts.......................................................................................................................................... 225
Custom checks: assertions....................................................................................................... 226
Custom checks: specific checks............................................................................................... 230
Alerts Dictionary........................................................................................................................ 233
Incidents Dictionary................................................................................................................... 242
Packet rules...............................................................................................................................245
Hybrid threat detection.............................................................................................................. 249

Chapter 7: Vulnerability assessment................................................ 251


Basics........................................................................................................................................ 252
Passive detection...................................................................................................................... 256
Configuring vulnerability detection............................................................................................ 258

Chapter 8: Smart Polling.................................................................... 259


Plans.......................................................................................................................................... 260
Strategies................................................................................................................................... 262
Configuring Smart Polling plans................................................................................................263
Extracted information.................................................................................................................266
Customizing the log level.......................................................................................................... 268
Smart Polling on CMC.............................................................................................................. 269
Smart Polling Progressive mode...............................................................................................269

Chapter 9: Threat Intelligence............................................................271


Configuring and updating.......................................................................................................... 272
Checking software version and license status..........................................................................276

Chapter 10: Asset Intelligence...........................................................279


Enriched asset information........................................................................................................281
Needed input data.....................................................................................................................281
Asset Intelligence license.......................................................................................................... 282

Chapter 11: Queries............................................................................ 283


Overview.................................................................................................................................... 284
Reference.................................................................................................................................. 286
Examples................................................................................................................................... 298

Chapter 12: Maintenance....................................................................303


System overview....................................................................................................................... 304
Data backup and restore...........................................................................................................305
Reboot or shutdown.................................................................................................................. 309
Software update and rollback................................................................................................... 311
Data factory reset......................................................................................................................314

Full factory reset with data sanitization.....................................................................................314


Host-based intrusion detection system..................................................................................... 314
Action on log disk full usage.....................................................................................................315
Support...................................................................................................................................... 315

Chapter 13: Central Management Console.......................................317


Overview.................................................................................................................................... 318
Deployment................................................................................................................................ 319
Settings...................................................................................................................................... 321
Connecting sensors................................................................................................................... 322
Troubleshooting......................................................................................................................... 322
Data synchronization policy.......................................................................................................322
Data synchronization tuning...................................................................................................... 325
CMC or Vantage connected sensor - Date and Time.............................................................. 326
Sensors list................................................................................................................................ 326
Sensors map............................................................................................................................. 329
Configuring High Availability (HA)............................................................................................. 331
Alerts.......................................................................................................................................... 334
Functionalities overview............................................................................................................ 335
Updating.....................................................................................................................................336
Single-Sign-On (SSO) through the CMC.................................................................................. 336

Chapter 14: Remote Collector............................................................339


Overview.................................................................................................................................... 340
Deployment................................................................................................................................ 341
Using a Guardian with connected Remote Collectors.............................................................. 353
Troubleshooting......................................................................................................................... 355
Updating.....................................................................................................................................356
Disabling a Remote Collector................................................................................................... 356
Install the Remote Collector Container on the Cisco Catalyst 9300......................................... 356

Chapter 15: Configuration.................................................................. 359


Features Control Panel............................................................................................................. 360
Editing sensor configuration...................................................................................................... 362
Basic configuration rules........................................................................................................... 363
Configuring the Garbage Collector............................................................................................372
Configuring alerts...................................................................................................................... 375
Configuring Incidents................................................................................................................. 392
Configuring nodes..................................................................................................................... 394
Configuring assets.....................................................................................................................399
Configuring links........................................................................................................................ 400
Configuring variables................................................................................................................. 404
Configuring protocols.................................................................................................................409
Configuring va........................................................................................................................... 416
Customizing node identifier generation.....................................................................................419
Configuring decryption...............................................................................................................420
Configuring trace....................................................................................................................... 422
Configuring continuous trace.....................................................................................................424
Configuring Time Machine........................................................................................................ 426
Configuring retention................................................................................................................. 428
Configuring Bandwidth Throttling.............................................................................................. 433
Configuring synchronization...................................................................................................... 434
Configuring slow updates.......................................................................................................... 438
Configuring session hijacking protection...................................................................................439
Configuring Passwords..............................................................................................................440
Configuring sandbox..................................................................................................................444
Additional Commands............................................................................................................... 450
Chapter 16: FIPS configuration......................................................... 451
Compliant FIPS cryptography features..................................................................................... 452
Important FIPS notes................................................................................................................ 452
Enabling FIPS mode................................................................................................................. 452
Disabling FIPS mode................................................................................................................ 453
Checking FIPS mode................................................................................................................ 454
Auditing FIPS operations...........................................................................................................454
FIPS enabled protocols............................................................................................................. 455

Chapter 17: Compatibility reference................................................. 457


SSH compatibility...................................................................................................................... 458
HTTPS compatibility.................................................................................................................. 459

Appendix A: Reference table of icons.............................................. 461


Icon reference table.................................................................................................................. 462

Glossary................................................................................................................... 467
Chapter 1: Preliminaries

This chapter describes the preliminary information required to properly and securely install your Nozomi Networks Guardian or Central Management Console (CMC).

Topics:
• Prepare a Safe and Secure Environment
Prepare a Safe and Secure Environment
Before beginning the installation process, confirm the prerequisites in this section in order to have a
safe and secure environment for your Guardian or CMC.
Installing a physical sensor
If you are installing a physical sensor, install it in a physically secure location that is accessible only
to authorized personnel. Observe the following precautions to prevent potential property damage,
personal injury, or death:
• Do not use damaged equipment, including exposed, frayed or damaged power cables.
• Do not operate the sensor with any covers removed.
• Choose a suitable location for the sensor. It should be installed in a well-ventilated area that is clean
and dust-free. Avoid areas that generate heat, electrical noise, and electromagnetic fields. Avoid
wet areas. Protect the sensor from liquid intrusion. Disconnect power to the sensor if it gets wet.
• Use a regulated Uninterruptible Power Supply (UPS). This keeps your system operating in the event
of a power failure, and protects the sensor from power surges and voltage spikes.
• Maintain a reliable ground at all times. Ground the rack itself and the sensor chassis to it via the
provided sensor grounding cable.
• Mount the sensor in a rack or place it in an area with sufficient airflow for safe operation.
• Avoid uneven mechanical loading when the sensor is mounted in a rack.
Installing a Virtual Machine
If you are installing a Virtual Machine (VM), contact your virtual infrastructure manager to ensure that
only authorized personnel have access to the system's console.
Configuration
The sensor's management port should be assigned an IP address in a dedicated management VLAN
to control access at different levels and to restrict access to a select set of hosts and people.
Before connecting a SPAN/mirror port to the sensor, ensure that the configuration on the switch/router/
firewall or other networking device is set to allow only output traffic. The sensor ports are configured to
accept read-only traffic and not to inject any packets. To prevent human error (e.g., a SPAN port cable plugged into the management port), check that no packets can be injected from those ports.
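For reference, a minimal sketch of such a SPAN session on a Cisco IOS switch might look like the following (hypothetical interface names and session number; the exact syntax depends on your switch platform):

monitor session 1 source interface GigabitEthernet1/0/1 - 10 both
monitor session 1 destination interface GigabitEthernet1/0/48

On most platforms the destination (monitor) port does not accept ingress traffic by default, which matches the read-only behavior expected by the sensor.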
Chapter 2: Installation

This chapter includes basic configuration information for the Nozomi Networks solution physical and virtual sensors.

Additional configuration information is provided in the Configuration chapter.

Maintenance tasks are described in the Maintenance chapter.

Topics:
• Installing a physical sensor
• Installing on a Virtual Machine (VM)
• Installing the container
• Set up phase 1 (basic configuration)
• Set up phase 2 (web interface configuration)
• Additional settings

Installing a physical sensor


This topic describes how to install a physical sensor for use with the Nozomi Networks solution.
If you purchased a physical sensor from Nozomi Networks, the appropriate release of the Nozomi
Networks Operating System (N2OS) is already installed on it.
Follow these steps to install and configure your physical sensor:
1. Attach the appropriate null modem serial cable to the sensor's serial console:
• For N1000, N750 and P500 sensors, attach an RJ45 console plug.
• For NSG-L and NSG-M Series, attach a USB serial plug.
• For the R50 and R150, attach a DB9 serial plug.
2. Open a terminal emulator, which can be:
• HyperTerminal or PuTTY on Windows
• cu or minicom on macOS and other *nix platforms
3. When connecting, set the speed to 9600 baud with no parity bit set. Alternatively, connect via the
network (ssh) using the default network settings for the physical sensor:
• IP address: 192.168.1.254
• Netmask: 255.255.255.0
• GW:192.168.1.1
The sensor will show a login prompt.
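For example, to connect over the network with the factory-default settings listed above, assuming your workstation is on the 192.168.1.0/24 subnet:

ssh admin@192.168.1.254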
Proceed to Set up phase 1 (basic configuration) on page 18.

Installing on a Virtual Machine (VM)


Before installing Guardian on a Virtual Machine (VM), consider the prerequisites and minimum
requirements.
Prerequisites:
The Nozomi Networks solution is available in Open Virtual Appliance (OVA) and Virtual Hard Disk
(VHD) formats. When deploying, use a hypervisor that supports .OVA or .VHD file formats, such as VMware, Hyper-V, KVM, and Xen.
The minimum requirements for a Guardian VM are:
• 4 vCPU running at 2 GHz
• 6 GB of RAM
• 10 GB of minimum disk space, running on SSD or hybrid storage (100+ GB of disk recommended)
• 2 or more NICs (the maximum number depends on the hypervisor), with one being used for
management and one (or more) being used for traffic monitoring
All components should be in good working condition. The overall hypervisor load must be under
control, with no regular ballooning on the Guardian so as to avoid unexpected behavior, such as
dropped packets or overall poor system performance.

Virtual Machine (VM) sizing


The following tables list the minimum size requirements for Guardian and CMC instances when they
run on virtual machines. These values assume given numbers of nodes and throughput levels and are derived from a simplified model. Consider these recommendations as a starting point for calculating the best size for your VM. Other elements affect the instantaneous and average use of resources, such as the hypervisor hardware, specific protocol distributions, loading Time Machine snapshots, running queries on big data sets, etc. Depending on all of these factors, different settings may be required and should be tested during deployment.
In the tables below, network elements are defined as the sum of nodes, links, and variables.

Guardian sizing recommendations


The following table suggests sizes for instances running the Guardian.

Guardian instances

Model   Max Nodes   Max Network   Max Smart IoT   Max Throughput   Min    Suggested   RAM     Min Disk
                    Elements      Devices         (Mbps)           vCPU   vCPU        (GB)¹   (GB)
V100    100         3,000         500             50               2      4           6       20
V100    250         10,000        1,250           250              4      6           6       20
V100    250         10,000        1,250           500              6      8           6       20
V100    250         10,000        1,250           1,000            8      10          6       20
V100    1,000       20,000        5,000           1,000            8      10          8       100
V250    2,500       50,000        12,500          500              6      8           10      250
V250    2,500       50,000        12,500          1,000            8      10          10      250
V250    5,000       100,000       25,000          500              6      8           12      250
V250    5,000       100,000       25,000          1,000            8      10          12      250
V750    10,000      200,000       100,000         500              6      8           16      250
V750    10,000      200,000       100,000         1,000            8      10          16      250
V1000   40,000      400,000       200,000         1,000            8      10          24      250

¹ The amount of RAM from the hypervisor may not correspond with the actual amount seen from within the Virtual Machine (VM). To confirm that your VM has sufficient RAM to support your configuration, enter the following command from your terminal:

sysctl hw.physmem

If the VM RAM is insufficient, acquire the required amount identified in the table.
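The value reported by the command above is expressed in bytes. As a hypothetical example, a VM provisioned with 6 GB of RAM might report slightly less than the full 6442450944 bytes, since part of the memory can be reserved by the hypervisor or firmware:

hw.physmem: 6416760832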

CMC sizing recommendations


The following table suggests sizes for instances running the CMC.

CMC instances

Number of   Max Network   Mode            vCPU   RAM (GB)²   Min Disk
Sensors¹    Elements
25          100,000       Multi-context   4      8           100+ GB
25          100,000       All-in-one      8      32          100+ GB
50          200,000       Multi-context   6      12          200+ GB
50          200,000       All-in-one      16     64          200+ GB
100         400,000       Multi-context   10     16          1+ TB
250         800,000       Multi-context   12     32          1+ TB
400         1,200,000     Multi-context   16     64          1+ TB

¹ Since this sizing is also dependent on the synchronized network elements, the supported number of sensors varies and should be agreed upon with Nozomi Networks for each installation.

² The amount of RAM from the hypervisor may not correspond with the actual amount seen from within the Virtual Machine (VM). To confirm that your VM has sufficient RAM to support your configuration, enter the following command from your terminal:

sysctl hw.physmem

If the VM RAM is insufficient, acquire the required amount identified in the table.

Installing the Virtual Machine (VM)


Installing the Virtual Machine (VM) in the hypervisor requires that you configure your VM to enable
external access. Instructions are provided in subsequent sections.
If you're unfamiliar with importing the OVA Virtual Machine in your hypervisor environment, refer to
your hypervisor's manual or contact your hypervisor's support service.
Deployment requirements
To operate your VM, consider the following deployment requirements:
• VMware Paravirtual SCSI (PVSCSI) storage type is not supported by N2OS
• IDE storage type is not supported by N2OS
• VM OS should be set to FreeBSD 12 or later versions (64-bit)
• VM compatibility version should be set to 15+, such as ESXi 7.0 or later (i.e., VM version 17), if possible; however, this depends on the vSphere version level
• Recommended: LSI Logic, LSI Logic SAS, and SATA are the supported disk storage types
Procedure
Install the VM in the hypervisor as follows:
1. Import the Virtual Machine in the hypervisor and configure the resources according to the minimum
requirements specified in the previous section.
2. After importing the VM, at the hypervisor settings for the VM disk, set the desired size. Some
hypervisors, such as VMware ESX >= 6.0, allow you to change the disk size at this stage. With
hypervisors that do not allow this operation, STOP HERE and continue with the instructions in
Adding a secondary disk to a Virtual Machine (VM) on page 15.
3. Boot the VM. The VM now boots into a valid N2OS environment.
4. Log in as admin.
You are instantly logged in; no password is set by default.
5. Go to privileged mode with the command:

enable-me

You will now be able to perform system changes.

Expanding a disk on a Virtual Machine (VM)


This topic describes how to expand an existing disk on a Virtual Machine (VM).
To expand an existing disk, edit your VM's settings from the hypervisor. Then, follow these steps:
1. Restart the virtual machine and log in to it.
2. Elevate your privileges using the enable-me command:

enable-me

3. Run the data_enlarge command:

data_enlarge

The virtual machine can now detect the newly allocated space.

Adding a secondary disk to a Virtual Machine (VM)


This topic describes how to add a larger virtual data disk to the N2OS VM, should the main disk not be
large enough when it is first imported.
Prerequisite: In order to proceed you should be familiar with managing virtual disks in your hypervisor
environment. If you are not, refer to your hypervisor manual or contact your hypervisor's support
service.
Note: When adding a disk, add a SCSI or SATA disk type, using LSI Logic SAS, LSI Logic, or SATA
storage type, as IDE disks are not recommended.
1. Add a disk to the VM and restart it.
2. In the VM console, use the following command to obtain the name of the disk devices:

sysctl kern.disks

3. Assuming ada1 is the device disk added as a secondary disk (note that ada0 is the OS device),
execute this command to move the data partition to it:

data_move ada1
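As a hypothetical example of this sequence (device names vary with the hypervisor and disk type), the disk listing and the move command might look like this:

sysctl kern.disks
kern.disks: ada1 ada0

data_move ada1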

Adding a monitoring interface to the Virtual Machine (VM)


This topic describes how to add a monitoring interface to a Virtual Machine (VM).
By default, the VM has one management network interface and one monitoring interface. Depending
on deployment needs, it may be useful to add more monitoring interfaces to the sensor.
To add one or more interfaces, follow these steps:
1. If the VM is powered on, shut it down.
2. Add one or more network interfaces from the hypervisor configuration.
3. Power on the VM.
The newly added interface(s) will be automatically recognized and used by the Guardian.

Installing the container


This topic describes the container on which you install the Nozomi Networks Operating System
(N2OS).
The container enables you to install the N2OS on embedded platforms, such as switches, routers and
firewalls that have a container engine onboard.
It's also a good platform for tightly integrated scenarios where several products interact on the same
hardware platform to provide a unified experience.
For all remaining use cases, a physical sensor or a virtual sensor is the recommended option.

Install on Docker
This topic describes how to install the Nozomi Networks Operating System (N2OS) on Docker.
After performing these steps, you will have an image and a running container based on it.
Prerequisites:
• Docker must be installed to perform the steps below. We have tested N2OS with Docker version
18.09 and 20.10.
• BuildKit is required to build the image, which in turn requires Docker 18.09 or higher. Please refer to the official Docker documentation to activate the Docker BuildKit feature: https://docs.docker.com/develop/develop-images/build_enhancements/
Follow these steps to install N2OS on Docker:
1. Build the image with the following command from the directory containing the artifacts:

docker build -t n2os .

This creates the image.


2. Run the image using a command, such as this one:

docker run --hostname=nozomi-sga --name=nozomi-sga \
  --volume=<path_to_data_folder>:/data --network=host \
  --mount type=tmpfs,destination=/var/sandbox,tmpfs-size=400M \
  --mount type=tmpfs,destination=/var/tmp_sandbox,tmpfs-size=100M \
  --mount type=tmpfs,destination=/var/pipes,tmpfs-size=10M \
  --mount type=tmpfs,destination=/var/traces,tmpfs-size=400M \
  --mount type=tmpfs,destination=/var/checksums,tmpfs-size=25M -d n2os

where <path_to_data_folder> is the path to a volume where the sensor's data will be stored,
and saved for future runs.
The image has been built to automatically monitor all network interfaces shown to the container and
the --network=host setting allows access to all network interfaces of the host computer.
3. The container can be stopped at any time with the following command:

docker stop nozomi-sga

and executed with:

docker start nozomi-sga

Additional details
This topic describes additional container details.
The container has the same features as those provided by the physical and virtual machines. A key
difference is that container provisioning "system" settings must be performed using Docker commands,
and thus they are not editable from inside the container itself. A notable example is the hostname: it
must be set when launching a new instance of the image.
You must use volumes for the /data partition to assure that the data will survive image updates.
Updating a container
To update a container:
1. Build a new version of the n2os image.
2. Stop and destroy the current running containers.
3. Start a new container with the updated image.
Data is automatically migrated to the new version.
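As a minimal sketch, assuming the container was started with the docker run command shown earlier and the new image artifacts are in the current directory, the update sequence could look like this:

docker stop nozomi-sga
docker rm nozomi-sga
docker build -t n2os .
docker run --hostname=nozomi-sga --name=nozomi-sga \
  --volume=<path_to_data_folder>:/data --network=host \
  --mount type=tmpfs,destination=/var/sandbox,tmpfs-size=400M \
  --mount type=tmpfs,destination=/var/tmp_sandbox,tmpfs-size=100M \
  --mount type=tmpfs,destination=/var/pipes,tmpfs-size=10M \
  --mount type=tmpfs,destination=/var/traces,tmpfs-size=400M \
  --mount type=tmpfs,destination=/var/checksums,tmpfs-size=25M -d n2os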

The network=host Docker parameter allows the container to monitor the physical NICs on the host
machine. However, by default it also allows the container to monitor all of the available interfaces. To
restrict monitoring to a subset, create a cfg/n2osids_if file in the /data volume with the list of interfaces to monitor, separated by commas (e.g., eth1,eth2).
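For example, assuming the data volume is reachable at /data (either through the host path mapped into the container or from a shell inside the container), the following commands restrict monitoring to eth1 and eth2; the cfg directory may already exist:

mkdir -p /data/cfg
echo "eth1,eth2" > /data/cfg/n2osids_if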
Customizing the container build
You can customize the container build using the following variables, which may be passed to the Docker build command using the --build-arg command-line switch, for example: docker build --build-arg APT_PROXY=x.x.x.x:yy -t n2os .

Parameter         Default value   Description
APT_PROXY         none            Proxy to be used to download container packages
N2OS_HTTP_PORT    80              Specify custom http web port
N2OS_HTTPS_PORT   443             Specify custom https web port
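For example, a hypothetical build that serves the web interface on non-default ports could be run as:

docker build --build-arg N2OS_HTTP_PORT=8080 --build-arg N2OS_HTTPS_PORT=8443 -t n2os .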

Set up phase 1 (basic configuration)


This topic describes how to set up the basic configuration to begin using the Nozomi Networks solution.
At the end of this procedure, your system management interface is set up, and reachable as a text
console via SSH, and as a web console via HTTPS.
Prerequisite: The Nozomi Networks solution should be installed and be ready for initial configuration.
Note: Depending on the sensor model, use either a serial console (for physical sensors) or the text
hypervisor console (for virtual sensors).
Users
The Guardian shell has two users: admin and root. Log in to both using admin. If elevating from
admin to root, use the admin password. The root account does not have a separate password.
Follow these steps to set up phase 1 of the Nozomi Networks solution:
1. At the console, a prompt displays the text N2OS - login:. Type admin and then press Enter.
• In a virtual sensor, you are instantly logged in, as no password is set by default.
• In physical sensors, nozominetworks is the default password.
• The admin password can be changed at any time, using the change_password command.
2. Enter the enable-me command to elevate the privileges.
3. Enter the setup command to launch the initial configuration wizard.

4. For virtual Guardians, at the prompt, choose the admin password first. Select a strong password
as this will allow the admin user to access the sensor through SSH.

5. To set up the management interface IP address, select the Network Interfaces entry in the menu
dialog.

6. Now set up the management interface IP address. Depending on the sensor model, the
management interface can be named em0 or mgmt. Select the management interface, then press
Enter.

7. Edit the values for IP address (ipaddr) and Netmask (netmask). Enable DHCP to configure all
automatically. Then move up to X. Save/Exit and press Enter.

8. Now select Default Router/Gateway from the menu, and enter the IP address of the default
gateway. Press Tab and then Enter to save and exit.

9. Now select DNS nameservers from the menu, and configure the IP addresses of the DNS servers.

10.Move up to X Exit and press Enter.


This completes the basic networking set up. The remaining configuration steps are performed by
opening the web console running on the management interface.

Set up phase 2 (web interface configuration)


This topic describes the second phase of the Nozomi Networks Operating System (N2OS) set up,
which is performed from the web console.
Prerequisites: For this setup, you must use one of the supported web browsers.
Note: The product integrates self-signed SSL certificates to get started, so add an exception in your
browser. Later in this chapter, we describe the steps to import valid certificates.
Follow these steps to set up the web interface of the N2OS:
1. Access the web console by typing https://<sensor_ip>, where <sensor_ip> is the management interface IP address.
2. Add an exception to your browser.
The login screen displays:

3. At the login screen, log in using the default username and password: admin / nozominetworks.
Note: At first login, you will be prompted to change these credentials for security reasons.
4. Go to Administration > General and change the host name.

5. Go to Administration > Date and time, to change the time zone, set the date and (optional)
enable the NTP client.

Result: The sensor is almost ready to be put into production. The next step is to install a valid license.

License
This topic describes how to set a new license.
1. From the Web UI, go to Administration > Updates & Licenses.
2. Obtain a valid license key by copying the machine ID, and using it in conjunction with the Activation
Code from Nozomi Networks.
3. Paste the valid license key inside the text box.
Note: The license types that you can activate are: Base (a Base license is required and includes
passive monitoring, with Smart Polling as optional), Threat Intelligence, and Asset Intelligence.
Result: After the license is confirmed, the sensor begins to monitor the configured network interfaces.

Figure 1: License screen

The Guardian license statuses and their related behaviors are described below. Functionality depends
on the scope of the specific license.

Status       Description
UNLICENSED   Functionality is disabled.
OK           Functionality is enabled. Be aware of the expiration date to allow time for renewals. If a Base license is issued with limits, and the limits are exceeded, functionality is only enabled for the covered elements within the limits. New elements are not analyzed.
EXPIRING     Following the official expiration date, Nozomi Networks offers a 3-month grace period. The license still functions as it would in the OK status to allow time for emergency license renewal.
EXPIRED      Functionality is disabled. The contents that were analyzed or imported before the expiration date remain; however, no new analyses are performed and no new signatures are imported.

Install SSL certificates


This topic describes how to import an SSL certificate into the sensor. The SSL certificate is required to
securely encrypt traffic between client computers and the Nozomi Networks Operating System (N2OS)
sensor over HTTPS.
The N2OS webserver uses the HTTPS protocol to expose the management interface. During the initial
boot, the sensor generates a self-signed certificate valid for one (1) year. A self-signed certificate
should not be used in production. We suggest that you follow this procedure to install a certificate
obtained from a well-known, trusted Certificate (or Certification) Authority (CA).
To add a private CA to the system's trust store, refer to Install CA certificates on page 23.
Prerequisites
• Be sure you have both the certificate and the key file in PEM format.
• Check to see if your certificate is password-protected.
• To avoid browser errors make sure that the certificate chain is complete. You can combine
certificates using a command, such as cat https_nozomi.crt bundle.crt >
https_nozomi.chained.crt.
Follow these steps to install SSL certificates:
1. Upload the certificate and key files to the sensor using an SSH client in the /data/tmp folder. For
example, given that you have https_nozomi.crt and https_nozomi.key files in the same
folder, open a terminal, cd into it, then use this command to upload the files to the sensor:

scp https_nozomi.* admin@<sensor_ip>:/data/tmp

2. Log into the console, either directly or through SSH, then use this command to elevate the
privileges:

enable-me

3. If your certificate key is protected with a password, use the following command to remove the
protection to avoid being prompted for the password each time the server restarts:

openssl rsa -in <https_nozomi.key> -out https_nozomi_nopassword.key

4. Execute the n2os-addtlscert command below to enable the certificate. Note: If you removed
password protection from the certificate, change the second parameter of the command to
https_nozomi_nopassword.key:

n2os-addtlscert https_nozomi.crt https_nozomi.key

5. Restart the web server to apply the change:

service nginx restart

6. Verify that the certificate is correctly loaded by pointing your browser to https://<sensor_ip>/
and checking that the certificate is recognized as valid.
The imported SSL certificates are working correctly and will be applied on the next reboot.
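As an optional additional check from another machine, a standard OpenSSL client command can be used to inspect the certificate chain actually served by the sensor:

openssl s_client -connect <sensor_ip>:443 -showcerts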

Additional settings
This topic describes additional, non-mandatory system settings.

Network Flows
This topic describes the basic network flows to operate the solution components.

Required ports and protocols

Table 1: Operator's access to Guardian, CMC, and RC

Port      Protocol   Source     Destination       Purpose
tcp/443   https      Operator   Guardian/CMC      Operator's https access to the Guardian/CMC Web UI
tcp/22    ssh        Operator   Guardian/CMC/RC   Operator's ssh access to the Guardian/CMC/Remote Collector shell

Table 2: Communications between RC, Guardian, and CMC

Port       Protocol               Source         Destination   Purpose
tcp/443    https                  Guardian/CMC   CMC           Sync from Guardian to CMC or between CMCs of different tiers in the hierarchy
tcp/443    https                  RC             Guardian      Sync from Remote Collector to Guardian
tcp/6000   proprietary (on TLS)   RC             Guardian      Transmission of monitored traffic from Remote Collector to Guardian

Table 3: Operator's access to Vantage

Port      Protocol   Source     Destination   Purpose
tcp/443   https      Operator   Vantage       Operator's https access to the Vantage Web UI

Table 4: Communications between Guardian, CMC, and Vantage

Port      Protocol   Source                 Destination   Purpose
tcp/443   https      Guardian/CMC/Vantage   Vantage       Sync from Guardian or CMC to Vantage or between Vantages of different tiers in the hierarchy

Install CA certificates
This topic describes how to add a CA certificate to a sensor. This procedure is required when the
issuing Certificate (or Certification) Authority (CA) for the HTTPS certificate is not immediately trusted.
Prerequisites: Before starting, make these pre-checks:
• If your intermediate CA and Root CA certificates are in separate files, combine them. For example:

cat <intermediate_root_cert> <ca_root_cert> > cert.crt


• The certificate must be in Privacy Enhanced Mail (PEM) format. Neither Distinguished Encoding
Rules (DER) nor PKCS#12 formats are supported.
Follow these steps to install CA certificates:

1. Upload the CA certificate file to the sensor with an SSH client in the /data/tmp folder. For
example, if you have the cert.crt file, open a terminal, cd into the directory, and then use the
following command to upload the file to the sensor:

scp cert.crt admin@<sensor_ip>:/data/tmp

2. Log into the console, either directly or through SSH, then elevate the privileges:

enable-me

3. Use the command n2os-addcacert to add the CA certificate to the trust store:

n2os-addcacert cert.crt

The imported CA certificate is now trusted by the sensor and may be used to secure HTTPS
communication from the connected sensor to a CMC, as described in Connecting sensors on page
322.
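Optionally, before uploading, you can sanity-check the combined file locally with standard OpenSSL commands (the file names follow the earlier examples and are assumptions):

openssl x509 -in cert.crt -noout -subject -issuer
openssl verify -CAfile cert.crt https_nozomi.crt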

Enabling SNMP
This topic describes how to enable the SNMP daemon to monitor the health of the Nozomi Networks
Operating System (N2OS) sensor.
Note: The current SNMP daemon supports versions v1, v2c and v3. This feature is not available in the
container version.
Follow these steps to enable the SNMP daemon:
1. To enable the SNMP daemon, log into the text-console, either directly or through SSH.
2. Elevate the privileges with the command: enable-me.
3. Use vi or nano to edit /etc/snmpd.conf.
4. Edit the location, contact and community variables. For community, choose a strong
password.
5. Provide values for other variables as needed. For example, for the SNMP v3 User-Based Security Model (USM), uncomment the following sections to create a user bsnmp and set authentication and privacy options to SHA256 message digests and AES encryption for this user:

engine := 0x80:0x10:0x08:0x10:0x80:0x25
snmpEngineID = $(engine)

user1 := "bsnmp"
user1passwd :=
0x22:0x98:0x1a:0x6e:0x39:0x93:0x16: ... :0x05:0x16:0x33:0x38:0x60

begemotSnmpdModulePath."usm" = "/usr/lib/snmp_usm.so"

%usm

usmUserStatus.$(engine).$(user1) = 5
usmUserAuthProtocol.$(engine).$(user1) = $(HMACSHAAuthProtocol)
usmUserAuthKeyChange.$(engine).$(user1) = $(user1passwd)
usmUserPrivProtocol.$(engine).$(user1) = $(AesCfb128Protocol)
usmUserPrivKeyChange.$(engine).$(user1) = $(user1passwd)
usmUserStatus.$(engine).$(user1) = 1

6. Now edit the /etc/rc.conf file to add the following line:

bsnmpd_enable="YES"

7. Start the service with the following command:

service bsnmpd start

8. If you enabled the User-Based Security Model (USM) in Step 5, replace the default value for the user1passwd variable. Launch the bsnmpget command and convert the SHA or MD5 output to hex format:

sh -c "SNMPUSER=bsnmp SNMPPASSWD=<newpassword> SNMPAUTH=<sha|md5> SNMPPRIV=<aes|des> bsnmpget -v 3 -D -K -o verbose"

echo <SHA output> | sed 's/.\{2\}/:0x&/g;s/^.\{6\}//g'

Restart the service with the following command:

service bsnmpd restart

9. Save all settings by issuing the following command:

n2os-save

10.To check the functionality, run a test command from an external system (the <sensor_ip> has to be reachable). For example, in the USM case with the default values provided by the /etc/snmpd.conf file, use a command similar to:

snmpstatus -v3 -u bsnmp -a SHA -A <password> -x AES -X <password> -l authPriv <sensor_ip>

Configuring the internal firewall


This topic describes how to restrict access to the management interface, SSH terminal, SNMP service,
and ICMP protocol of the Full Stack edition (i.e., physical and virtual sensors, not the container).
• To limit access to these services, use the CLI to add the required configurations.
• The default settings permit connections from any IP address. The system ignores lines with invalid
IP addresses.
Note: Use caution when changing internal firewall rules because you can lose access to the device
administration interface. In the event of an error, console access is required to fix the rules.
• These configuration settings allow you to fine-tune the firewall rules.

Parameter               Description
system firewall icmp    Configure the ACL for the ICMP protocol
system firewall https   Configure the ACL for the HTTP and HTTPS services
system firewall ssh     Configure the ACL for the SSH service
system firewall snmp    Configure the ACL for the SNMP service

Follow these steps to configure the internal firewall:


1. Log into the text-console, either directly or through SSH.
2. Add the required configuration lines in the CLI. For example, the following line allows connections
only from networks 192.168.55.0/24 or from the host 10.10.10.10.

conf.user configure system firewall https 192.168.55.0/24, 10.10.10.10

3. Write configuration changes to disk and exit the text editor.


4. Apply new settings using the following command:

n2os-firewall-update
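The other services listed in the table accept the same address-list syntax. For example, to also restrict SSH access to the same hypothetical management subnet and apply the change:

conf.user configure system firewall ssh 192.168.55.0/24
n2os-firewall-update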

Management interface packet rate protection


This topic describes how to disable internal packet rate protection, which is enabled by default.
Background
• The management interface of the physical and virtual sensors have internal packet rate protection
enabled by default.
• Malicious hosts are banned for 5 minutes if they try to send more than 1024 packets within 5
seconds.
• The n2os-firewall-show-block command shows blocked IP addresses, while the n2os-firewall-unblock command can unblock a single IP address.
Follow these steps to disable internal packet rate protection:
Log into the text-console, either directly or through SSH, and issue the following commands:
1. In the CLI, add the following configuration:

conf.user configure system firewall disable_packet_rate_protection true

2. Write the configuration changes to disk and exit the text editor.
3. Apply the new settings using the following command:

n2os-firewall-update

IPv6 set up
This topic describes how to configure IPv6 to access the full stack edition (i.e., physical and virtual
sensors, not the container).
Follow these steps to configure IPv6 to access the full stack edition:
1. Issue the following command to enable IPv6 on the management interface:

n2os-setupipv6
2. Reboot the sensor.
3. After the reboot, the address can be retrieved using a system command such as ifconfig.
After completing this procedure, you will be able to access the sensor UI by enclosing the address in square brackets, as shown in the following screenshot:

Figure 2: Access a Guardian via IPv6 address
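For example, with a hypothetical documentation-prefix address of 2001:db8::10, the sensor UI would be reached at:

https://[2001:db8::10]/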

Similarly, sensors may be configured to sync towards a Central Management Console (CMC) or
another sensor in High Availability (HA) specifying the ipv6 address in square brackets.

Figure 3: HA connection for two CMC

Enabling 802.1x on the management interface


This topic describes how to enable 802.1x support for the management interface. Configuration of the
RADIUS server and the creation of possible certificates are not discussed.
Prerequisites: Before beginning, verify the following:
• Confirm that you have serial access to the sensor; part of this configuration is performed via serial
console.
• If 802.1x is already configured on the switch side and the ports are already closed, be sure you have a network patch cable to reach the sensor via a direct network connection.
• If the authentication process is via TLS certificates, confirm that you have ca.pem, client.pem,
and client.key files, as well as the client.key unlock password.
• If the authentication process is via PEAP, confirm that you have the identity and password.
Follow these steps to enable 802.1x support for the management interface:
1. Log in to the console via the serial console, and enter privileged mode with the command:

enable-me

2. Create the directory /etc/wpa_supplicant_certs and change its permissions to 755.


Note: You must use this exact directory name. No other name is allowed.

mkdir /etc/wpa_supplicant_certs
chmod 755 /etc/wpa_supplicant_certs

3. Create the file /etc/wpa_supplicant.conf and fill it with the required configuration values.
Note: No other file name is allowed; if necessary, rename your file to match the expected name.

vi /etc/wpa_supplicant.conf

Below, we provide examples of wpa_supplicant.conf.


• Configuration for PEAP authentication:

ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
eapol_version=1
ap_scan=0
network={
ssid="NOZOMI8021X"
key_mgmt=IEEE8021X
eap=PEAP
identity="identity_for_this_guardian_here"
password="somefancypassword_here"
}
• Configuration for TLS authentication:

ctrl_interface=/var/run/wpa_supplicant

ctrl_interface_group=0
eapol_version=1
ap_scan=0
network={
ssid="NOZOMI8021X"
key_mgmt=IEEE8021X
eap=TLS
identity="client"
ca_cert="/etc/wpa_supplicant_certs/ca.pem"
client_cert="/etc/wpa_supplicant_certs/client.pem"
private_key="/etc/wpa_supplicant_certs/client.key"
private_key_passwd="somefancypassword_private_key_here"
}

4. For TLS authentication, copy the required files to the expected location.
To copy the files, connect to the sensor via Ethernet. If the sensor is not reachable via SSH over the current network, we suggest that you configure the mgmt interface with a temporary IP address and connect the sensor with a direct Ethernet patch cable. Refer to the relevant section of this guide to configure the IP address: Set up phase 1 (basic configuration).
5. For TLS authentication, upload the certificate files to the sensor with an SSH client in the /etc/
wpa_supplicant_certs/ folder.
For example, if you have the ca.pem, client.pem, and client.key files, open a terminal, cd
into the directory containing files, and use the following command to upload them to the sensor:
Note: Skip this step if you are using PEAP authentication.

scp ca.pem client.pem client.key admin@<sensor_ip>:/tmp/

6. In the sensor serial console, with elevated privileges, move the files to the expected location:

mv /tmp/ca.pem /tmp/client.pem /tmp/client.key /etc/wpa_supplicant_certs

7. In the sensor serial console, with elevated privileges, change the certificate permission to 440 as
shown below:
Note: Skip this step if you are using PEAP authentication.

cd /etc/wpa_supplicant_certs
chown root:wheel ca.pem client.pem client.key
chmod 440 ca.pem client.pem client.key

8. In the sensor serial console, with elevated privileges, change the /etc/rc.conf file by adding the
following entries:

wpa_supplicant_flags="-s -Dwired"
wpa_supplicant_program="/usr/local/sbin/wpa_supplicant"

9. Change the /etc/rc.conf file's ifconfig_mgmt entry by adding the prefix WPA.
If the sensor was configured with a direct Ethernet patch cable, you can now configure the
production-ready IP address and connect the sensor to the switch. For example, if the sensor IP
address is 192.168.10.10, the entry will be similar to the following:

ifconfig_mgmt="WPA inet 192.168.10.10 netmask 255.255.255.0"
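
Taken together, the entries added in steps 8 and 9 result in an /etc/rc.conf fragment similar to the
following (the IP address and netmask shown here are examples and should match your environment):

wpa_supplicant_flags="-s -Dwired"
wpa_supplicant_program="/usr/local/sbin/wpa_supplicant"
ifconfig_mgmt="WPA inet 192.168.10.10 netmask 255.255.255.0"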

10. Use the command n2os-save to save the changes:

n2os-save

11. The above configuration process requires that you reboot the system. To reboot the sensor:

shutdown -r now

12. After the reboot, log in to the sensor. Then, using ps aux | grep wpa, you should receive output
similar to the following, which means the WPA Supplicant is enabled for the management network
interface:

root 91591 0.0 0.0 26744 6960 - Ss 09:59 0:00.01 /usr/local/sbin/wpa_supplicant -s -Dwired -B -i mgmt -c /etc/wpa_supplicant.conf -D wired -P /var/run/wpa_supplicant/mgmt.pid

13. You can check the status of the wpa_supplicant using the wpa_cli -i mgmt status command.
For example:

root@guardian:~# wpa_cli -i mgmt status


bssid=01:01:c1:02:02:02
freq=0
ssid=NOZOMI8021X
id=0
mode=station
pairwise_cipher=NONE
group_cipher=NONE
key_mgmt=IEEE 802.1X (no WPA)
wpa_state=COMPLETED
ip_address=192.168.1.2
address=FF:FF:FF:FF:FF:FF
Supplicant PAE state=AUTHENTICATED
suppPortStatus=Authorized
EAP state=SUCCESS
selectedMethod=13 (EAP-TLS)
eap_tls_version=TLSv1.2
EAP TLS cipher=ECDHE-RSA-AES256-GCM-SHA384
tls_session_reused=0
eap_session_id=0dd52aaeaa2aa3aa4deaac6aaafc65edbfa58cdffecff6ff4[...]
uuid=8a31bd80-1111-22aa-ffff-abafa0a9afa6

Disabling USB port


This topic discusses disabling the USB port on the Nozomi Networks solution, as the Nozomi Networks
Operating System (N2OS) does not support USB ports.
We do not recommend that you connect external disks or other devices to the sensor's USB port, as
we do not guarantee that they will work. USB keyboards will work in a hardened configuration. The
Ctrl+Alt+Del shortcut won't work.
You may disable the USB port by physically blocking it using external tools that are readily available.
Chapter 3
Users

Topics:
• Introduction
• Managing users
• Managing user groups
• Password management and policies
• Active Directory users
• LDAP users
• SAML integration
• OpenAPI keys

User authentication and authorization are described in this chapter:
• User types (local, Active Directory, LDAP, SAML)
• Setup and definition of local users
• Password setup
• Setup of groups and definition of allowed nodes and sections
• Configuration of Active Directory and the importing of users and groups
• Configuration of LDAP and the importing of users and groups
• Configuration of SAML and the importing of users
• Configuration of OpenAPI keys

Introduction
This topic describes the types of users in the Nozomi Networks solution.
In the Nozomi Networks solution, authentication and authorization policies are defined by user type.
Four user types are available:
• Local users: Authentication is enforced with a password, and the user is created from the Web UI.
• Active Directory users: Authentication is managed by the Active Directory. User properties
and groups are imported from the Active Directory. In order to work properly, Active Directory is
configured in the Nozomi Networks Web UI (see Configuring Active Directory integration using the
Web UI on page 44).
• LDAP users: Authentication is managed by LDAP. User properties and groups are imported from
LDAP. To work properly, LDAP is configured in the Nozomi Networks Web UI (see Configuring
LDAP integration using the Web UI on page 46).
• SAML users: Password is not required since Single Sign On (SSO) authentication is enforced
through an authentication server that uses SAML. Users can be inserted via the Web UI or imported
from a CSV file. To work properly, a SAML application should be properly configured in the Nozomi
Networks Web UI (see SAML integration on page 48).
Authorization policies are defined by user groups.
Each group includes:
• List of allowed features
• Filter to enable visualization of just specific node subsets
When a user belongs to a group, the user can only perform the operations allowed by the group and
can only see the nodes defined by the group node filter.
A user can belong to several groups and will inherit the authorizations of those groups. When a user
belongs to multiple groups, any node that satisfies the filter of any group is visible and its features are
available.
In CMC Multicontext, if a user belongs to multiple groups, where at least one of them is non-admin, and
the non-admin groups have restrictions on sensors, nodes or zones, the most restrictive filter is applied
to the user.

Figure 4: Edit group with filters

Two group types have predefined authorization policies:


• Administrators: All features are available.
• Authentication Only: Only the authentication feature is available.

When a group is neither Administrators nor Authentication Only, the allowed features (sections) can
be enabled/disabled individually.
Note: After a reboot, the local default admin of the Web GUI will automatically be recreated if it has
been deleted, or if it doesn't exist. This is to make sure that a user cannot mistakenly delete it.

Managing users
This topic describes how to manage users.
This topic includes:
• Displaying a list of users
• Adding a user
• Importing SAML users
• Editing a local user
• Adding SSH keys to admin users

Displaying a list of users


1. Go to Administration > Settings > Users to display a list of users. From this screen, you can
create and delete users, and change the password and/or username of existing users.

Adding a user
1. Go to Administration > Settings > Users, then click the +Add button.

The New User screen displays:

2. From the New User screen, select a user source (or type), which is typically Local or SAML. You
can also select a user from the Active Directory or from LDAP, but you must first ensure that the
user exists in the Active Directory or in LDAP. Therefore, it is preferable to import these users
directly from the Active Directory or from LDAP.
Once the source is selected, the data to be inserted depends on the user type:

• Local user: Specify username, password, and user group(s). (Note: Groups configuration is
covered in the next section.)
• SAML user: Specify username and group(s) only, since a password is not required for SAML users.
3. If necessary, select one or more of the check boxes:
• Must Update Password
• Is Suspended
• Is Expired
Note: When Must Update Password is checked, the user is prompted to update their password
the next time they log in.
Note: When Is Suspended is checked, the user will not be able to log in.
Note: When Is Expired is checked, the user is forced to change their password the first time that
they log in after the expiration date.
4. Click New User to add the user.

Importing SAML users


1. Go to Administration > Settings > Users to display a list of users.

2. Click the Import button. An upload dialog displays.


3. Drop or select a CSV file with the list of the SAML users to be added. The template for the CSV file
is three fields, separated by commas, per row.
• The first field defines the user name.
• The second field is the Authentication group that is associated with the user (typically an
Authentication-only group)
• The last field includes one or more groups (separated by semicolons) that define additional
groups associated with the user (typically used to define allowed features).
An example of a CSV file:
user_1,authentication_group_1,group_1;group_2
user_2,authentication_group_2,group_3
4. You receive a message with the number of users correctly imported, once the import is complete.

Editing a local user


1. Go to Administration > Settings > Users to display a list of users.
2. Select a user to edit, then click Edit. An Edit user popup displays:

3. Update the username and password, as needed.


4. Close the window.

Adding SSH keys to admin users


Using SSH public keys, you can log into the SSH console without typing a password. You must add
SSH public keys to the user account to configure SSH password-less authentication.
1. Go to Administration > Settings > Users to display a list of users.
2. Locate the key icon in the user list, which allows you to add SSH keys:

3. Locate the required fields for SSH key-based authentication:

4. Paste the public key in the first field to allow authentication using SSH keys. Every admin user has a
key. If you need more than one key, paste one per line. Non-admin users must use a password for
SSH authentication. When an admin user leaves, the associated SSH keys are removed.
Note: The pasted key should not contain any new lines. The system will not use invalid keys.
Enabling the second option allows you to log in using the root account.
SSH public keys are propagated to all directly connected sensors. The default key propagation
interval is 30 minutes. To change this, use the conf.user configure ssh_key_update
interval <seconds> command in the CLI.
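For example, to set the propagation interval to 10 minutes, a command like the following could be
used in the CLI (the 600-second value is only an example):

conf.user configure ssh_key_update interval 600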

Managing user groups


This topic describes how to manage user groups, and how to change the sections of the platform that
users access.
The following topics are included:
• Displaying a list of groups
• Adding a local group
• Editing a user group

Displaying a list of groups


1. From the gear icon in the upper right corner, select Settings > Users to display a list of users, then
select the Groups tab.

Adding a local group


1. From the gear icon in the upper right corner, select Settings > Users to display a list of users, then
select the Groups tab.

2. Click the Add button. The following screen displays:



3. Select the General tab to define the following data:


a. Create a group name and determine if the group should propagate to connected sensors.
b. (Optional) Enter a UUID in the External UUID field. This is useful should the user group be
created through SAML integration and the external IdP uses an ID rather than a human-readable
group name.
c. If the group belongs to a predefined type, check either the Admin or Authentication only box,
as appropriate.
Note: If you do not select a predefined group type, manually select one or more section(s) that
the group can view and interact with, as defined in the next step.
4. If you have not selected a predefined group type, go to the Filters tab (visible on the right of
previous screen) to define the following:
a. Define Zone filters by selecting one or more zone(s) from the list, to limit zone visibility to the
users in the group.
b. Define Node filters by entering a list of subnet addresses in CIDR format, separated by
commas (see the example after this list); this limits the nodes that users in the group can view in
the nodes, links, variables list, graph, queries and assertions.
c. Define Allowed sensors by selecting sensors that the users can access and see data coming
from; this feature is available only for CMCs and only if the "is admin" group permission is
disabled.
Note: The CMC must be in multi-context mode to view Allowed sensors. The Allowed
sensors group filter is not available in all-in-one CMC.
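For example, the Node filters field described in step 4 could contain a value such as the following
to limit visibility to two subnets (the addresses shown are examples only):

10.0.0.0/8,192.168.2.0/24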

Editing a user group


1. From the gear icon in the upper right corner, select Settings > Users to display a list of users, then
select the Groups tab.
2. Select the user group to edit, then click Edit.
3. Define the data as described above.

Password management and policies


Passwords must meet complexity requirements, and password policies can be changed and managed
to accommodate specific customer requirements.

Managing passwords
Passwords for local console and SSH accounts must meet specific complexity requirements.
Valid passwords must be at least 12 characters long, and contain characters from at least three (3) of
the following four (4) classes:
• Upper case letters
• Lower case letters
• Digits
• Other characters
Characters that form a common pattern are discarded.
Upper-case letters used as the first character are not counted towards the upper-case letter class,
and digits used as the last character are not counted towards the digits class.
Within each class and between classes, the characters should be sufficiently different; the difference
is evaluated on the binary representation of the characters, not simply on whether two characters
(for example, a and b) are distinct.
Assigning a password to a new user
To assign a password to a new user:
1. Go to Administration > Settings > Users > +Add. The New user pop-up window displays.

2. Complete the New user information:


a. Select a source from the Choose a source dropdown menu in the Source field. Source
selections are: Local, Active Directory, LDAP, SAML.
b. Enter a username for the user in the Username field.

c. Enter a password that conforms to the complexity requirements in the Password field, or use a
securely generated password.
Note: If you use a securely generated password, the system automatically fills the Password
confirmation field.
d. Enter the same password to confirm the password in the Password confirmation field.
e. Select a group(s) for the new user from the dropdown menu in the Group field.
f. In the Must update password field, leave the box checked.
g. Click the New User button. The Update password pop-up window displays.

3. Complete the information in the Update password pop-up window to update the new user
password:
a. In the Password field, enter a password that conforms to the complexity requirements described
above
b. In the Password confirmation field, enter the same password.
c. Click the Update new password button
See Configuring Passwords for additional information.
Login messages
After multiple login attempts where the username or password is incorrect, you may receive the
message Invalid username or password rather than Account locked . This behavior is
intentional to improve the security of accounts and prevent account names from being exposed.
The system automatically unlocks the account after five minutes from the latest login attempt. Do not
reset the account password, as it will not unlock the account.
Account locked messages are not automatically enabled by default. To enable them, use the
conf.user configure authentication paranoid_mode false CLI command, followed by a
web server restart via service webserver stop.
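For example, the following commands (the first in the CLI, the second in a shell console) would
enable Account locked messages:

conf.user configure authentication paranoid_mode false
service webserver stop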
Guardian attaches additional information to certain alerts such as multiple unsuccessful logins and
multiple access denied events. This information can be downloaded as a separate file from the Alert
details page.

Configuring password policies


Default policies can be changed via the Command Line Interface (CLI) on page 145 to best suit
organizational requirements.
Note: Refer to Configuring Passwords for additional information on password policies.
This table describes the password policy types:

Table 5: Policy types

Password complexity policy: Passwords for Web UI local accounts must meet complexity
requirements. By default, passwords must have at least 12 characters, and include a combination of
upper-case and lower-case letters, as well as numbers.
Password history policy: The password history policy determines the number of unique new
passwords associated with a user account before old passwords can be reused.
Password lockout policy: The password lockout policy disables a user login for a fixed time
after x unsuccessful attempts to prevent brute-force attacks.
Password expiration policy: Local passwords and local user accounts can be forced to expire
after a period of time. Admin accounts can be protected from expiring. See Password parameters
for settings.

To change password policies:


1. Check the current password policies, using the info tooltip when adding a new user or editing an
existing one.

Note: This tooltip shows the default password requirements, which are: Uppercase: 1, Lowercase:
1, Symbols: 0, Digits: 1, Min length: 12, History: 3
2. From the Web UI, go to Administration > Settings > CLI to change any of the parameters listed in
the Password parameters table.

Table 6: Password parameters

Each entry below lists the parameter, its default value, and its description:

password_policy maximum_attempts (default: 3): Number of unsuccessful login attempts before user lock
password_policy lock_time (default: 5): Number of minutes that a user account is locked out after unsuccessful login attempts
password_policy history (default: 3): Number of unique passwords to be used
password_policy digit (default: 1): Number of digits that a password must contain
password_policy lower (default: 1): Number of lower-case characters that a password must contain
password_policy upper (default: 1): Number of upper-case characters that a password must contain
password_policy symbol (default: 0): Number of symbols that a password must contain
password_policy min_password_length (default: 12): Minimum password length
password_policy max_password_length (default: 128): Maximum password length
password_policy inactive_user_expire_enable (default: false): Flag that enables the policy to disable inactive users
password_policy inactive_user_lifetime (default: 60): Required inactive days to force user as disabled
password_policy admin_can_expire (default: false): Controls whether admin accounts can expire; leave it set to false to prevent admin accounts from expiring
password_policy password_expire_enable (default: false): Enables the password expiration feature
password_policy password_lifetime (default: 90): Required days to force password change

For example:
Run the following commands in the CLI:

conf.user configure password_policy maximum_attempts 5


conf.user configure password_policy lock_time 10
conf.user configure password_policy history 2
conf.user configure password_policy digit 2
conf.user configure password_policy lower 2
conf.user configure password_policy upper 2
conf.user configure password_policy symbol 2
conf.user configure password_policy min_password_length 7
conf.user configure password_policy max_password_length 10
conf.user configure password_policy inactive_user_expire_enable true
conf.user configure password_policy inactive_user_lifetime 10
conf.user configure password_policy admin_can_expire true
conf.user configure password_policy password_expire_enable true
conf.user configure password_policy password_lifetime 30

Each command returns a result similar to the following:

{
  "msg": "success",
  "outs": [
    {
      "result": false
    }
  ]
}

In the image below, the tooltip has not changed. New passwords will be checked against the old
requirements until you restart the web server to apply the changes (see Step 3 below).
Note: The new requirements should be: Uppercase: 2, Lowercase: 2, Symbols: 2, Digits: 2, Min
length: 7, History: 2

3. Restart puma in order to apply your changes:

service puma restart

Note: Your changes will apply for new passwords.



Active Directory users


This topic describes how to configure the Active Directory users for login.
Existing Active Directory users can be configured for login in addition to local users. Active Directory
permissions are defined based on the user group. When you set the primary group for a user, that
user is excluded from the corresponding group membership in the Active Directory. This is because
the Active Directory does not support primary group functionality. It does not query the primary group
attribute when building the group membership of a user, therefore the primary group on the Active
Directory server is not visible in N2OS.
Prerequisites
To configure Active Directory users, you need the following:
• Domain name (aka pre-Windows 2000 name), referred to as <domainname>
• Domain distinguished name, referred to as <domainDN>
• One or more domain controller IP addresses, referred to as <domaincontrollerip>

Configuring Active Directory integration using the Web UI


This topic describes how to configure Active Directory integration from the Web UI.

1. Go to Administration > Settings > Users, and select the Active Directory tab.
2. Enter your Username and Password in the appropriate fields.
Note: In order to connect and integrate into the Active Directory, users must belong to at least one
group with reading permission on the server. Administrator privileges are not required.
3. Specify a Domain Controller IP/Hostname.
Check to see if the Active Directory service is running on port 389 (LDAP) or on port 636 (LDAPS)
using the Check Connection button and the LDAPS selector.
By default, the server's SSL certificate is not verified. Enable it using the Verify SSL selector.
Should you need to add another Domain Controller IP, click the Add host button.
4. Specify the domain details in the Domain name and Distinguished name fields.
5. Optionally, configure the Connection timeout.
6. Click the Save button to save the configuration, which also validates the data.
If there are errors, they will display beside the Status field.
The Delete configuration button allows you to delete the Active Directory configuration by
removing all of its variables.
Note: This action is not recoverable.

Import Active Directory groups


This topic describes how to import an existing group from an Active Directory infrastructure. This step is
required to allow Active Directory users to log into the system.
1. Go to Administration > Settings > Users to display a list of users, then select the Groups tab.

2. Click the Import from Active Directory button.


3. From the Import Groups from Active Directory screen, specify a domain administrative
credential.
a. In the Username field, type the Active Directory user logon name in <domainname>
\<domainusername> format.
b. Enter your password in the Password field.

4. Click the Retrieve groups button to retrieve the list of groups. You can also click the Filter by
group name checkbox, and type the name of the group that you want to retrieve.
5. Now filter and select the desired groups to import. If you also want to import related groups (e.g.
parent groups), click the checkbox near the Import button.

6. Click the Import button once you have selected the groups to import. You will be redirected to the
group list.

7. Edit the group permissions, as needed. Active Directory users that belong to this group are
automatically assigned to it and inherit all permissions of the configured group.
8. After configuring Active Directory group permissions, users can log into the system with the
<domainname>\<domainusername> username and their current domain password in the login
screen.
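For example, a hypothetical user jsmith in a domain named EXAMPLE would log in with the
username EXAMPLE\jsmith and the current domain password.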

LDAP users
This topic describes how to configure the Lightweight Directory Access Protocol (LDAP) with the
Nozomi Networks solution.
Existing LDAP users can be configured for login, in addition to local users. LDAP permissions are
defined based on the user group.
Prerequisites for use

To configure LDAP users, you need the following:


• Domain name (i.e., pre-Windows 2000 name), referred to as <domainname>
• Domain distinguished name, referred to as <domainDN>
• One or more domain controller IP addresses, referred to as <domaincontrollerip>
Prerequisites in the user system
• There are no prerequisites in the user system to use LDAP.
Supported LDAP versions
• v2
• v3

Configuring LDAP integration using the Web UI


This topic describes how to configure LDAP integration from the Web UI.
1. Go to Administration > Settings > Users, and select the LDAP tab.

2. Enter a Username and a Password in the appropriate fields. The Username in this step
requires an admin user with full LDAP server permission. The Username for the LDAP
server should be a distinguished name (DN) that follows the LDAP standard. An example is:
cn=username,cn=group,dc=nozominetworks,dc=com.
3. Specify a Domain Controller IP/Hostname. Check to see if the LDAP service is running on
port 389 or on port 636 (LDAPS) using the Check Connection button and the LDAPS selector.
By default, the server's SSL certificate is not verified. Enable verification using the Verify SSL
selector.
4. (Optional) Click the Add host button to add another Domain Controller IP.
5. Enter a distinguished name for the user in the Distinguished name field. An example is:
dc=nozominetworks,dc=com.
6. Optionally, configure the Connection timeout (the number of minutes before the connection
times out).
7. Click the Save button to save the configuration, which also validates the data. If there are errors,
they will display beside the Status field.
8. (Optional) The Delete configuration button allows you to delete the LDAP configuration by
removing all of its variables.
Note: The Delete configuration action is not recoverable.

Importing LDAP groups


This topic describes how to import an existing group from an LDAP infrastructure. This step is required
to allow LDAP users to log into the system.
1. Go to Administration > Settings > Users to display a list of users, then select the Groups tab.

2. Click the Import from LDAP button.

3. From the Import groups from LDAP server screen, specify a domain administrative credential.
4. Enter a Username and a Password in the appropriate fields. The Username in this step
requires an admin user with full LDAP server permission. The Username for the LDAP
server should be a distinguished name (DN) that follows the LDAP standard. An example is:
cn=username,cn=group,dc=nozominetworks,dc=com.

5. Click the Retrieve groups button to retrieve the list of groups. You can also click the Filter by
group name checkbox, and type the name of the group that you want to retrieve.
6. Now, filter and select the desired groups to import. Click the Import Configuration button once you
have selected the groups to import. You will be redirected to the Users management screen with
the groups listed.

7. Edit the group permissions, as needed. LDAP users that belong to this group are automatically
assigned to it and inherit all permissions of the configured group.
8. After configuring LDAP group permissions, users can log into the system with the username and
their current domain password in the login screen.

SAML integration
This topic describes the Nozomi Networks Security Assertion Markup Language (SAML) integration.
Nozomi Networks supports SAML Single Sign-On (SSO) authentication.
Note: Nozomi Networks integration requires that your Identity Provider (IdP) be compatible with SAML
2.0.
Note: The SAML configuration process is often error-prone. This topic assumes familiarity with: (1)
SAML protocol, (2) your IdP software, and (3) the exact details of your specific IdP implementation.
Prerequisites
Before configuring SAML integration, define a new application in your IdP. This application consists of:
• Assertion Consumer Service (ACS) URL for Nozomi Networks. An ACS specifies the /auth path
such as https://10.0.1.10/saml/auth.
• Issuer URL for your IdP, which specifies the /saml/metadata path (for example,
https://10.0.1.10/saml/metadata). The nature of this value depends on your IdP.
• Metadata XML file that describes your IdP’s SAML parameters. Before configuring your Nozomi
Networks Guardian or CMC, download the file from your IdP vendor and save it to a location
accessible to Nozomi Networks.

Configuring the SAML integration


1. Go to Administration > Settings > Users and click the SAML tab.

2. In the Nozomi URL field, enter the URL for your Nozomi Networks instance.
Note: The form of this URL determines how authentication is processed. For example, if the value
that you enter specifies HTTPS, Nozomi Networks uses the HTTPS protocol when processing login
requests.
3. Click Load the Metadata XML file, and select the metadata file provided by your IdP. This file tells
Guardian how to configure SAML parameters for use with your specific IdP solution.
4. In the SAML Role Attribute Key field, enter a string that will be used to map role names between
Guardian and your IdP. The value in this field is used to compare groups defined in Guardian with
those defined in your IdP. The nature of this value depends on your IdP. (For example, if you are
using Microsoft Office 365 as your IdP, the value might be:
http://schemas.microsoft.com/ws/2008/06/identity/claims/role)
5. Click Save.
6. On the Guardian login page, click Single Sign On to test the integration, using credentials known
by your IdP.
Note: For SAML to work properly, groups that match SAML roles must exist in the system. Groups are
found using the role name. For example, if the SAML role attribute specifies an Operator role, the IdP
looks for the Operator group when authorizing an authenticating user.
Once configured, the login page displays a new Single Sign On button:

If authentication fails, Nozomi writes errors to either:


• The Audit screen (see Audit on page 206 for additional information), or
• The log file at /data/log/n2os/production.log.
If it becomes necessary, click Delete configuration to entirely remove your current SAML integration.
Note: We advise deletion only in rare cases when your authentication method changes.

Additional SAML configuration


Typically, with SAML authentication, replies are sent back from the same host that originally received
the request. Occasionally, SAML requests are chained between different IdPs, and replies may come
from a different host. By default, the Web UI content security rules block these types of replies.
• This behavior can be overridden using the csp form-action-urls configuration key.
• To accept replies from an IdP SSO target URL that differs from the one specified in the SAML
metadata, issue the following configuration rule conf.user configure csp form-action-
urls <additional_url> in CLI.
• If you need to specify more than one URL, separate them using spaces.
• After this change, run the service webserver stop command, in a shell console to apply it.
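For example, to accept SAML replies from two additional IdP URLs (the URLs below are placeholders
for your own IdP endpoints), run the following in the CLI and then restart the web server from a shell
console:

conf.user configure csp form-action-urls https://idp.example.com/sso https://idp2.example.com/sso
service webserver stop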

SAML clock skew


Occasionally, the IdP and Guardian system times may differ. By default, the system accepts requests
with up to 60 seconds difference.
• This behavior can be overridden using the saml clock_drift configuration key.
• To change the value, issue the following configuration line conf.user configure saml
clock_drift <allowed_seconds> in CLI.
• After this change, run the service webserver stop command in a shell console to apply it.
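For example, to allow up to two minutes of clock difference between the IdP and Guardian (the
120-second value is only an example), run:

conf.user configure saml clock_drift 120
service webserver stop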

Known SAML limitation


The SAML logout protocol is not supported.

OpenAPI keys
This topic describes how to manage OpenAPI keys for local accounts.
Use OpenAPI keys for bearer token authentication instead of basic authentication (username and
password). For additional information about the OpenAPI refer to the SDK User Manual.
Note: OpenAPI keys can only be assigned to local users.
OpenAPI keys for the current user can be accessed through <Username> > Other actions > Edit
OpenAPI keys (only available to local users).

Creating a new key


Click Generate to create a new OpenAPI key for the current user. Optionally choose a description for
the key and an allowed IP list in CIDR notation (e.g. 10.0.0.0/8,192.168.2.0/24):

After Generate is clicked, a dialog containing the automatically generated key name and key token is
shown:

Important: store the key name and key token in a safe place; the key token will not be shown again.

Editing keys
Key description and allowed IP list can be edited by clicking the pen icon:

Revoking and reinstating keys


Keys can be revoked by clicking the trash icon:

Revoked key names are shown with strikethrough text. It is possible to reinstate a revoked key by
clicking the plus icon.

Reviewing all keys


Administrators can also review, edit, revoke and reinstate the OpenAPI keys of all users from the users
management page:
Chapter 4
Basics

Topics:
• Environment
• Asset
• Node
• Session
• Link
• Variable
• Vulnerability
• Query
• Protocol
• Incident & alert
• Trace
• Charts
• Tables
• Navigation through objects

This chapter describes the basic concepts of the Nozomi Networks solution, as well as some
graphical interface controls. You should have a solid understanding of these concepts in order to
understand how to properly use and configure the Nozomi Networks Operating System (N2OS)
solution.

Environment
The Nozomi Networks environment is a real-time representation of the network monitored by
Guardian that provides a synthetic view of all assets and network nodes, and the communication
between them.

Assets
Assets displays all assets, intended as single discrete endpoints. In assets, you can visualize, find,
and drill down on asset information, such as hardware and software versions. Go to Environment >
Assets to access assets.
See Assets on page 78 for more details.

Network
Network includes generic network information unrelated to the Supervisory Control and Data
Acquisition (SCADA) side of some protocols, such as a list of nodes, and the connection between
nodes and the topology. Go to Network to access the network page.
See Network on page 86 for more details.

Process
Process includes SCADA specific information, such as the SCADA producers list, producer variables
with their history of values and related information, along with an analysis of the variable values and
related statistics.
See Process on page 111 for more details.

Asset
An asset in the environment represents an actor in the network communication that can range from a
simple personal computer to an OT device, depending on the nodes and components involved. Go to
Environment > Assets > List to see a list of assets and go to Environment > Assets > Diagram to
see a graphical list of assets, with assets aggregated at different levels.

Figure 5: Example of list of assets

Node
A node in the environment represents an actor in network communication. Depending on the protocols
involved, a node can range from a simple personal computer to an RTU or a PLC. Go to Network >
Nodes to access a list of nodes in the environment and go to Network > Graph to view a graphical list
of nodes in the environment.
When a node is involved in communication using SCADA protocols, it can be a consumer or a
producer. SCADA producers can be analyzed in detail by going to Process.

Figure 6: Example of a list of network nodes

Session
A session is a semi-permanent interactive information interchange between two or more
communicating nodes. Go to Network > Sessions to access sessions.
A session is set up or established at a certain point in time, and then turned down at some later point.
An established communication session may involve more than one message in each direction.
The Nozomi Networks solution displays the session status based on the transport protocol. For
example, a TCP session can be in SYN or SYN-ACK status before being OPEN.
When a session is closed, it is retained and can be queried for subsequent analysis.

Figure 7: Example of a list of network sessions

Link
A link in the environment represents communication between two nodes, using a specific protocol.
Go to Network > Link to access a list of links and go to Network > Graph to access a graphical list of
links.

Figure 8: Example: list of network links

Variable
A variable is a symbolic name for process data about a specific node. A variable has properties that are
described in detail in Process variables on page 111. For example, the RTU ID and name properties
have specific values depending on the protocol.
Variables are extracted based on passive detection through Nozomi Networks support for OT/IoT
protocols. The close relationship between variables and process is relevant, so much so that the
Variables page in the Web UI is titled Process.

Vulnerability
A vulnerability is a weakness that allows attackers to reduce a system's information assurance. Go to
Analysis > Vulnerabilities to access vulnerabilities.
By constantly analyzing industrial network assets against a state-of-the-art repository of ICS
vulnerabilities, the Nozomi Networks solution permits operators to stay on top of device vulnerabilities,
updates, and patch requirements.

Figure 9: Vulnerabilities

Query
The Nozomi Networks Query Language (N2QL) syntax is inspired by the most common Linux and
Unix terminal scripting languages. A query is a concatenation of single commands separated by the |
symbol in which the output of a command is the input of the next command. This allows you to create
complex data processing by composing several simple operations. Go to Analysis > Queries to create
new queries, or to access saved queries.
You can query only the query sources corresponding to the sections enabled for the user. See
Managing user groups on page 37 for information on managing groups.
The table shows the appropriate permission needed to query the sources.

Source Permission
alerts Alerts
assets Assets
captured_urls Captured urls
link_events Link events
sessions Sessions
report_files Reports
variables Process
variable_history Process
trace_requests Trace requests
sessions_history Sessions
health_log Health
packet_rules Threat Intelligence
yara_rules Threat Intelligence
stix_indicators Threat Intelligence

The following example is a query that lists all nodes ordered by received.bytes (in descending order):

nodes | sort received.bytes desc

Go to Query - User interface reference for information on the graphical user interface and how you can
create/edit queries.
Go to Query - complete reference for additional information on commands, data sources, and
examples of the query language.
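As a further illustration only (the head command and the other available commands are documented
in the complete query reference), a query such as the following would list the ten nodes that received
the most bytes:

nodes | sort received.bytes desc | head 10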

Protocol
In the Nozomi Networks environment, links communicate using one or more protocols. A protocol
is recognized by the system simply by the transport layer and the port, or by a deep inspection of its
application layer packets. Go to Process > Protocol Connections to access protocols.

SCADA protocol mapping


SCADA protocols are recognized by deep packet inspection, and for each, mapping brings protocol-
specific concepts to the more generic and flexible environment variable model.
Following are examples of SCADA protocol mapping:

Table 7: SCADA protocol mapping examples

• Modbus: RTU ID is the Unit identifier; Name is (r|dr|c|di)<register address>
• IEC 104: RTU ID is the Common address; Name is <ioa>-<high byte>-<low byte>
• Siemens S7 (Timer or Counter area): RTU ID is fixed to 1; Name is (C|T)<address>
• Siemens S7 (DB or DI area): RTU ID is fixed to 1; Name is (DB|DI)<db number>.<type>_<byte position>.<bitposition>
• Siemens S7 (other areas): RTU ID is fixed to 1; Name is (P|I|Q|M|L).<type>_<byte position>.<bitposition>
• Beckhoff ADS: RTU ID is <AMSNetId Target><AMSPort Target>; Name is <Index Group>/<Index Offset>
• and more...
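As an illustrative reading of the mapping above (the specific values are examples), a Modbus register
at address 100 read from unit identifier 5 would appear with RTU ID 5 and a variable name such as
r100, where the prefix depends on the register class as shown in the table.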

Incident & alert


An alert represents an event of interest in the observed system. There are various types of alerts. For
example, alerts may derive from anomaly-based learning, assertions, or protocol validation. See Alerts
Dictionary on page 233 for a complete list of alerts.
Go to the Alerts menu from the main Web UI to access alerts and incidents.
Note: When an alert is raised, a trace request is issued.
An incident is a summarized view of alerts. When multiple alerts describe different aspects of the
same situation, the Nozomi Networks solution's powerful correlation engine groups them and provides a
simple and clear view of what is happening in the monitored system.
See Incidents Dictionary on page 242 for a complete list of incidents.

Figure 10: List of alerts

Risk value may be weighted by such factors as the learning state of the involved nodes (known learned
nodes have reduced risk and unknown or unlearned nodes have increased risk) and the reputation of
the IP address (known "bad" addresses carry increased risk). Adjustments are made to the final score
from a starting default base value.

Risk levels
A high risk alert has a value that is more than eight (>8). A high risk alert shows a red indicator.

Figure 11: High risk alert



A medium risk alert has a value that is less than, or equal to, eight (<= 8), but more than four (>4). A
medium risk alert shows an orange indicator.

Figure 12: Medium risk alert

A low risk alert has a value that is less than, or equal to, four (<=4). A low risk alert shows a green
indicator.

Figure 13: Low risk alert

Trace
A trace is a sequence of processed network packets that can be downloaded in a Packet Capture
(PCAP) file for subsequent analysis. Go to Network to access trace capabilities.

The Nozomi Networks solution shows the icon from which you can download available traces. A
trace is generated by an alert or by issuing a trace request from the icon. Find this icon in sections
related to the trace feature. Non-admin users need trace permission in order to issue a trace.
See Configuring trace on page 422 for a detailed explanation of trace configurations.
A continuous trace is a collection of network packets that are kept for future download from the
moment the trace is requested until the request is paused. Such collections can be requested
through the Web UI.
See Continuous trace and other trace actions on page 209 for a detailed explanation of continuous
traces.
Examples:
• This example shows alerts with trace. To download the PCAP file, click the three dots, then click the
cloud icon.

Figure 14: Alerts with trace

• This example shows how to issue a manual trace request from the Links section by clicking the bolt
icon.

Figure 15: Manual trace request from Links section

• This example shows how to send a trace request from the graph view.

Figure 16: Trace request from graph view

Charts
The Nozomi Networks solution charts show different types of information, from network traffic to
the history of variable values. Two main chart controls are area charts and history charts. Go to
Administration > System > Network Interfaces to access area charts for network throughput.

Area charts
Area charts show network traffic.

A Chart title
B Buttons to toggle the chart's live update on and off
C Time window control; click to open the historic view
D Chart unit of measure
E Legend, in this case the entries in the legend represent traffic categories;
click each entry to show or hide the associated data in the chart

History charts
History charts display the history of a variable's values.

A Buttons to detach the chart, and export the data to an Excel or CSV file
B Time window control
C Unit of measure
D Navigator: interact with it using the mouse; drag it to change the visibility of
the time window, enlarge or shrink it to change the width of the time window

Tables
Tables are used to help organize and provide information throughout the Nozomi Networks solution,
including lists of nodes and links.

Figure 17: Table with a filter and sorting applied

A Filtering control; while typing in a row, the table updates according to the
filter
B Sorting control; sorts information in the table (click twice on the same
heading to change the sort direction); press the CTRL key while clicking to
activate multiple column sorting
C Reset buttons are in two separate sections to independently remove the
filters and sorting from the table
D Update the data in the table; click Live to periodically update the table
content
E Use this menu to hide or show columns (as a space saver, tables may have
hidden columns by default)

Navigation through objects

The navigation icon allows you to go directly to related objects.


Two examples:

Figure 18: Navigation options for a node

Figure 19: Navigation options for a link


Chapter 5
User Interface Reference

Topics:
• Supported web browsers
• Navigation bar
• Dashboards
• Alerts
• Assets
• Network
• Process
• Queries
• Reports
• Time machine
• Vulnerabilities
• Settings
• System
• Continuous trace and other trace actions

In this chapter we describe every aspect of the graphical user interface. For each view of the GUI we
attach a screenshot with a reference explaining the meaning and the behavior of each interface
control.

Supported web browsers


The Nozomi Networks solution supports recent versions of the following web browsers:
• Google Chrome
• Chromium
• Safari (for macOS)
• Firefox
• Microsoft Edge
Note: We do not support outdated web browsers.

Navigation bar
This topic describes the Guardian sensor navigation bar and how to access menu items.

Navigate to these sections of the Web UI:

A Select the dropdown ( ) icon for access to these features:

Sensors: Allows you to see, monitor, configure and upgrade Nozomi sensors (i.e., CMCs, Guardians
and Remote Collectors, and in future more)
Alerts: Provides a list of alerts, showing the anomalies and threats that occurred in the monitored
environment
Asset view: Provides views of the monitored environment, including assets, network elements and
variables
Queries: Provides tools for in-depth analysis of the monitored network, including queries, reports and
vulnerability assessment
Smart Polling: (If purchased) Provides the ability to configure and monitor the Smart Polling add-on
Arc: (If purchased) Host based sensor for endpoints

B Select the gear ( ) icon for access to Administration > Settings and System

Figure 20: Administration menu

C Select the profile ( ) icon for access to the following actions:

Logout
Other actions:
• Clear personal settings - Clear all of the personal settings stored in the browser local storage
• Continuous traces - Request a trace that has only the disk size constraint
• Request custom trace - Request a trace specifying a custom packet filter
• Show requested traces - Show the trace requests executed by the current user
• Enable experimental features - Warning that you are activating features that are still under
development
Zone Filters: Apply zone filters so only specific zones display

D Utility navigation bar that includes information on licensing, and symbols for any of the following
add-ons (if purchased): TI (Threat Intelligence), AI (Asset Intelligence), SP (Smart Polling), and Arc,
as follows:

Collapse button ( ): Click to reduce nav bar height
Monitoring mode button ( ): Click to disable auto logout
Time machine status: If a time machine snapshot is currently loaded, its timestamp is shown here.
Otherwise, the text LIVE displays.
Host: Server hostname
Site: Server location
N2OS version: Release of the Nozomi Networks Operating System (N2OS) that is in use
Time: NTP (Network Time Protocol) offset
Disk: Statistics about used and available space
Licensee: Entity to whom license is granted
Updates: Version information for TI (Threat Intelligence) and AI (Asset Intelligence), SP (Smart
Polling), and Arc, visible only if purchased
Language: Switch between English, French, German, Italian, Spanish, Vietnamese, traditional
Chinese, simplified Chinese, Japanese, and Korean

E Toggle between the classic Web UI and the new Web UI

Note: You may see the following warning messages:


• HIGH LOAD - Notifies you that the sensor is currently receiving more traffic than it can handle, and it
is protecting itself by discarding some information.
• LIMITS REACHED - Notifies you that the machine license has reached its limit. When this occurs,
the system stops analyzing new network elements (existing ones are still analyzed), and you may
want to consider upgrading your license.
• MIGRATION ERROR: The last upgrade process encountered migration problems. The health log
reports a more detailed explanation of the issue.
• Busy - Machine is momentarily not responding to browser requests, which does not imply data
analysis loss or malfunction. The machine may be busy processing other tasks, may be rebooting,
or may be experiencing internet connectivity problems.
• Slow connection - Machine is momentarily not responding to browser requests, which does not
imply data analysis loss or malfunction. Check your current connection to the sensor.

Dashboards
This topic describes the dashboards of the Nozomi Networks solution.
The Nozomi Networks solution offers multiple configurable dashboards that include widgets, which
can be configured. For information on configuring dashboards, go to Dashboard configuration on page
69.
The default dashboard displays when you open the Nozomi Networks solution.
Useful controls for all dashboards include:
• On the left, with the time selector component, choose the time window for the dashboard data. All
widgets are influenced by the time selector.
• On the right, with the dropdown menu and a button with a wrench icon, select a dashboard and go
directly to the dashboard configuration page.

Default dashboard

From the default dashboard, you can view widgets that provide information about your network.

Table 8: Default widgets

Environment information: This provides a high level view of your network from the Nozomi
Networks solution perspective. Click each section (except protocols) for additional details.
Total throughput: Live view of traffic volume
Asset overview: Assets, by level, as per IEC 62443
Alert flow over time: Alert risk charted over time
Situational awareness: List of evidences, by severity
Latest alerts: Latest alerts (the most recent being first)
Failed assertions: List of failed assertions

Note: Click the button (where available) for additional details.



Dashboard configuration
This topic describes how to import existing dashboards, create new dashboards and modify an existing
dashboard.
The default dashboard displays when you open the Nozomi Networks solution. For additional
information on the default dashboard, go to Dashboards on page 67.

Initial dashboard configuration


Go to Administration > Settings > Dashboards to configure the dashboard.
Note: Users must have admin permission to configure the dashboard.

To make changes to configure the dashboard, select from the following actions shown at the top right
of the page:
• Import
• New dashboard
• Choose a dashboard

Import: Click Import to choose a dashboard configuration that was previously exported to your
computer.
New Dashboard...: Click New Dashboard... to choose a built-in template to begin configuring the
dashboard. Note: Do not specify a template if you want to start from scratch.
Choose a Dashboard: Click Choose a Dashboard to select the dashboard to modify. You can
toggle between the predefined overview dashboard and your custom ones.

Main dashboard actions


Once you've selected from the main actions, use the Dashboard configuration taskbar to perform
additional actions.

+ Add row ( ): Click + Add row to add a new row to the dashboard.
History ( ): Click History to restore a previously saved version of the dashboard.
Delete ( ): Click Delete to remove the dashboard from your dashboard list.
Edit ( ): Click the Edit button to rename the dashboard configuration and customize the dashboard
visibility. At the Configure dashboard details popup, change the Name and/or Group visibility, as
needed.
Discard ( ): Click Discard to restore the previously saved version of the dashboard.
Clone ( ): Click Clone to create a new dashboard based on a copy of the selected version.
Export ( ): Click Export to save the configured dashboard to your local computer.
Save ( ): Click Save to save the dashboard.

Row actions
From the Dashboard configuration taskbar, execute actions on a row.
Note: By default a new widget is added after the existing widgets.

Move row up/down Click the up or down buttons to move the row up or down in the
dashboard.
Delete row Click Delete row to remove the row from the dashboard.

Widget actions
From the Dashboard configuration taskbar, execute actions on a widget.

Increase/decrease width Click to increase or decrease the width of the widget.


Increase/decrease height Click to increase or decrease the height of the widget.
Adjust height in row Click to adjust the height of all of the widgets in the same row.
Move widget before/after Click to move the widget in the row one step left or one step
right.
Move widget up/down Click to move the widget to the previous or to the next row.
Delete widget Click to delete the widget from the row.

Alerts
This topic describes the alerts produced by the Nozomi Networks solution.
An alert represents an event of interest in the observed system.
Prerequisites
• Users must belong to a group with admin permission enabled to perform actions on alerts, such as
acknowledgments and removals.
• Non-admin users can access alerts only if at least one of the groups that they belong to has alerts
permission enabled.
1. Go to the Alerts page.
2. From the upper right hand of the page, select either standard or expert mode, depending on the
desired level of detail.
Standard mode provides an overview of the latest anomalies. Expert mode provides a detailed list
of detected anomalies, and allows for detailed filtering, sorting, and analysis information.
Note: In either mode, users can see a list of individual alerts, and can group alerts as incidents. For
additional information, see Incident & alert on page 58.

Figure 21: Standard or expert mode


3. Click the Group by Incident toggle to group the alerts by incident and sort them, as needed.

Note: In contrast to standard mode, expert mode provides a comprehensive table layout, with
details on the alerts and incidents listed, including addresses, labels, along with roles of the involved
nodes, zones, protocol, and ports used in the involved transactions, and more.

Figure 22: Standard mode

A Risk Risk associated with each alert or incident


B Time Time associated with each event
C Name Name category of the event

D Description Detailed explanation of the event


E Analysis Upon selecting a row, see a more in-depth analysis of the
alert

Figure 23: Expert mode

Optionally, in expert mode, click the Count by field button to select a data field on which to group
and count the alerts and incidents.

Figure 24: Incidents grouped by MAC Srv


4. Depending on the mode selected, use one of the following methods to see additional details about
the alert:
a. In standard mode, click the Show details button in the right pane.
b. In expert mode, click alert ID for a specific alert.
The Alerts detail popup displays in either mode.
The Alerts popup provides a detailed overview of the alert, including links to the involved nodes
and, for incidents, the list of corresponding alerts. In addition, several tabs are available, including:
• Network graph at the time of the event.
• Audit of the operations performed on the alert (such as ack or close).
• Analysis tools such as the relationship within the MITRE ATT&CK for ICS and Enterprise
knowledge bases.
• Playbook content if available.

Figure 25: Alerts detail popup


5. Display the Alert operations popup by doing one of the following:
• In standard mode, click the ellipsis (triple-dot icon) at the top right in the right pane.
• In expert mode, click the ellipsis (triple-dot icon) in the first cell of the alert row.
• In the Alert details popup, click the icon at the top left of the popup.
The Alert operations popup displays.
Note: The dropdown menu may change depending on the alert status.

Figure 26: Alert operations popup

The Alert operations popup provides access to the following operations:


• Configure alert: Allows users to easily introduce a new alert rule about future events similar to
the current one.
• Ack/Unack: Allows users to mark the alert or incident as acknowledged, or to restore its status
as non-acknowledged.
• Close: Allows users to mark the alert as closed. The Alert closing popup displays, allowing
users to choose the type of learning operation to perform.
• Download trace: Allows users to download the trace, if available. The trace contains the packet
that triggered the alert, along with an extract of the same session before and after that packet.
Traces might be unavailable if the sensor is under stress. For detections that require multiple
packets, such as Multiple login failures, the trace might not contain enough traffic to reproduce
the alert. Incidents do not have an associated trace.
• Edit note: Allows users to add a customized arbitrary note to the alert or incident.
• Time machine diff: Opens the Time Machine difference screen corresponding to the time of
the alert or incident.
• Navigate: Provides access to correlated objects, such as the involved nodes or links, or the
vulnerabilities of the involved assets.

Figure 27: Alerts closing popup


The Alerts closing popup allows users to select a reason for the alert or incident closing, and to
specify the learning process for the corresponding objects.
Two predefined reasons for closing alerts are:
• This is a change: If the alert was caused by a legitimate change in the network configuration, such
as a change in the fixed-address of an asset after valid maintenance, the alert can be closed as
a change with instructions to Guardian to learn the change. The new address is learned, so no additional alerts are raised about the same network configuration change.
• This is an incident: If the cause of the alert is a configuration error, an attack, a malfunctioning
device, or other security incident, the change is not learned as part of the environment baseline.
When closing an alert in this way, the IDS is instructed to delete the corresponding objects. For
example, a new node entering the network for the first time causes a VI:NEW-NODE alert. If the alert is closed as an incident, the reference to the new node is deleted. The VI:NEW-NODE alert is raised
again in subsequent communication involving the same node.
In addition to the predefined reasons for closing alerts, users may write a custom reason for closing an
alert in the Custom reason field. This allows users to enter an arbitrary string as the closing reason,
with a request to apply one of the two described behaviors.

Figure 28: Closing alert for custom reason with comment

Regardless of the reason for closing an alert, you can add a comment that appears in the alert audit log.

Figure 29: Audit alert operations

Modifying an alert playbook from the Playbook tab


You can modify a playbook associated with an alert from the Playbook tab in the Alerts popup.
1. At the Web UI, select Alerts. The Alerts screen appears.

Figure 30: Alerts popup


2. Find the alert on which to modify the assigned playbook.
Note: To more easily find an alert associated with a given playbook, first find the alert rule
associated with the playbook, then filter the alerts using the same rule, such as Type ID, Protocol,
ip src, or other field used in the alert rule.
3. Select the alert, then select the Playbook tab at the bottom of the screen.

Figure 31: Playbook tab


4. Click Edit to modify the playbook, as needed.
5. Save your changes.
Note: Modifications from the Playbook tab affect only the playbook for that specific alert. The
playbook template from which the alert playbook was generated remains unchanged, as do any other
alert playbooks generated from the same playbook template.

Alert bulk actions


In expert mode, users can perform bulk actions on alerts. The large ellipsis (triple-dot icon) in the first
cell of the header opens a popup to perform operations such as closing or acknowledging alerts.

Figure 32: Alerts bulk actions in expert mode



You can determine the target of the bulk operation either by selecting the relevant alerts, or by using
the header of the table to define a filter and then selecting a by table filter operation. These operations
are applied to all alerts or incidents matching the filter, even those not shown in the table. Users can apply these operations to a large set of alerts; the larger the set, the longer the operation takes to complete.

Assets
This topic describes Assets, which displays assets in the local network environment and their
associated details. It includes information on actions that can be performed on assets.

Introduction
Assets displays assets in the environment. An asset in the environment represents an actor in the
network communication and, depending on the nodes and components involved, it ranges from a
simple personal computer to an OT device.
To access Assets from the Web UI, go to Environment > Assets. List tab lists the assets in the
environment and includes details about them. Diagram tab uses the Purdue model format to display
the assets (i.e., assets are arranged in separate rows, according to their level).

List tab
The List tab screen displays a list of assets in table format.

Figure 33: List view of assets

1. Choose the columns to display in the table by selecting them from the # Selected dropdown menu
to the right of the table. Some of the available table column selections and definitions are described
below.

Note: Click the column heading or the arrow to the right of it to sort the assets in ascending or descending order. Click the x button to remove the sorting.

Actions Actions allowed on assets include: configuring an asset, creating a PDF report, and navigating to another asset, link or node. Click the Actions column ellipsis (three dots) for actions that can be performed on: a single asset, all assets, or no assets. You can also invert the selection in the current page.
Capture device Type of sensor used to perform packet captures for troubleshooting and security reviews
Name Asset name
Type Asset type, such as router, printer, scanner, OT device, controller, PLC, computer, and camera
OS/Firmware Type of operating system/firmware, including MAC OS, iOS, and Windows
IP Asset IP address
VLAN Virtual local area network in which the asset is segmented
MAC address Asset MAC address
MAC vendor Asset MAC vendor
Roles Select from an available asset role:
• consumer
• db_server
• dhcp_server
• dns_server
• other
• producer
• terminal
• time_server
• voip_server
• web_server
Level Asset level according to the Purdue model
Protocols Protocols used by the links associated with the asset
Zone Network zone to which the asset belongs
Created at Provides the time that the link to the asset was created, which can be 1m (minute), 15m (minutes), 1h (hour), 3h (hours), 12h (hours), 1d (day) or a custom date and time, such as 2017-05-01 16:50:09.155
Last activity Provides the time of the last activity on the asset, including never, 1m (minute), 15m (minutes), 1h (hour), 3h (hours), 12h (hours), 1d (day) or a custom date and time, such as 2017-05-01 16:50:09.155
AI enriched Specifies if the asset has been enriched by Asset Intelligence (AI)
Custom Displays the value of custom fields added to the asset
# Selected Refers to the number of columns selected for display
2. (Optional) Perform an action on an asset from the Actions column. Go to Actions on assets for
details.
3. Select an asset to display its details. The Asset details popup displays asset details in the
Overview tab. Select additional details to display by clicking from the following tabs: Sessions,
Alerts, Software, Vulnerabilities, Variables.

Figure 34: Asset detail popup



Table 9: Asset detail tabs

Overview The top part of the screen contains generic data. Hover your mouse over the information ( ) icon to display the source, granularity and confidence of the corresponding piece of data. Data includes:
• IP address
• Roles
• Type
• MAC address
• MAC vendor
The bottom part of the screen contains an in-depth analysis of the asset, including its:
• network stats
• network location
• properties
• protocols
• learning status
• security with associated vulnerabilities
• hardware components
Sessions List of associated active sessions
Alerts List of high and medium alerts
Software List of installed software
Vulnerabilities List of high and medium vulnerabilities
Variables List of variables
4. At the Overview tab, hover your mouse over the information ( ) icon for details about: source,
granularity, confidence.

Figure 35: Additional information

We define source, granularity, confidence as follows:

Source Origin of the information:
• manual: Information that is manually added from the configuration
• imported data: Imported information
• passive detection: Information from Deep Packet Inspection
• asset-kb: Information from Asset Intelligence
• smart-polling: Information from Smart Polling
Granularity Level of detail of the information:
• manual-or-import: Information manually added or imported
• complete: Detailed information that is extracted
• partial: Detailed, but not complete information
• generic: A family/generic value is found, but is not detailed
• unknown
Confidence Level of confidence in the published information:
• manual-or-import: Information manually added or imported, which has the highest confidence
• high
• good
• low
• unknown

Actions on assets
From the Assets table, users may perform these actions on the assets: configuring an asset, creating
a PDF report, or navigating elsewhere.

To perform an action on an asset, click the checkbox next to the asset or merely highlight the asset in
the list.
Configuring an asset

Figure 36: Configure an asset

Perform the following steps to configure an asset:


1. Click the Configure asset ( ) icon. The Configure asset popup displays with the IP address of
the asset.

2. Click the Type field and from the dropdown menu, select the type of asset.
3. Save your selection. This information appears in the Type column of the list table.
Creating a PDF report

Perform the following steps to create a PDF report on the asset(s):


1. Click the PDF ( ) report icon. The Generate PDF popup displays.

2. Check the Include installed software found with Smart Polling checkbox to include in the report the installed software found with Smart Polling.
3. Save your selection. The generated report displays in the Generated report section.
Navigating to another entity

Perform the following steps to navigate to another link, node, protocol, session, vulnerability or asset.
1. Click the Navigate to ( ) icon.

2. From the dropdown menu, select the destination to navigate to. The list includes nodes, protocols, links, vulnerabilities, sessions, etc. The selected entity then displays in Network.

Diagram tab
The Diagram tab uses the Purdue model format to display the assets (i.e., assets are arranged in
separate rows, according to their level).

1. To access Assets from the Web UI, go to Environment > Assets, then select the Diagram tab.

Figure 37: Diagram tab


2. Click in the Search bar to search for a specific asset.
3. Click the asset box to display asset details that then appear in the right pane.

Figure 38: Asset details pane


4. Alternatively, click the link within the asset box for details about the asset. The Asset details popup
displays.

Figure 39: Asset details popup


5. Repeat these procedures for information on other assets, as needed.

Network
This topic describes Network, which includes the following topics:
• Network nodes
• Network links
• Network sessions
• Network graph
• Traffic

Network nodes
This topic describes the network nodes in Network, and icons to configure the nodes.
From the Web UI, go to Network > Nodes tab to access the Nodes table, which displays information
about the nodes in the Environment.
From the Nodes table, access the Actions column icons to configure the nodes.

Note: The Nodes table displays the columns selected from the # Selected field at the right of the table. Some table columns and definitions are described below.

Figure 40: Nodes table

Actions Configure nodes using these icons, which include Bulk configuration and Bulk learning
Capture device Type of sensor used to perform packet captures for troubleshooting and security reviews.
Address Sort by address, such as IP address, MAC, or any other physical,
network, or logic address that identifies that node.
Label Name applied to the node sensor, such as Historian-01

Roles Select from an available role including:


• consumer
• db_server
• dhcp_server
• dns_server
• other
• producer
• terminal
• time_server
• voip_server
• web_server

Type Node type


VLAN Virtual local area network in which the node is segmented
MAC address Node MAC address
MAC vendor Node MAC vendor
Operating system Node operating system
TCP retrans. % Percentage of TCP packets that have been retransmitted
TCP retrans. packets Total number of TCP packets that have been retransmitted
TCP retrans. bytes Total number of bytes for retransmitted TCP packets
# of links Number of links associated with the node
Protocols Protocols used by the links associated with the node
Level Level of the node according to the Purdue model
Is public True if the node does not belong to the local network
Is disabled Nodes that are disabled are not shown in the network graph view
Zone Network zone to which the node belongs
# variables Count of variables belonging to the node
Cluster If a cluster containing this node has been defined, this field contains
the cluster's name
Is learned Either the IP address or the MAC address is “known.” With anomalies
in the environment, nodes are compared against the “known” or
"learned" state. Nodes discovered during the learning phase are
considered “normal” on the network.
Is fully learned Both the IP address and the MAC address are components of a single
asset and are learned. “Is learned” is associated with a node while
“is fully learned” represents a single asset with both IP and MAC
addresses.
# Selected Refers to the number of columns selected for display

Action icons
Configure node icon

Figure 41: Configure node icon



Perform the following steps to configure the node and set the node properties:
1. Click the configure node icon. The Configure node popup displays.
2. Click the Is disabled checkbox to make the node(s) invisible in network graph view.
3. From the Label field, select an asset from the dropdown menu and assign the node to it.
4. From the Level field, input a node level, according to the Purdue model classification.
5. From the Device ID override field, remove or re-assign the Device ID to overwrite the automatically assigned Device ID.

Show alerts icon

Figure 42: Show alerts icon

To show the alerts associated with the current node, click the Show alerts icon. The Alerts for node popup opens and displays the alerts associated with the node.

Show requested traces icon

Figure 43: Show requested traces icon

Click the Show requested traces icon to open the Requested traces for node popup that displays
the traces associated with the nodes.

Request a trace icon

Figure 44: Request a trace icon

1. Click the Request a trace icon. The Request a trace popup displays. In the Trace max size
(packets) field, input the maximum size of the trace (the default size is 5000 packets).
2. In the Trace max duration (seconds) field, input the maximum duration of the trace in seconds (the default is 60).
3. The Packet filter field is prepopulated with a Berkeley Packet Filter (BPF) expression that captures the packets to/from the selected node; it can be customized (see the example expressions after these steps).
Note: Click the BPF examples dropdown for examples.
4. Click the Send trace request button to request the trace.
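For reference, these are a few typical BPF expressions of the kind accepted by the Packet filter field; the IP address is a placeholder and only standard BPF primitives (host, src, tcp, udp, port) are used:
• host 192.168.1.10 : captures all traffic to or from that address
• host 192.168.1.10 and tcp port 502 : captures only Modbus/TCP traffic for that address
• src host 192.168.1.10 and udp : captures only UDP traffic originated by that address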

Manage Learning icon

Figure 45: Manage Learning icon

1. Click the Manage Learning icon. The Manage Learning popup displays.
2. Manage Learning node settings from this popup, including deleting, learning, saving and
discarding.
In the popup, the entire node and its individual details (such as IP or MAC address) can be learned
and deleted.
• Nodes whose details are learned are considered entirely learned and have a green icon.
• Nodes whose details have been only partly learned have an orange icon.
• Nodes that are not learned have a red icon.
• Individual details have either a green or red icon, depending on whether they are learned or not.
By learning or deleting a node, all of its details undergo the same effect.
• By learning or deleting an individual detail, only that detail's learning status changes.
3. Click Save to confirm any changes made in this popup.

Navigate to icon

Figure 46: Navigate to icon

1. Click the Navigate to icon. A popup displays that allows you to navigate to various nodes, links,
protocols, vulnerabilities, and sessions.
2. Click the link to navigate to the corresponding entity.

Add node to a Smart Polling plan

Figure 47: Additional node icon (Available when Smart Polling is present)

Input information into the fields for which you would like to override the plan's configuration:
Note: This icon only appears if Smart Polling is present. Refer to Smart Polling on page 259 for
additional information.
1. Click the radar icon to add a node to a plan with an optionally different configuration from the plan's
original one. The Smart Polling configuration for node popup displays.
2. From the dropdown menu in the Select an existing plan to add the node to field, select the existing plan to which you would like to add the node.
3. Optionally, customize the parameters that display for the selected plan. Customized values override
plan-defined values when polling this specific node. Entries not modified in this popup window retain
the plan-defined values.
4. Toggle the Poll node immediately field, if you wish to poll the node immediately. Otherwise, the
node is polled during the next execution of the selected plan.

See adding additional nodes for more information.

Network links
This topic describes the network links in Network, and the icons to configure the links.
From the Web UI, go to Network > Links tab to access the Links table, which displays information
about the links in the Environment.
From the Links table, access the Actions column icons to configure the links.

Note: You can filter the records using the filter field of most columns, either by inputting a string or by selecting the desired entry from the dropdown menu.

Figure 48: Links table



Action icons
Configure link icon

Figure 49: Configure link icon

Perform the following steps to configure the link and set link properties:
1. Click the configure link icon. The Configure link popup displays.
2. Click the Is persistent checkbox to raise an alert when a new TCP handshake is detected on the
link.
3. Click the Alert on SYN checkbox, to raise an alert when a TCP SYN packet is detected on the link.
4. Click the Track availability (seconds) checkbox to generate link events when the link communication is interrupted or resumed. Then enter the check interval in seconds.
5. Click the Last activity check (seconds) checkbox to raise an alert when the link becomes inactive
for more than the specified number of seconds. Then enter the number of seconds.
6. Click Save to save your changes.

Show alerts icon

Figure 50: Show alerts icon

Click the Show alerts icon. The Alerts for link popup displays.

Show requested traces icon

Figure 51: Show requested traces icon

Click the Show requested traces icon. The Requested traces for link popup displays.

Request a trace icon

Figure 52: Request a trace icon

Perform the following steps to request a trace:


1. Click the Request a trace icon. The Request a trace popup displays.
2. In the Trace max size (packets) field, input the maximum size of the trace (the default size is 5000
packets).
3. In the Trace max duration (seconds) field, input the maximum duration of the trace in seconds (the default is 60 seconds).
4. In the Packet filter field, review or edit the Berkeley Packet Filter (BPF) expression used to capture the packets. This field is prepopulated with a BPF expression that captures the packets to/from the selected node, and it can be customized (see the example expressions after these steps).
Note: Click the BPF examples dropdown for examples.
5. Click the Send trace request button to request the trace.
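For a link, a typical BPF expression restricts the capture to the traffic exchanged between its two endpoints; the addresses and port below are placeholders and only standard BPF primitives are used:
• host 192.168.1.10 and host 192.168.1.20 : captures only the traffic between the two endpoints of the link
• host 192.168.1.10 and host 192.168.1.20 and tcp port 44818 : further restricts the capture to EtherNet/IP traffic between them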

Show events icon

Figure 53: Show events icon

1. Click the Show events icon. The Events for link popup displays, with a history of TCP events.
Note: Show events is only available for TCP links.
2. Make changes to time, transport, source node, source port, destination node, destination port, event and additional information settings.

Show captured URLs icon

Figure 54: Show captured URLS icon

Click the Show captured URLs icon. The Show captured URLs popup displays, with the URLs
captured from the analyzed traffic.
Note: Show captured URLs is only available for some protocols.

Manage Learning icon



Figure 55: Manage learning icon

1. Click the Manage Learning icon. The Manage Learning popup displays.
2. Manage Learning link settings from this popup, including deleting, learning, saving and discarding.
Note: The color depends on the learning status of the link.
In the popup, the entire node and its individual details (such as IP or MAC address) can be learned
and deleted.
• Nodes whose details are learned are considered entirely learned and have a green icon.
• Nodes whose details have been only partly learned have an orange icon.
• Nodes that are not learned have a red icon.
• Individual details have either a green or red icon, depending on whether they are learned or not.
By learning or deleting a node, all of its details undergo the same effect.
• By learning or deleting an individual detail, only that detail's learning status changes.
3. Click Save to confirm any changes made in this popup.

Navigate to icon

Figure 56: Navigate to icon

1. Click the Navigate to icon. A popup displays that allows you to navigate to various nodes, links,
protocols, vulnerabilities, and sessions.
2. Click the link to navigate to the corresponding entity.

Link events
Go to Network > Links tab to access the Links table to access link events. The screen displays the
links in the Environment.

Figure 57: Link events

A Link availability is based on UP and DOWN events


B Time span control to view only the events in the specified time
range
C Graphical history of events; a point with value 1 represents an
UP event, a value -1 represents a DOWN event
D History of events in table format

The following schematic representation displays the downtime for two links (d0 and d1):

Figure 58: Schematic representation for downtime for two links

How link availability is calculated


A history of events is stored for each link. Two events are of particular interest for computing
availability: UP and DOWN. The former occurs when an activity is detected on an inactive link. The
latter occurs when an active link stops its activity. Each event has a timestamp to track the precise
moment of its occurrence.
Guardian computes the total downtime of a link by taking into consideration the history of events within
a finite time window. Then, it sums the time spans of all events starting with a DOWN event and ending
with an UP event. All links are considered active by default, therefore the availability of the link is 100%
minus the percentage of total downtime.
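As an illustration of this computation, the following minimal Python sketch (not part of the product; the names and structure are assumptions made for this example) derives the availability percentage from a chronological list of (timestamp, event) pairs inside a finite window:

def availability(events, window_start, window_end):
    # events: chronological list of (timestamp, "UP" or "DOWN") pairs.
    # The link is considered active by default; downtime is the sum of the
    # spans between each DOWN event and the following UP event (or the end
    # of the window if the link never comes back up).
    downtime = 0.0
    down_since = None
    for timestamp, kind in sorted(events):
        if kind == "DOWN" and down_since is None:
            down_since = max(timestamp, window_start)
        elif kind == "UP" and down_since is not None:
            downtime += min(timestamp, window_end) - down_since
            down_since = None
    if down_since is not None:
        downtime += window_end - down_since
    return 100.0 * (1 - downtime / (window_end - window_start))

# Example: a single 30-second outage in a 300-second window yields 90.0
print(availability([(100, "DOWN"), (130, "UP")], 0, 300))

The 100% starting point of the sketch reflects the rule described above that all links are considered active by default.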

Track Availability
The Track Availability feature allows an accurate computation of availability. It enables the monitoring
of activity on a link at regular intervals, generating extra UP and DOWN events, depending on the
detected activity on both sides of the link during the last interval.
To specify the interval for a link, go to the Links table (or any other section where the link_actions are

displayed) and click the button to open the following popup:

We recommend that you select a value greater than the expected link polling time to avoid checks that
are too frequent and are likely to produce spurious DOWN events.
Note: link_events generation is disabled by default. To enable it, see the configuration rule described
in Configuring links.

Network sessions
This topic describes sessions in the Nozomi Networks solution. A session is a semi-permanent
interactive information exchange between two or more communicating nodes.
A session is established at a certain point in time, and later turned down. An established
communication session may involve more than one message in each direction.
Go to Network > Sessions tab to access the Sessions table. The screen displays the sessions in the
Environment.

Figure 59: Sessions table



The Sessions table lists all Sessions in table format. Click the From or To node ID for additional
details about the listed nodes. The action buttons allow you to request or show traces as you navigate
through the Web UI. You can also see additional details about each session, such as source and
destination ports, number of transferred packets or bytes, etc.

Network graph
This topic describes the network graph for the Nozomi Networks solution.
From the Web UI, go to Network > Graph tab to access the graph view, which gives a graphical
representation of the nodes in the environment.
Each vertex represents a single network node or an ensemble of nodes, while every edge represents
one or more links between nodes or node ensembles. Edges and vertices are annotated to provide
node identification information, protocols used to communicate between two nodes, and more.
Node position in the graph is determined either by a specific layout format or by a dynamic automatic
adjustment algorithm that looks for minimal overlap and best readability of the items. An example of a
network graph is provided below. The auxiliary zone/topology graph window is on the left and the
information pane is on the right.

Figure 60: Example: Main network graph

The format of the data represented in the graph is controlled by the graph layout menu. From the
menu, users can select the graph type and the node format in the graph. See a detailed description of
the available options from the layout menu below.
Users can also control the graph by zooming in and out and centering in specific zones. You can also
obtain more information by clicking the mouse on specific elements, as described in graph control.
On the left and the right of the network graph, two auxiliary windows are available to provide additional
information and control:
• Information pane (right): Contains additional information about the node or link selected in the
network graph (see graph control).
• Zone/Topology graph (left): Contains network visualization from a zone or topology perspective. A
detailed description of the feature is provided in the topic Zones/Topology graph on page 107.

Graph commands
To obtain a clearer representation of the network or to obtain specific details, filter the graph contents
using specific criteria. Controls to do this are provided in the figure and the table below.

Figure 61: Network graph with available commands

A Toggles to adjust the dynamic motion of the items.

B Toggles to/from the information pane.

C Increases (left) or decreases (right) the size of the node icons (also
affects label size).
D Puts nodes in evidence with the mouse, increasing the size (and label) of the highlighted node.
E Identifies specific link(s).

F Toggles to/from the topology pane.

G Toggles to/from the zone pane.

H Indicates active graph filtering, when present. Filters can be from the
filter bar (see R and S below), or activated from the zone/topology
graph when you click a link/node in the zone/topology graphs.
I Exports a PDF report containing the graph, as currently shown on the
page.
J Shows the legend for link and nodes based on the selected
perspective.

K Resets customizations and reloads the data



L Permits reload of the data, keeping the current customizations. If the


toggle is live, the graph is periodically updated, otherwise a single
update is performed when requested.
M Filters by activity time

N Opens a wizard to help filter the graph and view only the desired
information; contains solutions to reduce visualized data from large
graphs
O Select node visualization configuration options from the dropdown
menu as described below
P Select link visualization configuration options from the dropdown
menu as described below
Q Select a graph layout from the dropdown menu

R Select available filter types from the main network graph window. The
selected filters are shown at the center top of the graph window (S).
No filter is selected by default.
S Shows the filters enabled in R. Once a filter is enabled with a value,
the graph is automatically updated. If more than one filter is enabled,
then a logical AND criterion is applied. Only nodes that satisfy all of the
specified filters are shown.
Note: If a node passes the filters, then all of the directly connected
nodes are shown in the graph. For example if a specific IP filter is
used, then the specified node is shown along with all the nodes
connected to it.

Layout options
Layout defines how the nodes and links are presented in the graph.
1. Go to Network > Graph tab to access the Network (with graphs) screen.
2. Select the Layout dropdown menu.
3. Click one of the following layout options:
• Standard
• Purdue model
• Grouped
• Clustered (Beta)
4. Click the Group by field dropdown menu to select the node group. The available options are:
• – (None)
• Asset
• Cluster
• Level
• Roles
• Subnet
• Type
• Site
• Host
5. Select Apply to apply the changes.

Table 10: Layout options

Standard Default layout. The type of visualization depends on the criteria defined in Group_by:
• Group_by not defined: All of the nodes and links are shown.
• Group_by defined: Nodes that belong to the same group (based on the defined criteria) are collapsed into a single node.
Purdue model Nodes are arranged in separate rows, according to their level. You can distinguish the levels and isolate potential communication problems that cross two or more levels.
Grouped Nodes are grouped according to the criteria defined in Group_by. The graph is visualized as follows:
• Group_by not defined: All nodes and links are shown.
• Group_by defined: Nodes that belong to the same group are shown and are placed inside a circle that represents the group. Links between nodes within the same group are shown. However, links between groups are replaced with lines that connect the circles.
Clustered Nodes are clustered according to the criteria below. Once nodes are clustered, a single circle represents the node cluster. Upon zoom-in, the circle expands and the internal nodes display. A cluster may contain multiple subclusters. This layout is useful when visualizing large graphs because it provides an overview of the graph, along with sufficient details.
Nodes are clustered depending on the values defined by Group_by:
• Group_by not defined: Nodes are clustered based on connections. Nodes with a large number of links act as a cluster center with neighboring nodes assigned to the same cluster.
• Group_by defined: At the highest level, a cluster is created for each group. Inside each high level cluster are subclusters created around nodes with a high number of links. For example, if Group_by=Zones, then a cluster is created for each zone, and inside each zone other subclusters may be created around nodes with a high number of links.
Group by Defines the group used for Standard, Grouped, and Clustered layouts. Nodes with the chosen property (i.e. zone, subnet, etc.) are assigned to the same group. The group displays depending on the selected layout.

Example:
This Environment graph displays the open Zones pane:
Group_by=Zone
Layout=Zone

Figure 62: Example: Graph grouped by zone with zone layout

Example:
This Environment graph displays the open Zones pane:
Group_by=Zone
Layout=Cluster
The Info pane contains information about the Undefined zone.

Figure 63: Example: Graph grouped by zone with cluster layout

Graph control
You can move and zoom the graph using the mouse. You can also increase/decrease the size of the
icons and the text for better readability.

Move Move the graph by clicking and dragging, other than on a node.
Zoom (mode 1) Zoom in and out (scrolling) by turning the mouse wheel up
and down inside the window. Zoom is centered on the mouse
position.
Zoom (mode 2) Drag the graph in a vertical direction while pressing the z key.
Zoom centers on the position where the mouse starts dragging.
Icon and Text size Increase/decrease the icon and label size, using the buttons
identified with the letter c.

Additional mouse actions are:

Single click Single click on a node or a link. Fill the info pane with
information about the selected node or link. The type of
information displayed depends on the nature of the selected
node or link (nodes, cluster, ...).
Double click Double click on a node. Shows a new window with additional information about the clicked node. The action can be performed only on nodes, not on clusters or links.
Mouse over Mouse over a node or a link. Shows the node or link.
Mouse down Single click down on a node or a link without releasing
the mouse button. Shows the selected node or link and the
elements directly connected to it.

Node visualization options


Node visualization options define which nodes are shown (through filtering), and how they are shown
(through colors).
1. Go to Network > Graph tab to access the Network (with graphs) screen.
2. Select the Nodes dropdown menu.
3. Click the Perspective field dropdown menu to select a different color for the nodes. The available
options are:
• Roles

• Zones
• Transferred bytes
• Not learned nodes
• Level
• Public
• Node reputation
• sensor host
• sensor site
4. Click the Roles field dropdown menu to choose the nodes (and the nodes directly connected to
them) that match the selected roles criteria. The available options are:
• All button / None button
• consumer
• db_server
• dhcp_server
• dns_server
• other
• producer
• terminal
• time_server
• voip_server
• web_server
5. (Optional) Click the Exact match checkbox to show only the nodes whose IDs exactly match the ID filter, rather than matching with a "start with" criterion. (In the ID filter field, you can specify more than one ID, separated by commas.)
6. Click the Display dropdown menu to select how the node text is displayed (i.e., label formatting of
the nodes). The ID can be an IP address, a MAC address, or a cluster name.
• ID (label)
• ID
• Label
7. (Optional) Check the Show broadcast checkbox to include the nodes with a broadcast IP address.
8. (Optional) Check the Only confirmed nodes checkbox to show only the nodes that exchanged bi-
directional data while communicating.

Table 11: Node visualization options

Perspective Changes the color of the nodes according to a predefined


criterion
Roles Allows filtering of the graph by node roles
Exclude IDs Removes the specified IDs from the graph view, separated by
commas
ID filter Filters the graph using one or more ID addresses, separated by
comma
ID filter exact match Filters the graph by ID filter showing only the nodes with
specified ID(s) rather than with a "start with" criterion
Display Displays the formatted label for the nodes
Show broadcast Includes the nodes with a broadcast IP
Only confirmed nodes Includes only the nodes that exchange bi-directional data when
communicating

Link visualization options


Link visualization options permit you to define which links are displayed (filtering), and how they are
displayed (coloring based on some properties).
1. Go to Network > Graph tab to access the Network (with graphs) screen.
2. Select the Links dropdown menu.
3. Click the Perspective field dropdown menu to select a different color for the links. The available
options are:
• None
• Transferred bytes
• TCP firewalled
• TCP handshaked connections

• TCP connection attempts


• TCP retransmitted bytes
• Throughput
• Interzones
• Interlevels
• Not learned links
• Alerts risk
4. Select the Protocols field dropdown menu to choose the protocol options on which to filter the links:
• All button / None button
• bacnet-ip
• browser
• cdp
• cotp
• dce-rpc
• delta-v
• dhcpv6
• dns
• dropbox-isp
• dvrip-dahua
• ethernetip
5. Click the Alert types field dropdown menu to choose an alert type option.
6. (Optional) Click the Show link direction checkbox to make a link bolder so it is easier to select.
Note: Performance may be affected.
7. (Optional) Click the Show protocols checkbox to show a link's protocol.
8. (Optional) Click the Only with confirmed data checkbox to show links that exchanged bi-
directional data.
9. Click Apply to save your changes.

Table 12: Link visualization options

Perspective Change the color of the links according to a predefined


criterion.
Protocols Allows the ability to filter the graph by link protocols.

Enable links highlighting Highlights links to make them bolder in reaction to mouse
movements, making them easier to select (may affect
performance).
Show protocols Shows link protocols.
Only with confirmed data Shows links that exchanged bi-directional data.

Zones/Topology graph
The Zones/Topology graph provides a network visualization for the network topology or zones.
Go to Network > Graph tab, then select the Zones or the Topology toggle button.
Visualization using either the Zones or the Topology graph is mutually exclusive and is controlled with
the zone and topology toggle buttons (see G and F in the "Network graph with available commands"
figure at Graph).
Inside the Zones graph, each node represents a zone and each link represents all of the links between
the nodes in the connected zones. When the user clicks a zone, the information pane is populated with
all of the nodes/links that belong to the clicked zone. The main network graph is filtered to show only
the nodes and the links for that zone, and the filtering icon (H) appears.
In a similar way when a link is clicked in the Zones graph, the information pane is populated with all of
the links between the two zones, and the Networks graph shows only the nodes and links that belong
to one of the two connected zones. When the user clicks in a region of the Zones graph without any
nodes or links, the visualization in the Networks graph is reset to show all the nodes and links.
Examples of graphs
In the following example, the Zones graph displays with the open Zones pane to highlight the zone of origin
for each node:
Zones perspective: Active

Figure 64: Environment graph with open Zones pane

The Zones pane offers the ability to filter the graph by clicking a zone or on a link between two zones.
The Zones graph also has a legend and shares some of the node and link options. Clicking a node or
link in the Zones pane displays additional information about the zone or the links between the zones.
See the basic configuration rules to customize Zones.
In the following example, the Zones graph displays with the open Zones pane to highlight the high
traffic usage for the consumer nodes:
Zones perspective: Transferred bytes

Figure 65: Environment graph with transferred bytes node perspective

Magic wand options


The graph wizard provides hints to help you improve the graph performance. Settings that are
annotated with an orange exclamation point are considered suboptimal. Settings annotated with green
thumbs are considered helpful.
1. Go to Network > Graph tab to access the Network (with graphs) screen. Then, click the magic
wand .
Note: We recommend that you use Google Chrome for better performance.
2. (Optional; suboptimal) Click the Show broadcast checkbox to hide broadcast nodes and display a simpler graph.
3. (Optional; suboptimal) Click Only with confirmed data to show only links with confirmed data and display a simpler graph.
4. (Optional; helpful) Click Only confirmed nodes to show only confirmed nodes and display a simpler graph.
5. (Optional; suboptimal) Click Exclude tangled nodes to exclude from the graph the tangled nodes, whose connections cause the graph to be too complex.
Note: Tangled nodes can be shown again by removing their IDs from the node options.
6. (Optional; helpful) Click from the Protocols field dropdown menu to select options to show only the
links that match the selected protocols.
7. Click OK to save your settings.

Table 13: Magic wand options

Show broadcast Broadcast addresses are not actual network nodes in that
no asset is bound to a broadcast address. They are used
to represent communications performed by a node towards
an entire subnet. Removing broadcast nodes reduces the
complexity of a graph.
Only with confirmed data Unconfirmed links can be hidden easily to reduce the
complexity of an entangled graph.
Only confirmed nodes Unconfirmed nodes can be hidden to reduce the size of a large
graph.
Exclude tangled nodes Nodes whose connections cause the node to be too complex
can be removed to improve the readability of the graph.
Protocols Nodes and edges can be filtered to show only those items
participating in communications involving one of the selected
protocols. By clicking on "SCADA", all SCADA protocols are
selected.

Traffic
This topic describes how to access traffic information in the Nozomi Networks solution.
Go to Network > Traffic tab to access traffic charts with information about throughput, protocols, and
open TCP connections.

Figure 66: Traffic charts

Section descriptions
A Shows traffic by macro category
B Shows traffic by protocol
C Shows the proportion of packets sent by protocol, in pie chart format
D Shows the proportion of traffic generated by protocol, in pie chart format
E Shows the number of open TCP connections

Process
This topic describes the variables in Process.
Process is a set of repeatable functions undertaken by a business in order to deliver a core value. This includes repeatable tasks, data gathering, and control of resources in accordance with business policies.
Variables model communication between operational devices as they participate in the industrial
process. Individual values within operational devices are represented as variables, and Guardian tracks
them over time in Process.
Prerequisites: Users must have Process permission to access the Process tab.
This topic includes the following:
• Process variables
• Process variables extraction

Process variables
This topic describes the process variables in the Nozomi Networks solution.
From the Web UI, go to Process to access the Process table, which displays detailed information
about variables.

Figure 67: Process table with variables

Actions Click one of the following actions:



Configure variables icon ( ) opens a popup where the individual
variable can be configured

Variable details icon ( ) displays the variable details

Add to favorites icon ( ) adds the variable to the Favorite
variables list

Navigate icon ( ) takes users to the corresponding node, link,
vulnerabilities page or session

Host Identifier of the node that the variable belongs to


Host label Label of the variable host
Namespace Identifier of the variable container, also known as RTU ID (see an
example of the format in Protocol on page 57)
Name Name assigned to the variable (see an explanation on how this is
calculated in Protocol on page 57)

Label Configurable description of the variable (for instructions see Configuring


variables on page 404)
Type Type of value, which can be:
• Unknown
• Analog
• Digital
• Bitstring
• String
• Doublepoint
• Timestamp
• Octetstring

Value Current valid value of the variable


Last value Last observed value with an indicator showing if it is valid (green) or not

(red). Click the icon to display the variable history chart.


Last valid quality Last time the variable had a valid value quality
Last quality Last value quality
Min value Minimum value the variable has ever had
Max value Maximum value the variable has ever had
Unit Unit of measure; for configuration instructions see Configuring variables
on page 404
Protocol Protocol used to exchange the variable
# Changes Number of times the variable value has changed
# Requests Number of read operations
Last client Identifier of the last node querying the variable
Last FC Function code of the last operation performed
Last FC Info Function code information of the last operation performed
First activity First time an operation was performed
Last activity Last time an operation was performed
Last change Last time an operation performed on the variable changed its value
Flow control status The status of the flow control can be:
• Cyclic if the variable is detected to be updated or read at regular
intervals
• Not Cyclic otherwise
• Disabled if flow control has been disabled from the learning control
panel
• Learning if the algorithm is still analyzing the flow
When the status is Cyclic there is a chart indicating the timing and the
average value in milliseconds.

Flow anomaly in progress It is true if the system has detected an anomaly in progress, otherwise it is false. When an anomaly is in progress a Resolve button appears; click the button to tell the system that the anomaly has ended. If the anomaly is detected again, another alert is raised.

Active checks Shows the active checks enabled on the variable


History enabled A Boolean flag shows if the value history is enabled for the variable

Configure variable

Click the configure variable icon ( ) beside the variable. The Configure variable popup displays,
from which users can configure the individual variable.

Table 14: Configure variable

Label Name of variable


Enable history Permits users to enable variable history
Last activity check Raises an alert when the variable is not updated for more than the specified number of seconds
Invalid quality check Raises an alert when the variable keeps an invalid quality for more than the specified number of seconds
Disallowed qualities check Raises an alert when the variable has one of the specified qualities (valid, not topical, blocked, substituted, overflow, reserved, questionable, out of range, bad reference, oscillatory, failure, inconsistent, inaccurate, test, alarm); values can be separated by commas

Unit Unit of measurement of the variable value
Scale Constant value by which the variable value is multiplied
Offset Constant value added to the variable value
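For example, assuming a hypothetical raw reading of 123 with Scale set to 0.1 and Offset set to -5, the value shown would be 123 * 0.1 + (-5) = 7.3 (applying the scale before the offset is an assumption made for this illustration).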

Variable details

To see variable details, click the magnifying glass ( ) icon beside the variable.
The Variable Details popup displays, which includes information about the variable and its value
history in both chart and table format (if it is configured as monitored, see Configuring variables on
page 404).
Using the buttons above the chart, open the chart in another window or export the data in Excel or CSV
format.
By default, the chart shows the variable value history only for a specific period of time. To update the
chart in real-time, click the Live update checkbox.

Figure 68: Variable details

Favorite variables

Click the star icon ( ) beside the variable to add it to the Favorite variables list.
The Favorite variables list includes a chosen group of variables. You can plot the favorite variables on
the same chart to make a comparison easier.

Figure 69: Process table with favorite variables on top

Navigate

Click the configure Navigate icon ( ) beside the variable to navigate to the corresponding node,
link, vulnerabilities page or session. From the dropdown menu, select the corresponding node, link,
vulnerabilities page or session.

Process variable extraction


This topic describes how to configure variable extractions both globally and at the protocol level.
Guardian creates a variable for commands, monitored measures, and information accessed or
modified by the OT system. Different characteristics can be attached to a variable, which depend on
the protocol used to access or modify it.
From the Web UI, go to Process > Settings tab to access process variable extraction. The Process
popup allows you to extract variables globally, by zone, or by protocol.

Figure 70: Process variable extraction tuning

Global variables extraction


Global variable extraction levels apply to all protocols for which an extraction level has not been
specified.
Possible values that can be set are the following:
• Disabled: Variables are not extracted
• Enabled: Variables are extracted
• Advanced: Variables are extracted; advanced heuristics are used to extract additional variables on
protocols that support them
Per-protocol variable extraction
Protocol specific settings are shown in a dedicated table (see the Protocol specific variables
extraction field) that lists all protocols for which at least one variable has been extracted.
Protocol specific settings prevail over global settings except when variables are globally disabled, in
which case variables are not extracted.
To change the variable extraction level for a specific protocol, click the corresponding Configure ( ) icon under the Actions column for the protocol. The Variables extraction for protocol popup displays, with the Global field checked. This indicates that the variable extraction settings are inherited from the global settings.

Figure 71: Variables extraction tuning

Note:
• Variable extraction is globally enabled by default.
• The Advanced level can be set only on protocols that support it.

Queries
This topic describes how to create queries in the Nozomi Networks solution from Query builder or
Query editor. It also describes how to create group queries using the Saved queries feature.
For additional information on queries, go to:
• Query builder on page 118
• Query editor on page 119
Use the Nozomi Networks Query Language (N2QL) to query all data sources. The Saved queries tab
allows you to make changes to the query group, create a PDF, export, edit or delete existing queries.
1. From the Web UI, go to Analysis > Queries to access the Queries page. The Editor tab page
displays.
2. In the upper right corner of the page, select either standard mode (currently offered as a beta
feature) to create queries using Query builder, or expert mode to create queries using Query
editor. Query editor requires more expertise and allows for more complex queries. Query builder
allows you to quickly view your data.

Figure 72: Standard or expert mode


3. (Optional) Click the Saved queries tab to view saved queries.
Go to Queries on page 283 for a complete reference of query commands and data sources.

Query builder
Query builder is a feature that allows users to create and execute queries on the observed system.
1. From the Web UI, go to Analysis > Queries to access the Queries page. The Editor tab page
displays.
2. In the upper right corner of the page, select standard mode to create queries using Query builder.

Figure 73: Query builder - Standard mode

When you build your query in Query builder, the available options change, depending on your
selection choices.
3. Choose from one of these options to begin building your query:
• Nodes
• Nodes CVES
• Assets
• Variables

• Links
• Alerts

Figure 74: Query builder during a query


4. With each query, you are guided through the process, based on your selections.
Note: Query builder requires knowledge of the Nozomi Networks Query Language (N2QL).
These action items describe your choices:

Action Description
Group by Allows you to group by column
Head Returns the first n results
Join Merges two records into one
Select Shows selected columns
Sort Sorts results by column
Where Filters results by conditions

Result: You have created a query using the Query builder feature.
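As an illustration only, a query assembled from the actions listed above corresponds to an N2QL pipeline similar to the following; the data source, field names, and values are hypothetical, so refer to Queries on page 283 for the exact syntax and available fields:

alerts | where risk > 7 | sort time desc | head 10

Reading left to right: start from the alerts data source, keep only the alerts whose risk is greater than 7, sort them by time in descending order, and return the first 10 results.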

Query editor
Query editor is a feature that allows you to create and execute queries on the observed system from
expert mode. It requires more expertise than Query builder.
Query editor provides you with a series of example query templates to begin your query execution.
1. From the Web UI, go to Analysis > Queries to access the Queries page. The Editor tab page
displays.
Note: To formulate from a saved query, go to the Saved Queries tab to begin formulating your
query.
2. In the upper right corner of the page, select expert mode to create queries using Query editor.

Figure 75: Query editor with example query templates


3. From the example queries, click the one that most closely resembles the query that you want to
build. If needed, click the History icon ( ) to obtain a previous query or version.
4. In the text box, adjust your query to reflect your specific requirements, then press Enter to execute
the query. The Queries result page displays after execution, similar to the figure below.

Figure 76: Sample Query editor result


5. (Optional) If you belong to a group with admin privileges, click the floppy icon on the right to save the query; the query then displays in the Saved Queries section. Otherwise, the button is disabled.
a. When saving the query, specify a description and a group.
b. To export the query results, click the Export button, and choose either Excel or CSV format. The file is produced in the background (to facilitate queries with large amounts of data).
c. After production is complete, retrieve the file through the Exports List sub-menu. When an export is downloaded, it is automatically removed from the file system.
Result: You have created a query using the Query editor feature.

Saved queries
Queries can be saved from the Query builder or the Query editor.
Prerequisites

You must have admin privileges to manage groups; to create, rename, and delete queries; and to perform and save any changes.
Note: When you delete a group, queries within it are eliminated.
1. From the Web UI, go to Analysis > Queries > Saved queries tab to view your saved queries. The
Saved queries tab page displays, from which you can manage and edit query groups.

Figure 77: Saved queries page

Managing query groups


You can organize query groups as follows:
• create a new group
• edit the group
• delete the group
• create a PDF of the group
Editing query groups
Within a specific group, you can:
• view and configure assertions
• export the query
• toggle to/from live view
• edit the query
• delete the query
2. Edit the query groups, as needed.
a. Click the pen icon to change the description and/or the query group.
b. Click the trash icon to delete the saved query.
c. Use the group selector to change the current group and restrict the view to queries of the chosen
group.

Figure 78: Saved queries example

Note: You must have admin privileges to perform save functions.


Managing query groups
You can change the current group and you can restrict the view to selected group queries, using the
group selector.
1. From the Web UI, go to Analysis > Queries > Saved queries tab to access saved queries. The
Saved queries screen displays.
2. Confirm the selected Current group, or click the Current group dropdown menu to select a
different query group.

Figure 79: Queries - current group


3. (Optional) Click the New group header to create a new query group. The Enter the group name
popup displays.
a. Enter a new query group name in the Group Name field.

b. The Automatically execute queries on load checkbox is checked by default. Uncheck it if you
do not want to automatically execute queries upon loading.

4. (Optional) Click the Edit group header to edit the query group. The Enter the group name popup
displays.
a. In the Group Name field, enter the query group name to edit.
b. The Automatically execute queries on load checkbox is checked by default. Uncheck it if you do not want to automatically execute queries on load.
5. (Optional) Click Delete group to delete the query group.
6. (Optional) Click PDF to create a PDF of the query group.
Editing a query group
You can make changes to the specific query group, such as editing, exporting, and deleting.
1. From the Web UI, go to Analysis > Queries > Saved queries tab to access saved queries and
make changes to the query group. The Saved queries screen displays. You can configure, export,
debug, and save the query from this screen.
2. Select a query group.
Note: Click the see in editor link to change to editor view for the specific group.
3. (Optional) Click the To assertion icon to view and make changes to the query. The Assertions
popup displays. Using the icons, you can edit, configure, debug, and save the query.

4. (Optional) Click the Export icon to export the query group. The Exports list popup displays.
a. Select either Excel or CSV as the format for your exported file, and the file exports in that format.
b. (Optional) Click the trash icon to delete the saved query.

5. (Optional) Toggle between live and live updates disabled.


6. (Optional) Click the Edit button to edit the description and/or perform edits to the query group.
The Edit Query popup displays. Make any changes to the query, description and group (including
adding a new group), then click Save to save your changes.

7. (Optional) Click the Delete icon to delete the saved query.



Reports
This topic describes the reports (including customized reports) for the Nozomi Networks solution.
From the Web UI, go to Analysis > Reports to access reports. From the Reports screen, you can
generate Custom Reports based on custom queries and layouts. For additional information, see
Report Management.

Reports dashboard
This topic describes the Reports dashboard from which you have an overview of reports, including disk
availability, report settings, generated reports, report management, and scheduled reports.
Using the dashboard, you can customize reports through the use of filters, edit settings, and access
report management.
The following options are available from the Reports dashboard:
• To access the Reports dashboard, from the Web UI, go to Analysis > Reports > Dashboard tab.
• From the Web UI, go to Analysis > Reports > Management tab to access report management.
See Report management on page 125 for additional information.
• From the Web UI, go to Analysis > Reports > Generated tab to access report generation. See
Generating a report on page 131 for additional information.
• From the Web UI, go to Analysis > Reports > Scheduled tab to access report scheduling. See
Scheduled Reports for additional information.
• From the Web UI, go to Analysis > Reports > Settings tab to access report settings (such as
logos, or the SMTP Server). See Report settings on page 134 for additional information.

Figure 80: Reports dashboard

Report management
This topic describes how to create and edit reports for the Nozomi Networks solution.

To access reports, from the Web UI, go to the main menu dropdown ( ) list Reports > Management
tab.
On the left, you will see a Report list of created and saved reports, grouped by folder.

Adding a new folder


To create a new folder:
1. Click the + Add folder button next to Report list. A New report folder popup displays.
2. In the Name field, specify a name for the folder.
3. In the Group visibility field, from the dropdown menu, select the user groups that can view the
report.
4. Click OK to save the settings.

Editing a folder
To edit an existing folder:
1. Click the pencil icon beside the folder name. The Edit report folder popup displays.
2. In the Name field, edit the name for the folder, as needed.
3. In the Group visibility field, from the dropdown menu, edit the user groups that can view the report,
as needed.
4. Click OK to save the settings.

Deleting a folder
To delete a folder:
1. Highlight the folder that you wish to delete.
2. Click the trash icon beside the folder name, then click OK.

Creating a new report


To create a new report:
1. Click the New report button. The New Report popup displays.
2. Specify a name for the report.
3. Choose a layout from the dropdown menu.
4. Select a folder for the report.
5. Choose the group(s) that can view the report.
6. Click OK to save your settings.

Filtering a report globally


You can filter a report either globally or by specific widget. When you filter on a global level, which is
the default filter, you apply filters to the entire report by specific category. When you filter by widget,
you filter by specific, individual widget(s).
1. To filter on a global level, at the Web UI, go to the main menu dropdown ( ) list Reports >
Management tab, then select the Filters icon.

The Edit filters for report popup displays. The categories on which you can apply global filters are
listed.
2. Select the category on which to filter, then enter your filter query in the Filter on field. See Queries
on page 283 for additional information.

3. Click OK to save your settings.


Note: At the bottom of the Edit filters list is a statement about the widgets on which filters will not
work.

Filtering a report by widget


You can opt to filter a report on a specific widget. You first create a new widget, then filter on that
widget.
1. To apply a filter on a widget, at the Web UI, go to the main menu dropdown ( ) list Reports >
Management tab.
2. Click +Add widget. The Add widget popup displays.

3. Highlight the Table field. The list of widgets on which you can filter your report displays:
• Assets
• Clients accessing SMB Shares
• Communication with Public Nodes
• Files with Malware Sandbox
• Host CPEs with Vulnerabilities
• Identify assets with SMB v1
• Inactive nodes (5 days)
• New ARP traffic
• New nodes not learned
• Number of Network Devices
• Public Nodes
• TCP firewalled connections
4. Select the specific widget on which to apply a filter, then click OK. The new widget is created.
5. Click the Edit filter button at the top of the widget.

The Edit filters for widget popup displays.



6. In the Filter on alerts field, input your filter query. See Queries on page 283 for additional
information.
7. Click OK to save your settings.
Note: At the bottom of the Edit filters list is a statement about the widgets on which filters will not
work.

Importing a report schema


To import an exported report, click the Import Schema button. You can then select the schema.

On the right you can preview the selected report. On the top you can find some action buttons and
options:
• Format: Changes the report pages format
• Add page: Adds a page to the layout
• Save: Saves layout changes
• Edit: Allows you to change the report name and group

• Delete: Deletes the report


• Export schema: Exports the report
• Generate Report: Starts the Report generation

Rows in reports
The report is a set of pages that contains a list of rows. You can:
• Add a row to the bottom of the page, by clicking the Add row button.
• Delete a row, by clicking the trash icon.
• Move the row up/down, by clicking the up/down arrow buttons.

Each row is split into two columns. You can add elements, which can be widgets or queries (provided
you have saved queries). Click the trash icon to remove an element. An element can fill one or two
columns, depending on its type. You can change the width of an element by clicking the reduce/
enlarge buttons. Some widgets have additional options (e.g., Style for [custom text]).

Generating a report
This topic describes how to generate a report, either scheduled or on-demand, in multiple file formats.
1. From the Web UI, go to Analysis > Reports > Management tab to generate a report. See
Generated reports on page 132 for additional information.
2. Select Generate Report on the right. The Generate Report popup displays.
3. Select a Report type from one of the following options:
• PDF: the default selection. This is what you see in Report Management.
• CSV: a zipped folder with one CSV file for each widget that can be converted to this format (a
consumer-side sketch for reading these exports follows this procedure).
• Excel: a single Excel file with one sheet for each widget that can be converted to this format,
plus a legend in the last sheet.
4. In the Report execution field, select how the report is executed from one of the following options:
• On-demand reports: immediate
• Scheduled reports: cyclical (customize the recurrence). This feature is available only for
users granted the Allow editor permission. Scheduled reports can be managed through the
Scheduled Reports screen.
5. Schedule the report in the Schedule report creation field:
Note: The time schedule is based on the server time.
a. Select a recurrence timeframe: daily, weekly, or monthly.
b. Select a time of day for report recurrence, in hours and minutes from the dropdown menu.
c. Select a day of the week for report recurrence.
d. Enter a username in the user defined name field.
e. Enter email addresses to email the report to, separated by commas in the email recipients field.
6. (Optional) Click the checkbox for Include only Alerts following Security Profile [Applies to Alert
widgets only] if you want only alerts for a security profile.
7. Click Save to save your report generation schedule.

Figure 81: Generate Report dialog

Once report files are generated (either on-demand or scheduled), users can download them from the
Generated Reports screen. When scheduling reports, you can optionally send the report files by email.
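
The exported files can be processed outside the sensor. The following is a minimal sketch, not part of
the product, showing how the zipped CSV export described above could be read with Python's standard
library; the archive name is a placeholder.

import csv
import io
import zipfile

# Hypothetical file name; use the archive downloaded from the Generated reports screen.
with zipfile.ZipFile("report_export.zip") as archive:
    for name in archive.namelist():
        if not name.endswith(".csv"):
            continue                      # the zip contains one .csv file per widget
        with archive.open(name) as raw:
            rows = list(csv.reader(io.TextIOWrapper(raw, encoding="utf-8")))
        print(f"{name}: {len(rows)} rows")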

Generated reports
This topic provides an overview of generated reports.
• From the Web UI, access report generation from the Reports dashboard by going to Analysis >
Reports > Generation tab.
• From the Generated Reports screen, you can browse created reports, download them, configure
them, and delete them if necessary.
• From the Generated Reports screen, you can access both on-demand and scheduled generated
reports as files.

Figure 82: Generated reports

1. From the Web UI, go to Analysis > Reports > Generated tab to configure report retention.
2. Click the Configure button to begin report retention configuration. The Report retention
configuration popup displays.

3. Set the number of days that a scheduled report remains available after it is generated (the default is
90 days) in the Number of days reports remain saved field.
4. Set the maximum number of reports that can be stored (the default is 500 stored reports) in the
Max number of reports saved field.

5. Click Save to save the configuration.

Note: If the sensor runs low on disk space, the oldest reports are automatically deleted to make room
for the newest ones.
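
The following Python sketch illustrates, under stated assumptions, how the two retention settings and
the low-disk behavior described above interact; it is not product code, and the report structure is
hypothetical.

from datetime import datetime, timedelta

def prune_reports(reports, max_reports=500, max_age_days=90, low_disk=False):
    # reports: hypothetical list of (file_name, created_at) tuples
    cutoff = datetime.now() - timedelta(days=max_age_days)
    kept = sorted((r for r in reports if r[1] >= cutoff), key=lambda r: r[1])
    if low_disk and kept:
        kept = kept[1:]          # the oldest report is deleted first when disk runs low
    return kept[-max_reports:]   # never keep more than max_reports files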

Scheduled reports
This topic provides an overview of reports that have been scheduled for the Nozomi Networks solution.
The Scheduled reports tab allows you to browse, edit, and delete scheduled reports.

Figure 83: Scheduled reports

1. From the Web UI, go to Analysis > Reports > Scheduled tab to view scheduled reports. A
Reports screen displays with the list of scheduled reports.
2. From the Reports screen, view the following report entries for each report:
• Actions (allows you to edit or delete the report)
View the following information in ascending or descending order:
• Name (report name)
• User Defined Name
• Query
• Recurrence (server time)
• Email recipients
• Created by
• Report type
3. (Optional) Click the Edit icon in the Actions column to make changes to the available schedule
settings. The Generate Report popup displays. See Generating a report on page 131 for
information on changing settings. Save any changes that you make.

4. (Optional) Click the trash icon in the Actions column to delete the scheduled report.

Report settings
This topic describes how to access and make changes to report settings, such as uploading custom
logos, and configuring SMTP settings.
From the Web UI, go to Analysis > Reports > Settings tab to access report settings. The Reports
screen displays.

Figure 84: Report settings

Report custom logo


With the custom logo feature, you can upload a custom logo to replace the Nozomi Networks logo.
1. At the Reports screen, drag your logo image to the custom logo section, or upload it. Supported
formats include jpg, png, and gif. Logo sizes should be 360x90 or larger, with an ideal ratio of 4:1 or
similar.
Note: Using a logo of a different size than suggested by the tooltip can break the layout of
generated reports by introducing overlapping page headers.
2. (Optional) After uploading the new logo, you can delete it (the Nozomi Networks logo will be
restored), or upload a new one to replace it.
3. (Optional) Edit the report custom logo section visibility to grant or deny user access to the custom
logo.
4. Non-administrative users can see/change the report custom logo only if they are granted Report
and Allow editor permissions. You can edit user report permissions through the user groups topic.

Figure 85: Report custom logo

Report SMTP settings


An SMTP server is required to send scheduled reports by email at each recurrence (if the Email
recipients field is set). You can configure SMTP settings to optionally receive Scheduled Reports by
email.
1. At the Reports screen, in the SMTP section, click the On button to turn on SMTP settings.
2. At the To URI field, enter the host URI information. Example: HOST[:PORT][/ID].
3. At the Sender field, enter sender identification information.
4. Check STARTTLS to send a command to the SMTP server to start sending encrypted email
reports; otherwise, reports are sent unencrypted.
5. At the Authentication Mechanism field, select either Plain or Login to start the authentication
process. The default is Plain.
6. Enter your username in the Username field.
7. Enter your password in the Password field.
8. Save your changes.
Once enabled and saved, scheduled reports will be sent to the specified recipients' email addresses
from the Email recipients field.
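
If you want to verify the SMTP endpoint independently of the sensor, the following is a minimal sketch
using Python's standard smtplib; the host, port, and credentials are placeholders, and this is a
connectivity check only, not how the product sends reports.

import smtplib

host, port = "mail.example.com", 587        # placeholders taken from the To URI field
username, password = "reports", "secret"    # placeholders

with smtplib.SMTP(host, port, timeout=10) as smtp:
    smtp.starttls()                          # corresponds to the STARTTLS checkbox
    smtp.login(username, password)           # Plain/Login authentication
    print(smtp.noop())                       # (250, ...) indicates the server accepted the session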

Figure 86: Report SMTP settings

Note: When enabled, for each scheduled report, an email will be sent beginning with the next
scheduled recurrence.

Time machine
This topic describes the time machine for the Nozomi Networks solution.
With the time machine feature, users can load a previously saved state (called a snapshot) and go
back in time, analyzing the data in the Nozomi Networks solution from a past situation. You can load a
single snapshot and use the platform as usual or load two snapshots and compare the user interface to
highlight changes.

Time machine snapshots list


This topic describes the time machine snapshots list. This includes loading a snapshot and requesting
a diff between two snapshots.
Access the time machine from the Web UI, by going to Analysis > Time machine. The Time machine
screen displays.
The snapshots periodically taken by the Nozomi Networks solution are displayed in this table.
Snapshots can be used to go back in time to analyze the Environment status at a certain point in time.
Two snapshots can be compared by means of a diff.

Figure 87: Time machine snapshots list

To configure retention, snapshot interval and event-based snapshots, see Configuring Time Machine
on page 426.

Loading a snapshot
1. From the Time machine screen, click the Load snapshot button to load and analyze a snapshot
as if you were in the past. The user interface turns gray to highlight that you are watching a static
snapshot.

Figure 88: Load snapshot button


2. Click the Forward button to return to the present and watch the Environment in real time.

Figure 89: Forward button



Figure 90: Forward button in utility navigation bar

Requesting a diff
1. From the Time machine screen, click the plus button to select the snapshot to be used as baseline
for the diff.

Figure 91: Plus button


2. The target of the diff can be either a snapshot from the past, or the current live environment.
• Click the plus button to select a past snapshot as diff target.
• Click the LIVE button to select the current live environment as diff target.

Figure 92: LIVE button


3. (Optional) Toggle the Exclude frequently changing fields switch on to exclude from the diff all
fields that are affected by normal traffic handling. Examples of frequently changing fields are the
number of bytes received/sent and the last activity time.

Figure 93: Exclude frequently changing fields


4. After the diff baseline / target have been configured, click the Diff button.

Figure 94: Diff button


5. The system evaluates the diff baseline/target files and estimates how CPU/memory intensive
the diff operation is going to be. If there is not enough free memory at the moment, the diff is
aborted and an appropriate message is shown to the user. If the diff is estimated to be a long
one (i.e., estimated to take more than a few minutes), a warning is shown and confirmation is
requested from the user.

Figure 95: Big snapshot diff warning


6. As soon as the diff operation is started, a dialog is shown which provides information on its
progress. If the diff operation needs to be stopped, click the Abort button.

Figure 96: Progress dialog


7. As soon as the diff results are computed, they are immediately shown on screen.
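
Conceptually, a diff between two snapshots reduces to the comparison sketched below. This is
illustrative Python only; the snapshot structure and the volatile field names are assumptions, and the
product performs the computation server-side.

VOLATILE_FIELDS = {"received_bytes", "sent_bytes", "last_activity_time"}  # assumed names

def snapshot_diff(baseline, target, exclude_volatile=True):
    # baseline/target: hypothetical dicts mapping an item ID to its field dict
    added = {k: v for k, v in target.items() if k not in baseline}
    removed = {k: v for k, v in baseline.items() if k not in target}
    changed = {}
    for key in baseline.keys() & target.keys():
        fields = baseline[key].keys() | target[key].keys()
        if exclude_volatile:
            fields -= VOLATILE_FIELDS
        if any(baseline[key].get(f) != target[key].get(f) for f in fields):
            changed[key] = (baseline[key], target[key])
    return added, removed, changed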

Reload the diff operation progress


Diff operations can take significant amounts of time to conclude. While such an operation is ongoing,
the user may use the Nozomi Networks solution normally and navigate to other locations of the Web
UI. In order to reload the diff operation progress:
1. Access the time machine again, by going to Analysis > Time machine
2. Click the Reload button.

Figure 97: Reload button


If the diff operation is still ongoing, the progress dialog is shown again. If the operation has
concluded, the diff results are shown directly.

Time machine diff from alert


This topic describes how to view diffs starting from alerts.
You can request a snapshot diff directly from an alert (instead of from the time machine); this
automatic feature uses the snapshots taken immediately before and immediately after the alert time.
1. Click the Alerts heading from the dashboard to access the Alerts table.
2. Click an alert ID in the alerts table.
3. Click the three dots in the upper left corner and then select the time machine diff button. You will
be redirected to the diff result screen.

Figure 98: Fast diff button

Time machine diff result


This topic describes the result of time machine diffs.
To view the difference between two snapshots, perform the following steps:
1. Request a diff between two snapshots (see Time machine snapshots list on page 137 for
instructions).
2. After requesting the diff between two snapshots, click Show changes to see the differences on
rows of interest.

Figure 99: Diff result

A Use these buttons to navigate between the Environment items


B Use these buttons to navigate between subsections (the example
shows nodes with changes)
3. Select one of four tabs to show diff results: Nodes, Links, Variables and Graph. Each tab
has three subsections: Added, Removed, Changed. Navigating between these sections and
subsections allows you to observe Environment changes between the two snapshots. For
example, you can observe if a node has been added or if a variable value has changed.
4. Observe diff results in table or graph format.

Figure 100: Diff results for a single node

The graph view and the use of color allows you to quickly spot the nodes or links that have been
added, removed, or changed. Added items are in green, those that have been removed are in red,
and those with changes are in blue. Click a node or link with changes to see details on the right side
of the graph.

Figure 101: Diff result as a graph



Vulnerabilities
This section describes the Vulnerabilities table.
From the Web UI, go to Analysis > Vulnerabilities. The Vulnerabilities table displays.
The Vulnerabilities table lists all vulnerabilities in table format. The table has three tabs:
• Assets: Lists vulnerability information per vulnerable asset.
• List: Lists vulnerabilities in table format.
• Stats: Lists vulnerability statistics on a global level.

Figure 102: Vulnerabilities table

Assets tab
From the dropdown menu in the Assets tab, you can filter only the most likely vulnerabilities, by
selecting Only Most Likely, with the likelihood threshold configured as shown in the image below.

Figure 103: Most likely filter configuration form

Note: The likelihood threshold is a value between 0.1 and 1.0, where 1.0 represents the maximum
likelihood of the CVE being present. Likelihood is the confidence that a certain vulnerability actually
exists on a particular node. The likelihood threshold is the minimum likelihood a vulnerability needs in
order to be shown on this page when the switch is turned on. As a guideline, we suggest using
0.8 for a high level of confidence, 0.5 for a medium level of confidence, and 0.3 for a low level of
confidence.
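
The threshold behaves as a simple cut-off. The following short Python sketch is illustrative only; the
vulnerability records and likelihood values are hypothetical.

def most_likely(vulnerabilities, threshold=0.8):
    # 0.8 ~ high confidence, 0.5 ~ medium, 0.3 ~ low (see the guideline above)
    return [v for v in vulnerabilities if v["likelihood"] >= threshold]

sample = [{"cve": "CVE-2017-0144", "likelihood": 0.9},
          {"cve": "CVE-2014-0160", "likelihood": 0.4}]
print(most_likely(sample))        # only the 0.9 entry passes the default 0.8 threshold
print(most_likely(sample, 0.3))   # both entries pass a 0.3 threshold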
Click the Common Vulnerabilities and Exposures (CVE) link to view a popup with additional details
about the vulnerability.

Figure 104: Vulnerability details popup

List tab
From the List tab, update the vulnerability status using the controls. Vulnerability status options
are: Unresolved, Mitigated, and Accepted. Both the Mitigated and Accepted statuses lead to a
resolution status of true.

Figure 105: Change CVE resolution

The resolution status and reason can also be updated automatically in the background by the system,
as a result of Smart Polling. See Smart Polling on page 259 for additional information.
For example, Guardian ticket incidents that are closed in ServiceNow are propagated to Guardian
through a synchronization process that is configured in the Smart Polling section of the Guardian portal.
Using the "Close incidents according to their status on the external service" checkbox, you can toggle
incident synchronization on and off. Incidents closed in ServiceNow are sent to Guardian when the
box is checked.

Stats tab
From the Stats tab, you can view the top CPEs, CWEs, and CVEs in graphic format.

Figure 106: Stats tab



Settings
Settings in the Nozomi Networks solution allow users to customize the product to fit their specific
needs through a series of configuration steps. This differs from system configuration, which is
primarily related to product accessibility.

Command Line Interface (CLI)


This topic describes the Nozomi Networks solution Command Line Interface (CLI).
The CLI allows you to change the configuration parameters and perform troubleshooting activities.
See the Configuration section for a complete list of configuration rules.

Figure 107: Command Line Interface

Useful commands

• help: Shows a list of available commands
• history: Shows previously entered commands
• clear: Clears the console
• find_cmd: Finds available CLI commands with a given sequence of space-separated keywords

Keyboard shortcuts

• Ctrl+R: Reverses search through the command history
• Esc: Cancels search
• Up arrow: Displays the previous entry in the history
• Down arrow: Displays the next entry in the history
• Tab: Invokes the completion handler
• Ctrl+A: Moves the cursor to the beginning of the line
• Ctrl+E: Moves the cursor to the end of the line

Firewall integrations
This topic describes how to configure Guardian firewall integrations.
The Nozomi Networks solution discovers, identifies, and learns the behavior of assets on your network.
Through integration with the firewall, unlearned nodes and links are automatically blocked through
block policies. Block policies are not created for nodes and links in the learned state.
Note: For some firewall integrations, the Nozomi Networks Operating System (N2OS) supports
session kill.
Guardian supports integration with the following firewalls:
• Fortinet FortiGate on page 146
• Check Point Gateway on page 147
• Palo Alto Networks v8 on page 148
• Palo Alto Networks v9 on page 149
• Palo Alto Networks v10 on page 150
• Cisco ASA on page 152
• Cisco FTD on page 152
• Cisco ISE on page 153
• TXOne EdgeIPS on page 156
• Stormshield SNS on page 156
• Barracuda on page 158
Note: Setting up firewall integrations requires administrative privileges.
1. From the Web UI, go to Administration > Settings > Firewall Integration to begin the integration
process.
2. Then select the firewall from the Select an option dropdown menu.

After the integration has been set up, policies are produced and inserted in the firewall. The policies are
displayed in the Policies section.

Features
• Firewall integrations only work when the global learning policy mode is set to protecting and strict.
They do not work when the policy for zones is set to override the protecting and strict mode. In this
mode, new nodes are visible, but they are not learned.
• If the global learning policy is set to learning and adaptive, and a zone is set to protecting and
adaptive, new nodes are visible but not learned; however, links to new nodes are learned
automatically.

Fortinet FortiGate
This topic describes how to configure Guardian firewall integration with the Fortinet FortiGate firewall.
This integration uses the REST API. The supported FortiOS versions are 6.2, 6.4, 7.0, and 7.2.
Prerequisites
• You need a REST API access token, which can be generated directly from the firewall admin Web
UI.
• The access token needs to have permission to insert, read, and delete entities as addresses,
addrgroups, routes, sessions and policies. Also, add the Guardian address subnet to trusted hosts.

• The vdom field is optional. If you specify multiple vdoms, use a comma (,) to separate them, such
as vdom1,vdom2.
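
Before saving the integration, you may want to confirm that the token and trusted-host prerequisites
work. The following Python sketch is a hedged example only: the /api/v2/cmdb/firewall/address path
and the access_token parameter follow common FortiOS REST API conventions, so verify them against
the API reference for your FortiOS version.

import requests

host = "192.0.2.10"                # placeholder firewall address
token = "YOUR_REST_API_TOKEN"      # generated from the FortiGate admin Web UI

response = requests.get(
    f"https://{host}/api/v2/cmdb/firewall/address",   # assumed endpoint for address objects
    params={"access_token": token},
    verify=False,                  # only if the firewall presents a self-signed certificate
    timeout=10,
)
print(response.status_code)        # 200 suggests the token can read address objects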
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Fortigate from the dropdown menu to access the Fortigate
firewall. Then, complete the following information in the Required tab:
a. Enter the host IP address, in the Host field, if not entered by default.
b. (Optional) Enter a vdom in the vdom field.
c. Enter an access token in the Access token field.
Figure 108: FortiGate configuration


4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox.
Note: You can enable transparent mode through the proper flag. With transparent mode enabled,
the integration also sends layer 2 rules to the firewall.
If you enable transparent mode, the port check feature is disabled by default.
5. Save your changes.

Check Point Gateway


This topic describes how to configure Guardian firewall integration with the Check Point Gateway
firewall.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Check Point Gateway from the dropdown menu to access
the Check Point Gateway firewall. Then, complete the following information:
a. Enter the host IP address, in the Host field, if not entered by default.
b. Enter the SAM server in the SAM server field.
c. Enter the Firewall host in the Firewall host field.
d. Enter your user name in the User field.
e. Enter your password in the Password field.

Figure 109: Check Point Gateway configuration


4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox.
5. Save your changes.

Figure 110: Guardian policies inserted in the Check Point Gateway

Palo Alto Networks v8


This topic describes how to configure Guardian firewall integration with the Palo Alto Networks v8
firewall.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Palo Alto Networks v8 from the dropdown menu to access
the Palo Alto Networks v8 firewall. Then, complete the following information:
a. Enter the host IP address in the Host (CA-Emitted TLS Certificate) field, if not entered by
default.
Note: Nozomi Networks recommends the use of SSL certificates in your environment.
b. (Optional) Enter a virtual system name in the Virtual System name (optional) field.
c. Enter your user name in the User field.
d. Enter your password in the Password field.

Figure 111: Palo Alto v8 configuration


4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox.
5. Save your changes.

Figure 112: Guardian policies inserted in the Palo Alto v8 firewall

Palo Alto Networks v9


This topic describes how to configure Guardian firewall integration with the Palo Alto Networks v9
firewall.
Background
Starting with version 9.0, PAN-OS provides a REST API. The Guardian integration that relies on this
new API supports the same features as the previous Palo Alto integration and the following ones:
• Commit by user: Commits the current changes required by the user, which are represented by the
credentials used for the API. Global commits are no longer performed.
• Dynamic Address Groups for Node Blocking: the Dynamic Address Group references a tag, which is
then assigned to the new IP address objects that are created on the firewall. This automatically
applies the global Guardian denylist rule to each new address without modifying the firewall ruleset.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Palo Alto Networks v9 from the dropdown menu to access
the Palo Alto Networks v9 firewall. Then, complete the following information:
a. Enter the host IP address in the Host (CA-Emitted TLS Certificate) field, if not entered by
default.
Note: Nozomi Networks recommends the use of SSL certificates in your environment.
b. (Optional) Enter a virtual system name in the Virtual System name (optional) field.
c. Enter your user name in the User field.
d. Enter your password in the Password field.

Figure 113: Palo Alto v9 configuration


4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox.
5. Save your changes.

Figure 114: Guardian policies inserted in the Palo Alto v9 Firewall

Palo Alto Networks v10


This topic describes how to configure Guardian firewall integration with the Palo Alto Networks v10
firewall.
Background
Starting with version 10.0, PAN-OS provides a REST API. The Guardian integration that relies on this
new API supports the same features as the previous Palo Alto integration and the following ones:
• Commit by user: Commits the current changes required by the user, which are represented by the
credentials used for the API. Global commits are no longer performed.
• Dynamic Address Groups for Node Blocking: the Dynamic Address Group references a tag, which is
then assigned to the new IP address objects that are created on the firewall. This automatically
applies the global Guardian denylist rule to each new address without modifying the firewall ruleset.
Note: This firewall integration supports IPv6 addresses.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Palo Alto Networks v10 from the dropdown menu to access
the Palo Alto Networks v10 firewall. Then, complete the following information:
a. Enter the host IP address in the Host (CA-Emitted TLS Certificate) field, if not entered by
default.
Note: Nozomi Networks recommends the use of SSL certificates in your environment.
b. (Optional) Enter a virtual system name in the Virtual System name (optional) field.
c. Enter your user name in the User field.
d. Enter your password in the Password field.

Figure 115: Palo Alto v10 configuration section, block unlearned strategy

For "Block active alerts" Firewall rules strategy Guardian will create policies to block links
associated with selected alert types.

Figure 116: Palo Alto v10 configuration section, block active alerts strategy
4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox.
5. Save your changes.

Figure 117: Guardian policies inserted in the Palo Alto v10 Firewall

Cisco ASA
This topic describes how to configure Guardian firewall integration with the Cisco ASA firewall.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Cisco ASA from the dropdown menu to access the Cisco
ASA firewall. Then, complete the following information:
a. Enter the host IP address, in the Host field, if not entered by default.
b. Enter your user name in the User field.
c. Enter your password in the Password field.
SSL check is always skipped.

Figure 118: Cisco ASA configuration


4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox. For example, you can permit session kill by checking the Enable session kill
checkbox.
5. Save your changes.

Figure 119: Guardian policies inserted in the Cisco ASA

Cisco FTD
This topic describes how to configure Guardian firewall integration with the Cisco FTD firewall.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Cisco FTD from the dropdown menu to access the Cisco
FTD firewall. Then, complete the following information:
a. Enter the host IP address, in the Host field, if not entered by default.
b. Enter your user name in the User field.
c. Enter your password in the Password field.

SSL check is always skipped.

Figure 120: Cisco FTD configuration


4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox. For example, you can permit session kill by checking the Enable session kill
checkbox.
5. Save your changes.

Cisco ISE
This topic describes how to configure Guardian firewall integration with the Cisco ISE firewall.

Introduction
The integration between Cisco ISE and Nozomi Networks Guardian allows Cisco customers to extend
network access controls and policy enforcement to their OT and IoT networks from the Cisco ISE.
Nozomi Networks Guardian integrates with Cisco ISE using the pxGrid platform.
The preferred method of authenticating with Cisco ISE is via certificates. Along with the client
associated with the certificate and the certificate password, you need to upload the identity certificate
and the private key. Guardian supports:
• Authentication using certificates issued by the Cisco ISE internal Certificate Authority (CA)
• Authentication using certificates issued by an external CA (third-party certificates)

Procedure
Perform these steps to authenticate using certificates issued by the Cisco ISE CA and by external CAs:
1. From the Web UI, go to the gear ( ) icon in the upper right corner of the screen, then select
Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Cisco ISE from the dropdown menu to access the Cisco ISE
firewall. Then, complete the following information:
a. Enter the host IP address, in the Host field, if it is not present by default. The host IP address is
the IP address of the Cisco ISE firewall for which you are configuring the integration.
b. Enter the client name in the Client name field. The client name is taken from the Cisco
ISE pxGrid Services screen on the Cisco ISE Web UI. (See the appropriate Cisco ISE
documentation for additional information.)
4. To authenticate with a Cisco ISE internal CA certificate, check the Authenticate with certificate
box, then enter the password in the Password field.

Figure 121: Choose firewall - Cisco ISE configuration using an ISE internal CA certificate
5. If you are assigning a third-party certificate, check the Use third party certificate box, then import
the certificate(s), using one of the following methods:
• Click Import the CA certificate and then upload the CA certificate.
• Click Import the certificate and then upload the certificate.
• Click Import the key and then upload the private key.
Note: If you import the CA certificate or import the certificate, the file must have the 'cer'
extension. If you import the key, the key file must have the 'key' extension.

Figure 122: Choose firewall - Cisco ISE configuration using a third-party certificate
6. (Alternative) If you have an existing client, you can also authenticate using a username and
password.
a. Check the Use existing client box.
b. Enter the password in the Password field.

Figure 123: Choose firewall - Cisco ISE configuration using an existing client
7. (Alternative) To create a new client from Guardian, at the Choose firewall screen:
a. In the Host field, enter the host IP address, which is the IP address of the Cisco ISE firewall for
which you are configuring the integration.
b. In the Client name field, enter the client name from the Cisco ISE pxGrid Services screen on
the Cisco ISE Web UI.
c. Click the Create client button.
d. Approve the new client from the Cisco ISE pxGrid Services screen. (See the appropriate Cisco
ISE documentation for additional information.)
Note: The password returned by Cisco ISE is not displayed, but is kept in the Guardian
configuration.

Figure 124: Choose firewall - Cisco ISE configuration to create a new client
8. (Optional) In the Options section, make any configuration changes as needed. Each option is
described beneath its checkbox. For example, to enable node blocking, check the Enable node
blocking checkbox.
9. Save your changes. You can see your changes in the Policies Cisco ISE popup.

Figure 125: Policies Cisco ISE


From the Web UI, you can perform field validations using the Save and Pull policies buttons. If fields
are missing, a warning message displays. For authentication errors, such as a wrong password or

certificate mismatch, the Web UI displays a message that details the reason for the error. For further
error details, search for the Cisco ISE string in the log file /data/log/n2os/n2osjobs.log.

TXOne EdgeIPS
This topic describes how to configure Guardian firewall integration with the TXOne EdgeIPS firewall.
Background
TXOne's OT Defense Console (ODC) provides a REST API v1.1. The Guardian integration relying on
this API supports the same features as previous integrations from Trend Micro.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select TXOne EdgeIPS from the dropdown menu to access the
TXOne EdgeIPS firewall. Then, complete the following information:
a. Enter the host IP address, in the Host field, if not entered by default.
b. Enter the API Key in the API Key field. The API Key is shown as the User name.
c. Enter the API Secret in the API Secret field. The API Secret, along with the API Key, allows
the Guardian firewall integration to access your account without providing your actual
username and password.

Figure 126: TXOne EdgeIPS configuration


4. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox.
5. Save your changes.

Figure 127: Guardian policies inserted in the TXOne's EdgeIPS firewall

Stormshield SNS
This topic describes how to configure Guardian firewall integration with the Stormshield SNS firewall.

Guardian integration supports Stormshield CLI API v4.


1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Stormshield SNS from the dropdown menu to access the
Stormshield SNS firewall.

Figure 128: Stormshield SNS configuration section (credentials authentication)


4. In the Configuration section, in the Required tab, enter the SSL certificate name in the Host
(CA-Emitted TLS Certificate) field.
5. In the Authentication field, click the Certificate tab to authenticate with a certificate.
6. To authenticate with a Stormshield CLI API v4 certificate, check the Import the certificate box,
then enter the password in the Password field.
7. (Alternative) You can also authenticate using a key. Check the Import the key box, then enter the
password in the Password field.

Figure 129: Stormshield SNS configuration section (certificate authentication)


8. From the Options section make any configuration changes, as needed. Each option is described
beneath its checkbox.
9. Save your changes.

Figure 130: Guardian policies inserted in the Stormshield SNS firewall

Barracuda
This topic describes how to configure Guardian firewall integration with the Barracuda firewall.
Guardian integration supports Barracuda API v8.3.
1. Go to Administration > Settings > Firewall integration to access firewall integrations.
2. At the Firewall integration screen, click the + sign in the upper right corner to add a firewall.
3. At the Choose firewall popup, select Barracuda from the dropdown menu. Then, complete the
following information:
a. Enter the host IP address, in the Host field, if not entered by default.
b. Enter the IP range in the Range field.
c. Enter the cluster in the Cluster field.
d. Enter the shared firewall service name in the Shared firewall service name field.
e. Enter the rules list name in the Rules list name field.
f. Enter the token in the Token field.
4. If needed, tune the integration's behavior in the Options section of the Configuration popup.
• Click the Enable Nodes Blocking checkbox to control node communication in the firewall
according to the Environment status.
• Click the Enable Links Blocking checkbox to control link communication in the firewall
according to the Environment status.

Figure 131: Barracuda configuration



Figure 132: Guardian policies inserted in the Barracuda firewall



Data integration
This topic describes how N2OS exchanges data with third-party systems.

Introduction
The Nozomi Networks solution uses third-party platforms to export data, using specific formats and
methods. After you configure the endpoints of the third-party platforms for data integration, Guardian
generates messages regarding alerts, health, and audits. Connectivity status and status are checked
against the data integration endpoint.
The third-party platforms that the Nozomi Networks solution uses for exporting data are:
• FireEye CloudCollector
• IBM QRadar (LEEF)
• ServiceNow
• Tanium
• Cisco ISE

Third-party platforms for exchanging data


The third-party platforms that the Nozomi Networks solution uses for exporting data are:
• FireEye CloudCollector
In addition to alerts, the FireEye CloudCollector integration allows you to send health logs, DNS
logs, HTTP logs and file transfer logs.

• IBM QRadar (LEEF)


The IBM QRadar integration permits you to send all alerts (and optionally health logs) in LEEF
format. You can also send asset information to QRadar beginning with version 2.0.0 of the QRadar
App. Click How this integration works to view additional details.

• ServiceNow
The ServiceNow integration allows you to forward incident and asset information to a ServiceNow
instance. Using the options below, you can decide to send just new incidents or historical incidents.
You can also choose if currently existing assets in ServiceNow need to be updated with the
information present in the sensor or if assets in ServiceNow will only be created if they do not exist
there yet. Click How this integration works to view additional details.

• Tanium
This integration allows you to forward asset information to a Tanium instance. Click How this
integration works to view additional details.
Note the following:
• If the Tanium instance does not have a valid signed HTTPS certificate authority (CA), users must
add an ! before the URL (ex. !https://192.168.1.1)
• Nodes are sent, not assets.
• Nodes are sent regardless of whether MAC addresses are confirmed or not (all nodes).

• If integrating with the CMC, use all-in-one mode. This is because multi-context CMC does not
have nodes.

• Cisco ISE
With the Cisco ISE integration, you can send the results of custom node queries to Cisco's ISE
asset information using the STOMP protocol. Click How this integration works to view additional
details about certificate usage and Cisco ISE environment requirements.

Perform these steps to configure the Cisco ISE:


1. Create the following custom string attributes: n2os_change_flag,
n2os_operating_system, n2os_product_name, n2os_vendor, n2os_type,
n2os_appliance_site, n2os_zone.
2. Create a new profile and set the required condition n2os_change_flag custom attribute equal to
change.
3. Modify the existing profiles or, if no profiles are expected to be assigned to assets from n2os,
create a new profile. Add the required condition for n2os_change_flag.
Note: Due to a long-standing bug in Cisco's pxGrid API, performance when sending assets
is halved, requiring two network calls for each updated record. Nozomi Networks is working with
Cisco to address this issue; Cisco has not provided a target date for a fix.
• Microsoft Endpoint Configuration Manager (WinRM RPC)
With the WinRM RPC, you can collect information coming from the Microsoft Endpoint Configuration
Manager to update Windows nodes.
Collected items
• OS information: Returns OS information as version, service pack, build and architecture.

• Hostnames: Returns host name information to configure the node label.


• Interfaces information: Returns interface data to populate the node MAC address.
• Installed software: Returns installed software and populates the node CPE.
• Hotfixes: Returns installed software version updates and checks to see if there are node CVEs
to close.
Note: It is important to filter the strategy nodes because without the filter, the strategy waits
for the timeout of non-Microsoft nodes that are not reachable. This significantly decreases the
performance of the data integration strategy.

• Microsoft Endpoint Configuration Manager (DB)


The goal of this integration is to collect information from the Microsoft Endpoint Configuration
Manager DB to update the existing Windows nodes.
Collected items
• OS information: Returns OS information, such as version, service pack, build, and architecture.
• Hostnames: Returns host name information to configure the node label.
• Interfaces information: Returns interface data to populate the node MAC address.
• Installed software: Returns installed software and populates the node CPE.
• Hotfixes: Returns installed software version updates and checks for node CVEs to close.
Note: It is important to filter the strategy nodes because, without the filter, the unreachable non-
Microsoft nodes time out, which significantly decreases the performance of the data integration.
The database name default value format is CM_[Site code], which may be changed by an
administrator.

Methods and formats for exchanging data


The methods or formats that can be used to export data from the Nozomi Networks solution are:
• Common Event Format (CEF)
CEF allows you to send alerts and health logs in CEF format. You can also enable encryption of the
data through the TLS checkbox and check the validity of the CEF server’s certificate with the CA-
Emitted TLS Certificate checkbox. Click How this integration works to view additional details.

Note that the Nozomi Networks solution has defined custom label fields in our CEF implementation.
Ensure that your integration recognizes these custom labels and deals with them appropriately.

Field         Label field        Label value               Field sample
cs1           cs1Label           Risk                      Risk level for the alert
cs2           cs2Label           IsSecurity                Is this a security alert
cs3           cs3Label           Id                        Alert ID (not Alert Type ID) of the alert in the Nozomi system
cs4           cs4Label           Detail                    Alert details
cs5           cs5Label           Parents                   Parent IDs of the alert if related to others
cs6           cs6Label           n2os_schema               This is the Nozomi Schema version
flexString1   flexString1Label   mitre_attack_techniques   T0843
flexString2   flexString2Label   mitre_attack_tactics      Impair Process Control, Inhibit Response Function, Persistence
flexString3   flexString3Label   Name                      Suspicious Activity

The CEF data integration now sends the name attribute of alerts in the flexString CEF field.
For example:

nozomi-ids.local n2osevents[0]: CEF:0|Nozomi Networks|N2OS|


21.9.0-01051414_C13FC|SIGN:MULTIPLE-UNSUCCESSFUL-LOGINS|Multiple
unsuccessful logins|8|
app=smb
dvc=172.16.193.105
dvchost=nozomi-ids.local
cs1=8.0

cs2=true
cs5=["22114bf0-813c-434c-b4d7-933d2a54b4e1"]
cs6=3 cs1Label=Risk
cs2Label=IsSecurity
cs3Label=Id
cs5Label=Parents
cs6Label=n2os_schema
flexString1=T0843
flexString1Label=mitre_attack_techniques
flexString2=impair_process_control, inhibit_response_function, persistence
flexString2Label=mitre_attack_tactics
flexString3=suspicious_activity
flexString3Label=name
dst=192.168.1.77
dmac=f0:1f:af:f1:40:5c
dpt=445
msg=Multiple unsuccessful logins detected with protocol smb. The usernames
'', 'DOMAIN\VCA07_12$' attempted at least 40 connections in 15 seconds
src=192.168.1.227
smac=d8:9e:f3:3a:cb:3a
spt=57280
proto=TCP
start=1651456283700
• Splunk - Common Information Model (JSON)
If you need to send alerts to a Splunk - JSON instance, you can use this integration. Data is sent in
JSON format and you can also filter on alerts. You can also send health logs and audit logs.
Click How this integration works to view additional details.

• SMTP forwarding
To send reports, alerts and/or health logs to an email address, you can configure an SMTP
forwarding endpoint. In this case, you are also able to filter alerts.

• SNMP trap
Use this kind of integration to send alerts through an SNMP trap.

• Syslog forwarder
Use this type of integration to send syslog events captured from monitored traffic to a syslog
endpoint.
It is useful for passively capturing logs and forwarding them to a SIEM.
Note: In order to enable syslog events capture see Enable Syslog capture feature in the Basic
configuration section of the Configuration chapter of this manual.

• Custom JSON
This type of integration sends all alerts to a specific URI using the JSON format (a minimal receiver
sketch appears after this list).

• Custom CSV
This type of integration sends the results of the specified query to a specific URI in CSV format.

• DNS Reverse Lookup


This integration sends reverse DNS requests for the nodes in the environment and uses the names
provided by the DNS as nodes' labels. You can pre-filter the nodes by specifying a query filter. The
strategy runs once a day by default, but you can run it on demand by selecting Rerun the strategy
on all the data.

• CheckPoint IoT
This integration allows you to forward asset information and node blocking policies to an instance of
the CheckPoint Smart Console. Click How this integration works to view additional details. This
integration is available only on CMCs.

• Kafka
The Kafka integration allows you to send the results of custom queries in JSON format to existing
topics of a Kafka cluster. Click How this integration works to view additional details.

• External storage
The external storage integration uploads files to an external machine. This enables the external
machine to keep remote copies of files that are kept beyond the retention settings. The file location
becomes transparent to the user, who can retrieve them seamlessly from external storage when the
files are removed from the local file system. You can also choose a connection protocol for storing
the files. Available protocols are smb, ftp, and ssh.
Important: The smb connection protocol is only supported by Microsoft operating systems.
Compatibility with third-party devices is not guaranteed. These devices may require additional
configuration changes, including permission changes, creation of new network shares, and creation
of new users. Kerberos authentication is not supported.
Note: This functionality is currently only available for trace pcap files on Guardian.
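
For integrations such as Custom JSON, the receiving side is yours to implement. The sketch below is a
minimal, assumption-laden example of an HTTP receiver built with Python's standard library; the port
and payload handling are placeholders, and the exact payload shape depends on the sensor version.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        payload = json.loads(body or b"null")        # payload shape depends on the sensor version
        count = len(payload) if isinstance(payload, list) else 1
        print(f"received {count} record(s) from {self.client_address[0]}")
        self.send_response(200)                      # a 2xx response marks the delivery as successful
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), AlertReceiver).serve_forever()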

Configuring endpoints for data integration


You can configure endpoints using the Guardian Web UI (Administration > Settings > Data
Integration). Depending on the configuration, each endpoint may receive alerts, health logs, and other
items. You can find more information about data integrations in the Web UI.
Perform these steps to configure an additional data integration:
1. From the Web UI, go to Administration > Settings > Data Integration to configure endpoints. The
Data integration screen displays.

2. Select +Add. The New Endpoint dialog box displays.


3. Select an option from the Choose a configuration dropdown menu in the Endpoint configured as
field.

Figure 133: Endpoint configured as


Integrations that can send data via UDP have a default maximum message size of 1024 bytes. You can
change the default value by adding a max-size query parameter to the URI, for example:

udp://host?max-size=2048
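
To confirm that UDP deliveries arrive and respect the configured max-size, a throwaway listener such
as the following Python sketch can be used on the receiving host; the port is a placeholder.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))                 # placeholder port; use your endpoint's port

while True:
    data, addr = sock.recvfrom(65535)
    print(f"{addr[0]}: {len(data)} bytes")  # should not exceed the configured max-size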

Figure 134: Examples of configured endpoints

See the individual data integrations below for specific information about the integration between the
Nozomi Networks solution and a third-party system. Some integrations have additional details about
the integration. If the integration provides additional information, click How this integration works to
view additional details.

Nozomi syslog data events and syslog messages


For customers implementing syslog, Guardian generates three types of syslog events: alerts, health,
and audit. Alert events should be identified by the alert type ID.
Note: As the set of alert messages inside each alert type ID category increases over time, perform
searches on alert type IDs, health type IDs, and audit type IDs, rather than on the alert message itself.
Alert events
There are many alert types in the Nozomi Networks environment. Refer to the Alerts Dictionary for a
full reference of alert types.

Alert events in CEF have the following format, as shown in this example:

<137>Oct 17 2019 22:32:23 local-sg-19.x n2osevents[0]: CEF:0|Nozomi


Networks|N2OS|19.0.3-10142120_A2F44|SIGN:MALWARE-DETECTED|Malware detected|
9|
app=smb
dvc=172.16.248.11
dvchost=local-sg-19.x
cs1=9.0
cs2=true
cs3=d25c520f-7f79-4820-b5ae-d1b334b05c75
cs4={trigger_type: yara_rules, trigger_id: MALW_DragonFly2.yar}
cs5=["5740a157-08e8-490f-85ad-eef23657e3cb"]
cs6=1
cs1Label=Risk
cs2Label=IsSecurity
cs3Label=Id
cs4Label=Detail
cs5Label=Parents
cs6Label=n2os_schema
flexString1=T0843
flexString1Label=mitre_attack_techniques
flexString2=Impair process (etc)
flexString2Label=mitre_attack_tactics
flexString3=Suspicious Activity
flexString3Label=name
dst=172.16.0.55
dmac=00:0c:29:28:dd:c5
dpt=445
msg=Suspicious transferring of malware named 'TemplateAttack_DragonFly_2_0'
was detected involving resource '\\172.16.0.55\ADMIN
\CVcontrolEngineer.docx' after a 'read' operation [rule author: US-CERT
Code Analysis Team - improved by Nozomi Networks] [yara file name:
MALW_DragonFly2.yar]
src=172.16.0.253
smac=00:04:23:e0:04:1c
spt=1148
proto=TCP
start=1571351543431

Note the Alert Type ID in the alert message (SIGN:MALWARE-DETECTED in the example above). This
should be used as the key for performing searches once Nozomi syslog events have been ingested into
the integration platform.
Best practice: Ensure that your parsing logic extracts the appropriate data. If you are integrating with
CEF messages, a CEF parser must be used. Do not use regular expressions. This will ensure the
integration integrity in the future. When using the correct parser for the data that is expected, be sure to
test different inputs to ensure that data is correctly extracted from the messages.
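
As a minimal illustration of this best practice, the following Python sketch parses the CEF header and
extension with string handling rather than regular expressions; it is simplified (extension values
containing spaces, such as msg, need a full CEF parser) and is not the parser used by any particular
SIEM.

def parse_cef(line):
    # CEF header: CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity|Extension
    _, payload = line.split("CEF:", 1)
    parts = payload.split("|", 7)
    header = {
        "version": parts[0], "vendor": parts[1], "product": parts[2],
        "device_version": parts[3], "signature_id": parts[4],
        "name": parts[5], "severity": parts[6],
    }
    extension = {}
    for token in parts[7].split():
        key, sep, value = token.partition("=")
        if sep:
            extension[key] = value          # simplified: breaks on values containing spaces
    return header, extension

header, ext = parse_cef(
    "CEF:0|Nozomi Networks|N2OS|19.0.3|SIGN:MALWARE-DETECTED|Malware detected|9|app=smb dvc=172.16.248.11"
)
print(header["signature_id"])               # SIGN:MALWARE-DETECTED, the Alert Type ID to search on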
Health events
Health events in CEF have the following format, as shown in this example:

<131>Oct 10 2019 15:57:48 local-sg-19.x n2osevents[0]: CEF:0|Nozomi Networks|N2OS|19.0.3-10201846_FD825|HEALTH|Health problem|0|
dvchost=local-sg-19.x
cs6=1
cs6Label=n2os_schema
msg=LINK_DOWN_on_port_em0

Note the health type ID in the CEF header (HEALTH in this example). This should be used as
the key for performing searches once Nozomi syslog events have been ingested into the integration
platform.
Best practice: Ensure that your parsing logic extracts the appropriate data. If you are integrating CEF
messages, use a CEF parser rather than regular expressions; this preserves the integrity of the
integration as message content evolves over time. Even with the correct parser, be sure to test different
inputs to verify that data is correctly extracted from the messages.
Audit events
Audit events in CEF have the following format, as shown in this example:

<134>Oct 10 2019 16:00:18 local-sg-19.x n2osevents[0]: CEF:0|Nozomi Networks|N2OS|19.0.3-10201846_FD825|AUDIT:SESSIONS:CREATE|User signed in|0|
dvchost=local-sg-19.x
cs1=Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:69.0) Gecko/20100101 Firefox/69.0
cs6=1
cs1Label=browser
cs6Label=n2os_schema
msg=User signed in
src=172.16.248.1
suser=admin
start=1570723218425

Note the audit type ID in the CEF header (AUDIT:SESSIONS:CREATE in this example). This should be used as
the key for performing searches once Nozomi syslog events have been ingested into the integration
platform.
Best practice: Ensure that your parsing logic extracts the appropriate data. If you are integrating CEF
messages, use a CEF parser rather than regular expressions; this preserves the integrity of the
integration as message content evolves over time. Even with the correct parser, be sure to test different
inputs to verify that data is correctly extracted from the messages.

Connectivity status and status


Connectivity status represents the state of the connection to the data integration endpoint, and is
updated when a new connection is initiated. The value is OK if the preliminary connection check
returns a success response; otherwise, the value of the error / exception raised during these
preliminary checks is shown.
Status represents the state of the last data operation, i.e. whether the data integration was able to
send data or received errors. The value is OK if the integration returns a success response, such as a
200 status code; otherwise, the system displays the value of the error / exception raised while trying to
send the data.
Both Connectivity status and Status support the OK value across all data integrations. If either is not
OK, one or more errors have occurred, and the value displays error details that depend on the specific
data integration.

Playbooks
Playbooks are instructions associated with alerts that guide users to take proper action when an alert is
raised.
The procedure to use playbooks is:
1. A playbook template is created. The template contains text (optionally using Markdown syntax) that
describes the actions, tasks, and other guidelines to be taken or followed when a specific alert is
raised.
2. The playbook template is associated with a specific alert using an alert rule with the action Assign
playbook (see Alert tunings for information on creating alert rules). The alert rule matches the alert
using the usual alert rule matching criteria, then inserts a copy of the playbook template into the
alert when there is a match. Alert rules can assign the same playbook template to various alerts
using different matching criteria.
3. Once a playbook is assigned to an alert, the playbook can be modified independently of the original
playbook template. This is typically used to add notes for a specific alert or to mark actions as
performed.

Note: Playbooks and alert rules written in Vantage are automatically propagated to the connected
sensors.

Creating a playbook template


Perform these steps to create a playbook template:
1. From the Web UI, go to the gear icon ( ) in the upper right corner, then select Settings > Alert
playbooks. The list of available playbook templates is shown.
Note: The list is empty if no playbooks were previously created or propagated from CMC or
Vantage.

Figure 135: Alert playbooks


2. Press the +Add button. The Create Playbook popup appears.

Figure 136: Create playbook


3. At the Create playbook popup:
a. In the Name field, enter a playbook name.
b. In the Playbook field, enter steps to be followed when alerts associated with this playbook
occur.
c. Click Create playbook to save your changes.
Following are playbook examples, but there is no specific style or format required.

Figure 137: Playbook - Example 1

This is another example of a playbook.

Figure 138: Playbook - Example 2
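For reference, a playbook template might contain content such as the following hypothetical sketch (the wording and Markdown structure are purely illustrative; any style works):

# Malware detected on an OT asset
1. Isolate the affected node from the network (coordinate with plant operations first).
2. Request a trace of the suspicious traffic and attach it to the alert.
3. Verify the indicator (file hash or YARA rule name) against your threat intelligence sources.
4. If the threat is confirmed, open an incident ticket and escalate to the security team.
5. Close the alert, selecting the appropriate closing reason.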

(Example) How to assign a playbook to an alert


To assign a playbook to an alert, first create an alert rule with the Assign playbook action, using the
instructions in the Alert tunings section of this manual.
In this example, we briefly outline the procedure to assign a playbook to alerts with a specific Type
ID.
1. From the Alerts tuning tab, click the +Add button to add a playbook to the alert rule. The
Configure alert popup appears.

Figure 139: Configure alert popup


2. In the Type ID field, enter the type ID of the alerts with which you would like to associate the
playbook.
3. Complete the Assign Playbook tab:
a. In the Playbook name field, select the name of the playbook from the dropdown menu.
b. Save your changes.

Figure 140: Assign Playbook tab

Editing/modifying a playbook template


Perform these steps to edit/modify a playbook template:
1. From the Web UI, go to the gear icon ( ) in the upper right corner, then select Settings > Alert
playbooks. The Alert playbooks screen appears.

2. Select a playbook template from the list, then select the configure ( ) icon next to it from the
Actions column. The Edit playbook popup appears.

Figure 141: Edit playbook template


3. Edit the playbook template fields, as needed.
a. In the Name field, edit the playbook template name.
b. In the Playbook field, edit the steps to be followed when alerts associated with this playbook
template occur.
c. Click Edit playbook to save your changes.

Deleting a playbook template


Perform these steps to delete a playbook:
1. From the Web UI, go to the gear icon ( ) in the upper right corner, then select Settings > Alert
playbooks. The Alert playbooks screen appears.

Figure 142: Alert playbooks


2. Select a playbook, then select the delete ( ) icon next to it from the Actions column.

Note: If you delete a playbook that is referenced by one or more alert rules, you will be asked to confirm the deletion.
If you proceed with the deletion, all alert rules associated with the playbook are also deleted.

Credentials manager
The Credentials manager regulates credentialing for node communication using protocols and Smart
Polling.

Introduction
Credentials manager is a feature that securely stores passwords and other sensitive information
used by Guardian to access hosts through Smart Polling, or to decrypt encrypted transmissions that
are passively detected. The migration task migrates existing credentials from the Smart Polling plan
configurations to the new Credentials manager to enhance sensitive data maintenance.
Go to Administration > Settings > Credentials manager to access the Credentials manager. The
Credentials screen appears, with a list of identities. Depending on the scope, the credentials are used
to access the corresponding host (e.g., Smart Polling) or to decrypt passively-obtained traffic (e.g.,
DLMS).

During these operations, if the Credentials manager finds credentials for the node-scope pair (for example, SSH for
Smart Polling), those credentials are used.
Important: To enhance performance, manually enable the Credentials manager for a specific
protocol. See the Configuring protocols section for more details.

Adding an identity
Click the Add identity drop-down menu at the top of the page to add a new identity to an available
scope.

Each identity has a unique name to distinguish it from other identities. Insert a node into the
applicability list of an identity by:
• Typing an IP address or a subnet mask into the dedicated box
• Selecting a set of nodes through the nodes selector beside the input box

Important: A node cannot belong to multiple identities for the same scope.

Importing credentials from CSV


Click the Import credentials from CSV drop-down menu at the top of the screen to see the list of scopes
for which a set of identities can be imported from a CSV file.

Upload the file, then bind each CSV column to the corresponding credential field. Also specify whether the CSV file
contains a header row.

Click the Import button to complete the CSV import process. The import results are displayed.

A list of errors, if any, displays. Click the Copy button to copy the error list.
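For reference, a credentials CSV for an SSH-style scope might look like the following sketch; the column names are illustrative and must be bound to the actual credential fields during the import:

ip,username,password
192.168.1.57,svc_polling,S3cr3t!
192.168.1.23,svc_polling,S3cr3t!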

Zone configurations
This topic describes how to add and configure network zones in the Nozomi Networks solution, and
how to propagate CMC zones to the connected Guardian sensors.
Zones can be configured and controlled in the CMC and can be propagated to the connected Guardian
sensors. Zone conflicts can be resolved through an execution policy specified in the CMC. To configure
CMC parameters, within CMC, go to Administration > Settings > Synchronization settings to
customize specific settings.
For additional information on synchronization settings, go to Data synchronization policy on page
322. For information on how to configure and propagate zones between Nozomi sensors and
Vantage, refer to the Vantage documentation.
Note: As shown in the following figure, zone configuration tooltips inform users about the currently
configured data synchronization policy.

1. Go to Administration > Settings > Zone configurations to access the network zones table. The
Zone configurations table displays.

Figure 143: Zone configurations table with callouts

A. Actions: Icons present the available options. The dropdown menu options include: Select all, Select none, Invert selection.
B. Name: Zone name.
C. Matching Segments: Shows the IP range matched by the node discovered within the range (only the IP range is shown in the column, not the nodes).
D. Matching VLAN ID: Lists only nodes that match the zone with this VLAN ID.
E. Assigned VLAN ID: Lists only nodes assigned to the zone with this VLAN ID.
F. Level: The level defines the position of the nodes pertaining to the given zone within the Purdue model.
G. Nodes Ownership: Select whether the zone is public or private.
H. Security Configuration: A security profile applied to nodes within the zone. If set, the security configuration overrides the global security profile settings.
I. Source: Select local or upstream.
J. Execution policy: How zone configurations are controlled. With Local only, zone configurations are controlled by Guardian and zones received from upstream are ignored.
K. Import: Allows users to upload a config file to bulk-insert zones instead of entering them one by one with the modal.
L. Export all: Downloads the entire table as a config (.cfg) file, to import into a different machine for backups, etc.
M. Live: Makes the table update itself periodically.
N. +Add: Add a zone.
O. Columns selected: Lists the columns that are selected to display.
2. Manually edit the zones, as needed, using the icons in the Actions column:

Figure 144: Zone configurations table

• Locked zones: Zones that are predefined or standard have a lock icon. These are
preconfigured and cannot be modified.
• Fallback (editable) zones: Inside the predefined or standard zones are two default zones that
act as fallbacks respectively for public and private nodes that don't belong to a specific zone.
Click the pencil icon to rename/edit these fallback zones. The Rename configuration popup
displays.

Figure 145: Zone rename configuration


• User-defined zones: User-defined zones are identified with the pencil and trash icons. To edit
or remove, click the corresponding icon, or select the checkbox.
• Export zones: User-defined zones can be exported. If no zone is selected, the Export all button
exports all user-defined zones, otherwise the Export selected button exports only selected
zones. Some table actions can help with the zone selection/deselection. Predefined and auto-
configured zones cannot be exported.
• Import zones: User-defined zones can be imported using the Import button. After the import
process, zones are reloaded.

Figure 146: Zone configuration import


3. Auto-configured zones are indicated with a plus (+) icon. Auto-configured zones are heuristically
discovered by the engine, which pre-fills some fields. Click the icon to add and further configure the
zone. The system does not use auto-configured zones until they have been added and configured.

Figure 147: Edit zone configuration


4. Add new custom zones, as needed. The zone must be given a name without spaces. It must
include at least one network segment.
All nodes pertaining to one of the segments of a zone inherit the properties of that zone.
The following optional configuration settings are also available for every node:
• IP network segments: Specified in Classless Inter-Domain Routing (CIDR) notation
(e.g., 192.168.2.0/24), or by means of a range that includes both ends (e.g.,
192.168.3.0-192.168.3.255). Multiple segments are separated by commas.
• MAC address ranges: Both ends of the range are included (e.g.,
08:14:27:00:00:00-08:14:27:ff:ff:ff)
• MAC address matching fallback: Normally, the node ID must match the zone network segments for
the node to be part of the zone. There are cases where this matching strategy is not enough; for
example, we may want nodes with an IP as node ID to match a zone defined with MAC
address ranges. In those cases, enable this fallback matching strategy to match
against the MAC address of the node whenever the node IP does not match any segment.
• Matching VLAN ID: Lists only nodes that belong to that VLAN. For example, consider a zone
configured as 192.168.4.0/24 with VLAN ID set to 5, and two nodes within that network,
192.168.4.2 and 192.168.4.3, with only the former belonging to that VLAN. When filtering
the view with this zone, only node 192.168.4.2 is shown.
• Assigned VLAN ID: Nodes that belong to this zone are assigned this VLAN ID.
• Level: The level defines the position of the nodes pertaining to the given zone within the Purdue
model. Once a level has been set for a zone, all nodes included in that zone are assigned the
same level, unless a per-node configuration has been specified as well. This means that, if
two or more zones overlap, a node that belongs to all of them will inherit the level of the most
restrictive zone.
• Nodes ownership: Ownership of the nodes belonging to the given zone. Once the ownership
has been set for a zone, all nodes included in that zone inherit that ownership, overwriting the
ownership of the individual nodes.
• Detection approach: Used to override the global settings from the Learning section of Security
Configurations on page 214.
• Learning mode: Used to override the global settings from the Learning section of Security
Configurations on page 214.
• Security profile: Used to override the global settings from the Security profile section of Security
Configurations on page 214.
• Network Throughput History: If enabled, nodes pertaining to the zone will have an
extended history for bytes sent and received, and all links for bytes transferred. The fields
last_1hour_bytes, last_1day_bytes, and last_1week_bytes, which are 0 by default, work
like their counterparts for 5, 15, and 30 minutes. These fields are evaluated every 5
minutes and their timespans are as follows:
• last_1hour_bytes: the last hour at a granularity of 5 minutes (for example, if it is 15:32 the field
covers the timespan from 14:30 to 15:30);
• last_1day_bytes: the last day at a granularity of 1 hour, updated every 5 minutes (for example,
if it is 15:32 on Tuesday the field covers the timespan from 16:00 on Monday to 16:00 on
Tuesday, and the data was updated at 15:30 on Tuesday);
• last_1week_bytes: the last week at a granularity of 1 day, updated every 5 minutes
(for example, if it is 15:32 on Tuesday the field covers the timespan from 00:00 on Wednesday of the
previous week to 24:00 on Tuesday of the current week, and the data was updated at 15:30 on
Tuesday of this week).
Note: Network Throughput History is disabled by default and needs to be explicitly enabled
in the Retention tab of the Features Control Panel. When activating it, be aware that it quickly
consumes extra disk space. Disk consumption is subject to a configurable limit of 512 MB by
default, but can be decreased to 64 MB or increased to 5 GB, from the Features Control Panel.
When the disk consumption limit is reached, older data is erased to make room for more recent
samples.
For additional information on how to synchronize zones between CMCs and Guardian sensors, go to
the Zone Configuration section of the Data synchronization policy on page 322. Included are details
about Upstream Only and Local Only, and how to use the Web UI to navigate to screens and adjust
settings.

Bulk deletion
It is possible to delete multiple zones in a single action, provided the execution policy allows users to
modify the zones. Click the ellipsis (three dots) on the left and select Delete selected zones. Then, using
the checkboxes on the left, check the items to be removed.

Figure 148: Bulk deletion



System
This topic describes how to configure the Guardian and CMC sensors:
• Configuring date and time
• Configuring network interfaces
• Uploading traces
• Importing and exporting content packs
• Importing nodes
• Importing variables
• Importing asset types
• Importing configuration/project files
• System health
• Auditing
• Resetting data
• Continuous trace

General
This topic describes how to change the hostname and specify a login banner on a sensor.
Note: The login banner is optional, and displays on the login screen and at the beginning of all SSH
connections, when configured.
1. Go to the gear ( ) icon, then System > General. The General screen displays.

Figure 149: Hostname and login banner input fields



Figure 150: Login banner example

Date and time


This topic describes how to change the date and time of Guardian or the CMC.
From the date and time page you can:
• change the timezone of the sensor
• change the current time of the sensor (use the Pick a date or Set as client buttons to set a date in
a simple way)
• enable or disable time synchronization to an NTP server by writing a list of comma-separated
server addresses
1. From the Web UI, go to Administration > System > Date and Time. The Date settings popup
displays.
2. Select a timezone from the Timezone dropdown menu. Then Save your selection.
3. Select a date using the Pick a date button, or select Set as client, if appropriate. Then Save your
settings.
Note: If the sensor is connected to Vantage or a CMC, that date is automatically used, and there
are no options.
4. At the NTP (Network Time Protocol) heading, click the Enabled box to synchronize the sensor clock
with the listed NTP servers. Then, Save your settings.

Figure 151: Date settings popup



Network interfaces
This topic describes how to configure the network interfaces to sniff throughput traffic.
1. From the Web UI, go to Administration > System > Network interfaces. The Network interfaces
screen displays.
a. At the Throughput field, enable (using the On button) or disable (using the Off button) the
automatic update of the diagram.
b. At the Time window field, select from the following throughput timeframes: 1m (one minute), 1h
(one hour), 1d (one day), 1w (one week).
2. Modify/configure the network interfaces from the table.

Figure 152: Network interface throughput

Actions: Define/modify the Network Address Translation (NAT) rule for the current interface (see Configuring the Network Address Translation (NAT) rule on page 188 for additional information)
Interface: Interface name or, if set, its label
Note: Click the Interface column header to list the interfaces in increasing or decreasing order.
Enabled: True if the interface is enabled to sniff traffic
Is mirror: True if the interface is receiving mirror traffic and not only broadcast
Mgmt filter: Filters sensor traffic when On (default is On). To change the value, see the specific configuration rule in Basic configuration rules on page 363.
BPF filter: BPF filter applied to the sniffed traffic
NAT: NAT rule applied to the current interface
Denylist enabled: True if the denylist is configured and enabled for the current interface
Denylist file: Denylist file used by the current interface. If the file contains a row starting with #DESCRIPTION:, the description is shown here. Example:

#DESCRIPTION: denylist_1 for test

Configuring the Network Address Translation (NAT) rule


This topic describes how to configure the network interfaces and the BPF filters.
1. Go to Administration > Settings > Network interfaces.
2.
Select the configure ( ) icon beside an interface to configure the interface. The Configure
interface popup displays.

3. Enter a label in the Label field to name the interface. You can provide a label for a network
interface, which displays in place of the network interface name in any part of the user interface.
• Labels must differ from other labels and network interface names.
• Labels can contain only alphanumeric characters and '-' / '_' symbols.
• Interface names are used as labels if an empty value is provided.
4. Toggle the Enable button to On (On is the default). Toggle to Off to disable the network interface
from sniffing traffic.
5. Configure the Original subnet, the Translated (destination) subnet and the CIDR mask for the
Network Address Translation (NAT) rule.
• The NAT rule allows you to rewrite the source and destination IPs of packets sniffed on this
interface. For example, to translate 192.168.1.100 into 10.1.1.100 you have to configure the rule:
192.168.0.0 10.1.0.0 /16.

Figure 153: Configure interface popup



Configuring the Berkeley Packet Filter (BPF)


You can configure the BPF filter to apply to an interface via a visual editor or manually. Typically, more
complex filters are inserted manually.
1. In the BPF filter section of the Configure interface popup, filter using one of the following
methods:
a. Click BPF Filter editor to open the visual editor. The BPF filter editor popup with the most
common filters displays. Edit the filter using the BPF filter editor.

Figure 154: BPF filter editor


b. Alternatively, toggle the Manual insertion of a custom filter expression field to the On position
to manually enter a filter. The BPF filter popup displays. Enter the BPF filter manually (an example expression is shown below).
2. Save your work before continuing.

Figure 155: Manual insertion of a BPF filter
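For example, a manually inserted expression might keep only traffic for a given subnet while excluding a noisy host (standard BPF syntax; the addresses are illustrative):

net 192.168.1.0/24 and not host 192.168.1.10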

Configuring Denylist
In the Denylist section of the Configure interface popup, you can upload a text file containing
a denylist, i.e., a list of IP addresses, explicit or with netmasks or using wildcards, that will not be
processed by Guardian. A wildcard in the second, third, or fourth octet is equivalent to a /8, /16, or /24 netmask.
The effect is similar to that of the BPF filter; however, a denylist can handle tens of thousands of IP
addresses, numbers that are beyond the capability of the BPF filter.
1. In the Denylist section of the Configure interface popup, toggle Enable denylist to the On
position.
2. Drop a file or click to upload a file. A denylist must contain one entry per line: a dash (-) followed by
a space and an IP address (optionally containing a wildcard or a netmask).
Note: The maximum file size is 2 GB. The supported file type is text (.txt).
For example:

Example 1:

- 192.168.1.*
- 192.168.3.1/24
- 192.168.2.1

This denylist denies the range 192.168.1.1-192.168.1.255, 192.168.2.1, and the range 192.168.3.1-192.168.3.255. Everything else is implicitly allowed.

Example 2:

- *
- 192.168.2.*
- 192.168.2.1

The first line is invalid, as it would reject all traffic. Invalid lines in a denylist are ignored. The last line is redundant.

Upload traces
This topic describes how to upload/play a trace file (PCAP) into Guardian. The sensor ingests the traffic
as if it came through the network.
1. From the Web UI, go to Administration > System > Upload traces. The Upload traces page
displays. From this page, you can upload/play a trace file into Guardian.

2. To customize the behavior of the upload/play action, select or deselect any of the following options:
• Use trace timestamps: Check this option to use the time captured in the trace file. Otherwise,
the current time is used.
Important: We recommend not selecting this checkbox; otherwise, the ingested data may be hidden by time
filters.
• Delete data before play: Check this option to delete the data in the sensor before running the
play action. When multiple traces are played at once, deletion is applied only before running the
first trace.
• Auto play trace after upload: Check this option to play the trace immediately after the upload.
3. Upload any trace files by dropping or uploading them in the upload space. Supported formats
are PCAP and PCAPNG. The maximum file size is 2 GB.
4. In the Last uploaded traces section, filter and sort the traces before taking any action. You can
sort by decreasing or increasing value. Click the header again to toggle between the two.
a. Actions: Click the three dots for action options. See Step 5 on page 193 below for additional
action information.

b. Last uploaded time: Click to sort by last uploaded time.



c. Last played time: Click to sort by last played time.


d. Filename: Enter a file name value to filter by file name.

e. Note: Enter a note value to filter by note.

f. Username: Enter a user name value to filter by user name.
5. From the Actions column, select trace file actions from the available options:


Select trace ( ): Click the checkbox to select the traces to be played. Multiple traces are
played sequentially in the order they are selected (as indicated by the number to the left of the
check box).

Replay trace ( ): This action replays the corresponding trace (only that single trace). To
run all of the selected traces, click the three dots under the Actions column, then click Play
selected.

Edit note ( ): Enter information to share a note about the uploaded trace.

Delete from the list ( ): Erase the trace file from the sensor; no environment data is affected.
Alerts may be generated as a result of trace usage. If the played file is artificial, the alert timestamp may
not be recognized by the system; in this case, a value containing InvalidDate is displayed in the time
column of the alert table.
Note: By default, the sensor retains 10 trace files. To configure this value, see Configuring retention on
page 428.

Content packs
Content packs are a reporting and query feature that packages multiple templates into a single file for
team collaboration. A single content pack may contain one or many queries and/or reports. Content
packs also support dashboards.

Introduction
Content packs are a feature that allows user groups with diverse requirements to package
reporting and query templates into a single file. Once you organize multiple reports and queries in
a single file, the information can be distributed, shared, improved, or re-used by users across multiple
systems. This is especially useful in complex reporting arrangements, such as compliance with
government regulations, or hunting for a specific threat.
Content packs use a JSON file format that you can open and read with a text reader. The format is
extensible, so you can add other JSON-formatted information to the content pack. The Nozomi Networks product
ignores data that it doesn’t understand and continues parsing the file, which enables users to add data
for other systems to the content pack.
Note: Content packs also support dashboards. When you insert a content pack into a new Guardian
instance, dashboards load and function the same as those from the original Guardian. Imported
dashboards include all queries and widgets associated with the original saved dashboards.

Exporting content packs


After you create reports and queries for the content pack, place the reports or queries in a dedicated
report or query group, or on a dashboard before exporting. (See Reports on page 125 and Queries on
page 118 for additional information.)
To export the content pack:
1. From the Guardian dashboard, go to Administration > System > Export. The Export content
pack screen appears.
Note: If you are using a dashboard to export content packs, go to the Dashboard header menu
from the main Guardian Web UI, or go to Administration > Settings > Dashboards, then click the
Export button ( ) at the top right of the screen.

Figure 156: Export content pack


2. Click the checkbox next to the queries, reports, or dashboards that you wish to put in the content
pack to export (i.e., either entire sections or singular groups of data).

Figure 157: Export content pack details


3. Click Export data. A .JSON file is downloaded to your local machine. This file is your content pack.

Importing content packs


When you import a content pack, its information is added to the corresponding sections of the
Nozomi Networks solution.
To import the content pack:
1. From the Guardian dashboard, go to Administration > System > Import. The Import data page
appears.
2. From the Import content pack section, upload the Nozomi Content Pack file, either by dropping a
file or clicking to upload a file in the space provided.
Allowed files are:
• Nozomi Content Pack (.json)

Figure 158: Import content pack feature


3. After adding the content pack, a summary of imported content appears.

Import
This topic describes how to import data from nodes, variables, asset types, configuration/project files,
and content packs.
From the Web UI, go to Administration > System > Import to access the Import feature. The Import
data page appears.

Import nodes - CSV file


This feature allows you to add nodes and assets (by selecting the create non-existing nodes flag) or enrich existing
ones by binding CSV fields to those of Nozomi Networks.
1. From the Import nodes - CSV file section, load node information from CSV files, either by dropping
a CSV file or clicking to upload a CSV file in the space provided. For each modified node, the file
must include a row with columns specifying the additional data (i.e., vendor, serial number, custom
fields...). Optionally, a header can specify the column names. If the CSV file provides headers in
the first line of the file, check the Has header checkbox to view the column titles. The maximum
file size is 2 GB. CSV file data will not replace field values that were previously imported or manually
overridden, unless the Override source checkbox is selected.
Note: Some special fields and Confirmed Mac addresses are restricted to specific values and so
they may not be overwritten.

Figure 159: Import nodes - CSV feature


2. Set up the configuration by selecting External data field in the Match field, and Nozomi data field
in the With field.
3. Enter imported data from the CSV file in the correct Nozomi Networks field. For example, if the
imported CSV file contains a list of IP addresses, select the ip field from the Nozomi data field
dropdown.
• You can match CSV fields, but only Nozomi Networks mac_address and ip fields are used to
associate records. It is not possible to bind fields before choosing a match.
• The Nozomi Networks field type can only have values that match already existing types, either
built-in or custom. Other values are not considered.
• The Nozomi Networks field role can only have the following value (other values are not
considered):

antivirus_server jump_server
backup_server local_node
consumer power_quality_meter
db_server producer
dhcp_server protection_relay
dns_server security_scanner
engineering_station teleprotection

gateway terminal
historian time_server
HMI voip_server
hypervisor web_server

• The Nozomi Networks field zone must match an existing zone to bind the field. You can add a
zone to make it match.

Figure 160: Binding fields


4. To create a new field go to Administration > Settings > Data model and choose a name and
a type for your custom fields. After this operation the field is available in the Import page in the
Nozomi field binding dropdown.
Important: You can only create and import custom fields for an assets list.

Figure 161: Data model

Import node - CSV example:

ip,label,vendor,a_custom_fields
192.168.1.57,label from csv,vvvv,custom value 57
192.168.1.23,node 23,vvvv,custom value 23
172.21.88.61,node 61 from csv,vvvv,custom value 61

Import variables - CSV file


This feature allows you to add variables from scratch (by selecting the Create non-existent variables flag) or to enrich
existing variables.
1. From the Import variables - CSV file section, load variable information from CSV files, either
by dropping a CSV file or clicking to upload a CSV file in the space provided. For each modified
variable, the file must include a row with columns specifying the additional data (e.g., host, label,
unit...). Optionally, a header can specify the column names. If the CSV file provides headers in the
first line of the file, check the Has header checkbox to view the column titles. The maximum file size
is 2GB.

Figure 162: Import variables - CSV feature


2. Set up the configuration by selecting External data field in the Match field, and Nozomi data field
in the With field. Then, click the Create non-existent variables checkbox.
3. Enter imported data from the CSV file in the correct Nozomi Networks field. For example, if the
imported CSV file contains a list of names, select the name field from the Nozomi data field
dropdown.
• Match the CSV fields with the Nozomi name field. For matching fields, you must choose a
match before you bind the fields, otherwise binding is disabled.

Figure 163: Binding fields

Figure 164: Import variables - CSV example
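As a rough sketch of such a file (the column names shown here are illustrative assumptions; bind them to the corresponding Nozomi fields during import), a variables CSV might look like:

name,host,label,unit
tank1_level,192.168.1.57,Tank 1 level,cm
pump2_speed,192.168.1.23,Pump 2 speed,rpm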

Import asset types - CSV file


The import asset types feature allows you to extend the built-in set of asset types with a set of new
custom types.
1. From the Import asset types - CSV file section, load asset type information from CSV files, either
by dropping a CSV file or clicking to upload a CSV file in the space provided. The CSV file should
include a header row with Name, followed by the list of asset type names in the following rows, one per row.
Each asset type is identified by its name; during the import process, duplicate
names are ignored and reported. The supported file type is CSV and the maximum file size is 2 GB.

Figure 165: Import asset types CSV feature


The built-in asset types are:

actuator mobile_phone
audio_video network_security_appliance
AVR OT_device
barcode_reader other
camera PDU
computer PLC
controller power_generator
digital_io power_line_carrier
drone printer_scanner
DSL_modem radio_transmitter
firewall robot
gateway router
HMI RTU
IED sensor
infusion_system server
inverter switch
IO_module tablet
IOT_device time_appliance
light_bridge UPS
media_converter VOIP_phone
medical_imager WAP

meter

Figure 166: Import asset types CSV example
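As described above, such a CSV consists of a Name header followed by one asset type name per row; for example (the type names below are purely illustrative):

Name
solar_inverter
badge_reader
turbine_controller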

Import configuration/project file


With the import configuration/project file feature, a project file can be imported. The information written
in the project file is added to the asset data in the Nozomi Networks solution.
Note: For XML Profinet GSDML, you can import an XML Profinet GSDML file that describes a physical
device. Information in the file allows Guardian to extract and display information about the device
configuration when device configuration packets are displayed in traffic.
Allowed project files are:
• Rockwell Harmony (.conf)
• Yokogawa CENTUM VP (.gz, .zip)
• Siemens (.cfg)
• IEC 61850 SCL/SCD (.scd)
• Triconex (.pt2)
• Allen-Bradley (.l5x)
• Honeywell TDS (.txt, .zip)
• Profinet IOCM (.xml)

Figure 167: Configuration/project file import feature

The fields that can be imported for each file type are:

Supported file types Imported fields


Allen-Bradley ip, firmware version, product name, modules (i.e. port, address, product
code, firmware version, product name, vendor)
Honeywell TDS label, vendor
IEC 61850 SCL/SCD ip, VLAN, product name, asset type, vendor, AppID, data model
Note: Importing this file changes the knowledge of the IEC 61850 data
model in use in the system, thus improving Guardian's ability to accurately
extract variables. Existing variables related to this protocol are deleted to
avoid inconsistencies with those extracted from traffic after the import.

Profinet IOCM modules (i.e. slot, subslot, vendor, software release, hardware release). It
may also extract variables.
Rockwell Harmony ip, product name
Siemens ip, mac address, vendor, product name, label, modules (i.e. rack, slot,
subslot, product code, module code)
Triconex ip, label, vendor
Yokogawa Centum ip, label, vendor, product name, modules (i.e. slots, each with vendor,
product name, firmware version)

Health
This topic describes how to evaluate the health of your Guardian.
All the sections described below are available to admin users. Additionally, access is granted to
users who have the health permission.
From the Web UI, go to Administration > System > Health. The Health page displays.

Performance
1. From the Web UI, go to Administration > System > Health > Performance tab. The Health page
displays. In this tab there are three charts showing, respectively, the CPU, RAM and disk usage
over time. The Services section provides information about IDS, Alerts, Sandbox, Trace, and
Vulnerabilities.
2. You can toggle performance to Off, or change the timeframe for the CPU, RAM, and disk usage
charts by clicking 1 minute (1m) (the default), 1 hour (1h), 1 day (1d), or 1 week (1w).

Figure 168: Performance charts

Health log
The health log reports the details of any performance issues the sensor experiences. In general,
logs include information such as CPU, RAM, disk space, interface status, stale sensors, or generic high
load.
Note: The CPU percentage usage, RAM MB usage and Disk percentage usage will show as:
• Good
• Average, or
• Poor.
Note: Stale or unreachable describes the status of the communication between Remote Collector (RC), Guardian, and CMC
(sync). It means the time since the sensor last communicated back to the CMC has exceeded the configured
threshold.

Figure 169: Health log tables

Types of health log entries


A Guardian sensor can generate many types of health log entries, including the following:
• Interface portXX has not received any packets in the last minute
• is under high load
• is no longer under high load
• X% cpu usage
• cpu usage back to normal
• X% ram used
• ram usage back to normal
• X% disk space used
• disk space usage back to normal
• sensor is stale
• sensor is no longer stale
• LINK_UP_on_port_N
• LINK_DOWN_on_port_N
• Failed migrations
• Log_disk_full-starting-emergency-shutdown

Migration Tasks
This topic describes automated tasks that help the system migrate configuration and settings to newer
standards.
Migration tasks are a helper tool that can be used to apply specific changes to configuration, settings
and data to make use of new features or adapt to model changes. These tasks become available upon
the installation of new versions of N2OS and are described in the corresponding release notes.
Migration tasks can only be executed from the top-level CMC of an installation; Guardians that are not
connected to any CMC can run migration tasks locally. If the installation is connected to Vantage, migration
tasks can be run from there; in that case, refer to the Vantage documentation to perform these
operations.

Go to the gear ( ) icon, then System > Migration tasks. The Migration tasks page displays.
Each migration task is presented separately, giving an overview of the changes that will be applied on
each connected sensor. Tasks can be executed on individual Guardians or on CMCs; in the latter case,
the entire subtree of sensors connected, directly or indirectly, to that CMC receives the instruction to
execute the task. Tasks can also be applied globally by clicking Execute all. Using the same approach,
tasks can be ignored, which disables the execution of the corresponding task on the chosen sensor or
sensors.
Upon executing a task, a spinning wheel appears next to the sensors that are executing it. Since the
execution of a task is an asynchronous process, it can take up to several minutes to complete. During
this time, it is safe to leave the page or disconnect from the Web UI. The Migration tasks page will
report the result of each execution.

Figure 170: Example of migration task

Migration tasks can be hidden by clicking Hide permanently. In that case, the migration task is hidden
and cannot be executed again.

Audit
This topic describes the audit function.

Go to the Administration > System > Audit page for a list of all relevant user actions, from login/
logout to configuration operations, such as manually learning or deleting objects from the Environment.
This includes all recorded user actions based on the IP and username of the user who performed the
action. From the audit table, you can easily filter and sort this data.

Figure 171: Audit table

Reset data
This topic describes how to reset data.
1. Go to the Administration > System > Data page. From the Web UI, you can selectively reset
several kinds of data used by the Nozomi Networks solution. The Reset data popup displays.
2. Click All, Only data, or None, depending on the type of user data being reset.
3. As needed, check the appropriate option(s) to reset the specific type of data.

Environment Reset network nodes, assets, links and variables (learned data is lost)
Network Reset link event history, network charts data and captured URLs/files
Process history Reset the variables history
CPEs and CVEs Reset the information related to vulnerabilities
Alerts Reset the alerts
Traces Reset generated traces, both requested by users and automatically
generated
Time machine Reset the snapshots of the time machine
Queries Reset the queries and query groups
Assertions Reset the assertions
Smart Polling Delete Smart Polling node points
Learning Reset to 'Learning' phase

Figure 172: Reset data popup



Continuous trace and other trace actions


This topic describes how to request a continuous trace, as well as how to request a custom trace and
how to show requested traces.

Continuous trace
Continuous traces are packet captures, using a given arbitrary Berkeley Packet Filter (BPF), that have no
time or storage limits. Continuous traces can be requested, managed, inspected, and downloaded.
Traces are saved in PCAP files with a maximum size of 100MB. When a file reaches this threshold, it
is closed and a new file is created to keep collecting the network packets. Trace files are saved in the
sensor's hard disk. Guardian requires that 10% of the hard disk be continuously free. When the hard
disk usage approaches its limit, the oldest PCAP files belonging to the continuous traces are deleted.
Traces can be stopped and resumed. When a trace is resumed, a new PCAP file is created. When
a sensor is restarted, the continuous traces, their collected data, and their statuses are resumed
automatically.
Prerequisites
Non-admin users must belong to a group with Trace permission in order to perform actions in this
section.
1. To access continuous traces, in the Web UI, go to <Username> > Other actions. The Other
actions popup displays.

Figure 173: Other actions popup


2. Select Continuous trace. The Request new continuous trace popup displays.

Figure 174: Request new continuous trace


3. In order to request a trace, enter a BPF filter in the Packet filter field and click the Start button.
Guardian begins collecting packets corresponding to the provided filter. The filter can be left empty,
in which case all packets are collected by the requested continuous trace.
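For example, to collect only Modbus/TCP traffic involving a single controller, you might enter a filter such as (the address is illustrative):

host 192.168.1.10 and tcp port 502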
When complete, a table displays on the page showing the requested continuous traces. The
following information is provided:

Time The time at which the trace has been requested.


ID A unique identifier of the trace request.
User The user who requested the trace.
Packet filter The BPF filter defining the collection.
In progress Whether the collection is active or stopped.

Several actions are available to manage the traces:

Table 15: Trace actions

Start the trace (disabled if the trace is currently in progress)

Stop the trace collection (disabled if the trace is currently paused)

Destroy the trace and discard all data collected

List and download the PCAP files collected by the trace

Request a custom trace


This topic describes how to request a trace specifying a custom packet filter.
1. To access a custom trace, in the Web UI, go to <Username> > Other actions. The Other actions
page displays.
2. Select Request a custom trace. The Request a trace popup displays.

Figure 175: Request a trace


3. Complete the form to request a custom trace:
a. Enter the maximum number of packets in the Trace max size (packets) field.
b. Enter the maximum trace duration in the Trace max duration (seconds) field.
c. Enter the packet filter in the Packet filter field, using BPF syntax. For additional information, click
the BPF syntax or BPF examples links.
4. Click Send trace request to send the request.

Show requested traces


This topic describes how to view the trace requests executed by the current user.
1. To access requested traces, in the Web UI, go to <Username> > Other actions. The Other
actions popup displays.
2. Select Show requested traces. The Requested traces popup displays with the requested traces
executed by the current user.

Figure 176: Requested traces


Chapter 6

Security features
Topics:
• Security Control Panel
• Security Configurations
• Manage Network Learning
• Alerts
• Custom checks: assertions
• Custom checks: specific checks
• Alerts Dictionary
• Incidents Dictionary
• Packet rules
• Hybrid threat detection

In this chapter we will explain how a tailored security shield can be automatically built by Guardian and subsequently tuned to fit specific needs.
Once the baselining has been performed, different kinds of Alerts will be raised when potentially dangerous conditions are met. There are four main categories of Alerts, each originating from different engines within the product:
1. Protocol Validation: every packet monitored by Guardian will be checked against inherent anomalies with respect to the specific transport and application protocol. This first step is useful to easily detect buffer overflow attacks, denial of service attacks, and other kinds of attacks that aim to stress non-resilient software stacks. This engine is completely automatic, but can be optionally tuned as specified in Security Configurations on page 214.
2. Learned Behavior: the product incorporates the concept of
a learning phase. During the learning phase the product will
observe all network and application behavior, especially SCADA/
ICS commands between nodes. All nodes, connections,
commands and variables profiles will be monitored and analyzed
and, after the learning phase is closed, every relevant anomaly
will result in a new Alert. Details about this engine are described
in Learned Behavior.
3. Built-in Checks: known anomalies are also checked in real
time. Similarly to Protocol Validation, this engine is completely
automatic and also works in Learning mode, but can be
optionally tuned as specified in Security Configurations on page
214.
4. Custom Checks: automatic checks such as the ones deriving
from Protocol Validation and Learned Behavior are powerful and
comprehensive, but sometimes something specific is needed.
This is where Custom Checks come in: a category of custom Alerts
that can be raised by the product under specific conditions. Two
subfamilies of Custom Checks exist and are described in Custom
checks: assertions on page 226 and Custom checks: specific
checks on page 230.
The powerful automatic autocorrelation of Guardian will generate
Incidents that will group specific Alerts into higher level actionable
items. A complete dictionary of Alerts is described at Alerts
Dictionary on page 233 and Incidents Dictionary on page 242.
Additionally, changing the value of the Security Profile changes the
visibility of the alerts shown by Guardian, based on the alert type.

Security Control Panel


The Security Control Panel gives an overview of the current status of the learning process and allows
the configuration of the features that manage the learning, the security profile, the zones and the alerts
tuning.

Figure 177: The Security Control Panel overview page

The learning section shows the progress of the engine for both network and process learning. The Last
detected change and Learning started entries report, respectively, the point in time when the last behavior
change was detected and the time when learning was started.

Security Configurations
The security features can be configured using the "Edit" tab of the security control panel. The page
guides the user through five configuration steps that allow an advanced yet simplified customization of
the features.

Learning

Figure 178: The learning editor

Guardian provides a flexible approach to anomaly-based detection, allowing you to choose between two
different approaches:
• Adaptive Learning: uses a less granular and more scalable approach to anomaly detection
where deviations are evaluated at a global level rather than at the single-node level. For example,
the addition of a device similar to the ones already installed in the learned network won't produce
alerts; the same holds true for the appearance of a similar communication. Adaptive Learning shows its
maximum capabilities when combined with Asset Intelligence.
• Strict: uses a detailed anomaly-based approach, so deviations from the baseline will be detected
and alerted. This approach is called strict because it requires the learned system to behave like it
has behaved during the learning phase, and requires some knowledge of the monitored system in
order to be maintained over time.
The engine has two distinct learning goals: the network and the process. For both cases the engine
can be in learning and in protection mode, and they can be governed independently.
1. Network Learning is about the learning of Nodes, Links, and Function Codes (e.g. commands) that
are sent from one Node to another. A wide range of parameters is checked in this engine and can
be fine-tuned as described in Manage Network Learning on page 220.
2. Process Learning is about the learning of Variables and their behavior. This learning can be fine-
tuned also with specific checks as described in Custom checks: specific checks on page 230.
With the Dynamic Window option you can configure the time interval within which an engine considers a
change to be learned (every engine performs this kind of evaluation per node and per network segment).
After this period of time, the learning phase is automatically and safely switched to protection mode, with
the effect of:
• raising alerts when something is different from the learned baseline
• adding suspicious components to the Environment with the "is learned" attribute set to off, so that
an operator can confirm, delete, or take proper action from the manage panel.
In this way, stable network nodes and segments become protected automatically, so you are not
overwhelmed with alerts due to the premature closing of learning mode.

Security profile

Figure 179: The security profile editor

The Security Profile allows you to change the visibility of alerts based on their type. Changing the value
of the Security Profile has an immediate effect on newly generated alerts and no effect on existing
alerts. By default, the Security Profile is set to Medium. Alerts that are not visible under the current
configuration are not stored in the database, unless they are part of an incident. This behavior can
be changed by setting the save_invisible_alerts option to true.

Zone configurations
All settings concerning the learning engine and the security profile can be customized on a per-zone
basis. Please refer to Zone configurations for the details.

Alert tunings

Figure 180: The alert rules editor

In the Tuning section of the Security Control Panel, it is possible to customize the alerts behavior.
Specifically, matching criteria can be created by imposing conditions on several fields, such as IP
addresses, protocol, and many others.
This feature can be selectively enabled for specific user groups.

Figure 181: Alert tuning popup

Source/Destination IP Set the IP of the source/destination that you want to filter.


Source/Destination MAC Specify the MAC of the source/destination that you want to filter.
Match IPs and MACs in both Check this if you want to select all the communications between
directions two nodes (IP or MAC) independently of their role in the
communication (source or destination).
Source/Destination Zone Specify the zone of the source/destination that you want to filter.
Source/Destination Port Specify the port of the source/destination that you want to filter.
Type ID The type ID of the alert. This field is pre-filled if you create a
new rule from an alert in the Alerts page.
Trigger ID Unique identifier corresponding to the specific condition that has
triggered the alert.
Protocol Set the protocol that you want to filter.
Note Enter free-form text that describes details of the alert rule.
Execute action Select an action to perform on the matched alerts:
• Mute: Switch ON/OFF to mute or unmute the alert.
• Mute Until: Specify a date until which the alert will be muted.

• Change Security Profile Visibility: Set to ON to force the


visibility of the selected alert type for any selected profile, or to
OFF to hide it for any selected profile. Useful for extending or
reducing the default provided security profiles as needed.
• Change risk: Set a custom risk value for the alert.
• Change trace filter: Define a custom trace filter to apply to
this alert.
• Assign playbook: Define a playbook to be attached to the
matching alerts (the playbook must be selected from the
list of available playbook templates).

Priority Set a custom priority; when multiple rules trigger on an alert, the
rule with highest priority applies. "Normal" is the default value if
no selection is made.

As alert rules can be propagated from upstream connections, conflicts between rules are possible.
A conflict is detected when multiple rules, performing the same action, match an alert. To deal with
these collisions, the execution algorithm takes into consideration the source of the rules. The user can
choose three policies:
• upstream_only: alert rules are managed in the top CMC or with Vantage. Creation and
modification are disabled in the lower-level sensors. Only the rules received from upstream are
executed;
• upstream_prevails: in case of conflicts, rules coming from upstream are executed;
• local_prevails: in case of conflicts, rules created locally are executed.
A special case is represented by the 'mute' action. Consider the following example: the execution policy
is 'local_prevails' and a mute rule is received by Guardian from an upstream connection. This rule
will be ignored if at least one local rule matches the alert. Vice versa, with the execution policy set to
'upstream_prevails', local 'mute' will be ignored if at least one rule coming from upstream matches the
alert.

Alert closing options

Figure 182: The alert closing options editor

In the Alert closing options section of the Security Control Panel, it is possible to customize
the details of the closure of alerts and incidents. When alerts and incidents are closed, the user must
choose the reason why the closure happens. There are two default reasons: actual incident and
baseline change. The list of reasons can be customized. Each reason has a description and a behavior, as described below.
| Security features | 219

Figure 183: Alert closing option pop-up

Reason for closing   A concise description that explains the reason why an alert can be considered closed.
Treat as incident    Select this entry if alerts closed using this option are to be considered deviations from the baseline and not changes of the baseline. For instance, actual incidents, attacks, and false positives could fall into this category. If the alert is closed with this option and the same event occurs in the future, an equivalent alert will be issued again.
Learn                Select this entry if alerts closed using this option are to be considered legitimate changes to the baseline. For instance, new nodes correctly connected to the network, configuration changes, and new legitimately installed software could fall into this category. The modifications that caused the alert will be learned into the baseline and, as a result, equivalent alerts won't be generated if the same event happens again.

Manage Network Learning


In the Manage Network Learning tab it is possible to review and manage the Network Learning status
in detail. The graph is initialized with the node not learned and link not learned perspectives, which
highlight in red or orange the items unknown to the system. This makes it easy to discover new
elements and take action on them.

Figure 184: The manage page with the selection on an unlearned link

A  A node which is not learned.
B  A link which is not learned. If the link is highlighted in orange it is learned, but some protocols in it are not.
C  The information correlated to the current selection; the user can select the items in it using the checkboxes and then execute some actions. When an item is not learned it will be red, otherwise it will be green.
D  With the delete button the user can remove the selected item(s) from the system.
E  With the learn button the user can insert the selected item(s) into the system.
F  When the configuration is complete, the user can make it persistent using the save button.
G  The discard button undoes all the unsaved changes to the system.

How to learn protocols


1. Click on a red or orange link; information about the selection will be displayed in the right pane.

2. Check the protocol that you want to learn. In this example we check browser. It is possible to
   check more than one item at once.

3. Click on the Learn button; a mark will appear on all the checked items that will be learned, and
   the Save button will start to blink, indicating unsaved changes.

4. Click on the Save button; the protocol will be learned and will become green. In this case the
   link will also change color and become orange, because some protocols are learned and some others
   are not.

5. Learning all the remaining protocols will result in a completely learned, grey link.

How to learn function codes


If a protocol is a SCADA protocol, the information pane will also display the function codes. The
procedure for learning function codes is equivalent to the procedure for learning protocols.

Figure 185: A SCADA protocol with function codes

How to learn nodes


1. Click on a node in the graph window; its information will be displayed in the right pane.
   For each node, an item containing the node id will be shown; below it, two child items will be
   shown containing the main attributes of the node, namely the node IP and the MAC address.

2. Check the items that you want to be learned (in this case both IP and MAC).

3. Click on the Learn button; a mark will appear on all the checked items that will be learned, and
   the Save button will start to blink, indicating unsaved changes.
   Note that when the delete operation is performed instead (click on the Delete button), all the
   items will be checked and then deleted.

4. Click on the Save button; the information pane will turn green, and the learned items and the node
   in the graph will become grey.

Learning from alerts or incidents

Automatic learning
1. Click on the Close alert button.

2. Choose one of the preset reasons for closing the alert or incident. An informative text will indicate
   whether the reason is associated with learning a baseline change or not. Alternatively, you can set a
   custom reason and choose whether a baseline change is to be learned or not.

Manual learning
1. Click on the gear icon to go to the learning page.

2. The graph will be focused on the link involved in the alert (click the X button to remove the focus).
   In this example the alert refers to a new node; follow the procedure explained above to learn the
   desired items.

Alerts
This topic describes alerts and incidents. Alerts represent an event of interest in the observed system.
An incident is a group of alerts based on shared content.
Alerts are visible by default in the Alerts table. You can drill down on an alert for more specific
information about the alert.
The Nozomi Networks Operating System (N2OS) correlation engine monitors the system and groups
alerts when multiple alerts describe the same situation differently. This provides a clear understanding
of the monitored system, which is especially useful for users with incomplete knowledge of the
observed system.
Large numbers of alerts can impair performance, so we recommend that you carefully consider your
retention policy. For more information, see Configuring retention.

Custom checks: assertions


This topic describes how to configure a valid assertion.

Introduction
A valid assertion is a normal query with a special command appended at the end. Assertions can be
saved in a specific order and continuously executed in the system.
Queries are based on the Nozomi Networks Query Language (N2QL), described in Queries on page
118. Use the powerful query language to ensure that certain conditions are met on the observed
system. An assertion typically expects either (1) an empty result, or (2) a specific value. When an
unexpected value appears, or when the value differs from the expected one, the system alerts the user.

Managing an assertion
1. To manage assertions, at the Web UI, go to Analysis > Assertions to begin your query. The
Assertions page displays with a table of assertions.
Note: Click the arrow next to the heading to change the list from ascending to descending order.

Figure 186: Assertion table

A  Actions            Available actions: Edit, Delete, Show alerts, Edit assertion note
B  Name               Assertion name
C  Description        Assertion description
D  Note               Any specific notes
E  Failed since       Time given in minutes, hours, days, months, custom time frame, or never (i.e., 1m, 15m, 1h, 3h, 12h, 1d, or custom)
F  # Failures         Number of failures
G  Packet filter      Berkeley Packet Filter (BPF)
H  Can send alerts    Either true or false, depending on whether the system can send alerts
I  Is security        Either true or false
J  Can request trace  Either true or false, depending on whether or not you can request a trace
K  Alert delay        Alert delay
L  Alert risk         Alert risk
M  Alert type ID      Type of alert; examples: ASRT:FAILED, SIGN:MALICIOUS-IP
N  Created at         Timestamp for alert creation (1m, 15m, 1h, 3h, 12h, 1d, or custom)
O  Assertion          Query assertion for link(s), node(s), alert(s), session(s)

2. Enter a valid query in the query field.


Assertion query commands are:

assert_all <field> <op> <value>    The assertion is satisfied when each element in the query result set matches the given condition.
assert_any <field> <op> <value>    The assertion is satisfied when at least one element in the query result set matches the given condition.
assert_empty                       The assertion is satisfied when the query returns an empty result set.
assert_not_empty                   The assertion is satisfied when the query returns a non-empty result set.
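A few additional assertion sketches built from the commands above may help; the zone name, label, and protocol values (and some field names) are illustrative and should be adapted to your environment:

   nodes | where zone == Production | assert_all is_learned == true
   nodes | assert_any label == PLC-01
   links | where protocol == modbus | assert_not_empty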
3. Save the assertion to be notified when someone uses the insecure telnet protocol:

links | where protocol == telnet | assert_empty

Figure 187: Example: Saved failing assertion during editing

Editing an assertion
To edit an assertion:
1. Enter the assertion query in the query field.
2. Execute the query by pressing the Enter key.
Note: Multiple assertions can be combined using the logical operators && (and) and || (or).
Round brackets change the logical grouping as in a mathematical expression.
3. (Optional) Press the debug ( ) button (on the right side of the textbox) to decompose the query
   and execute its individual parts, showing intermediate results. This is useful because assertions
   with logical operators and brackets can quickly become complex.

Figure 188: Complex assertion being debugged

(links | where protocol == telnet | assert_empty && links | where protocol == iec104 | assert_empty) && (nodes | where is_learned == false | assert_empty)

Saving an assertion
You can save assertions to have them continuously executed in the system.
To save an assertion:
1. Enter the assertion query in the query field.
2. Press the Enter key to execute it.
3. Click the Save button. The Save assertion popup displays.

4. To save the assertion:


a. Enter a name for the assertion in the Name field.
b. Enter a description for the assertion in the Description field.
c. Assign the assertion to a group. In the Group field, select one of the following:
• From the dropdown menu, select an existing group, then click the Save button.
• Create a new group by selecting the New group button. The Enter the group name popup
appears.
1. Enter a group name for the assertion in the Group name field.
2. Click the Save button.

d. Select either of the following:


• Is security
• Is operational
e. At the Assertion check interval field, choose the interval in seconds at which the assertion will
be rechecked (between 10 seconds and 1 day).
f. (Optional) Check the Can send alerts field to allow the assertion to trigger an alert.
g. At the Choose the asserted table's specific fields to include in the Description field, select
   the fields to include in the assertion description from the dropdown menu.

h. Type the assertion query in the Query field.


i. Save the assertion. The saved assertion will be listed at the bottom of the page with a green or
red color to indicate the result.
Note: When editing the alert risk, only newly raised alerts are affected.

Custom checks: specific checks


This topic describes how to configure specific checks on Links and Variables.

Introduction
The Nozomi Networks solution allows users to configure checks on Links and Variables in order to filter
alerts and display only those that the user wants to see.

Configuring checks on links


To configure checks on links:
1. At the Web UI, go to Network > Links tab to access a list of links and begin your query. The
Network page displays.

Figure 189: Network table with links


2. Select a link from the Actions column, then click the Configure ( ) icon to configure a check on a
link. The Configure popup displays.

3. Flag and configure these checks, as required:

Is persistent                  When enabled, this check raises a new alert whenever a TCP handshake is successfully completed on the link.
Alert on SYN                   When enabled, this check raises a new alert whenever a TCP SYN is sent by a client on the link.
Track availability (seconds)   When enabled, a link is considered non-functioning if it is unresponsive for the specified timeframe (in seconds).
Last Activity check (seconds)  When enabled, this check raises an alert when the link has not received any data for more than the specified number of seconds.
4. Save your changes.

Configuring checks on variables


To configure checks on variables:
1. From the Web UI, go to Process to access the Process table, which displays detailed information
about variables.

Figure 190: Process table with variables


2. Select a variable from the Actions column, then click the Configure ( ) icon to configure a check
on a variable. The Configure popup displays.

3. Flag and configure these checks, as follows:

Label                       Provide a label for the check.
History size                Sets the variable history size. When the size is 0, history is disabled. When it is higher than 0, history is enabled and the size value suggests how many values the system should keep, according to the available resources.
Last activity check         When enabled, this check raises an alert when the variable is not measured or changed for more than the specified number of seconds.
Invalid quality check       When enabled, this check raises an alert when the variable maintains an invalid quality for more than the specified amount of seconds.
Disallowed qualities check  When enabled, this check raises an alert when the variable gains one of the specified qualities.
4. Save your changes.

Alerts Dictionary
As explained at the beginning of this chapter, four categories of Alerts can be generated from the
Nozomi Networks Solution. Here we propose a complete list of the different kinds of Alerts that can
be raised. It should be noted that some Alerts can specify the triggering condition: for instance, the
Malformed Packet alert can be instantiated by each protocol, based on specific checked information.
The tables contain the following information:
• Type ID: the strict identifier for an alert type. Use this field to set up integrations.
• Name: a friendly name identifier.
• Security profile: the default profile the alert type belongs to.
• Risk: the default base risk the alert shows. For specific instances, this value is weighted by other
  factors (the learning state of the involved nodes and their reputation) and may result in a different
  number.
• Details: general information about the alert event, and what has caused it.
• Release: the minimum release version featuring that alert type. The minimum considered release
version is 18.0.0.
• Trace: whether a trace is produced or not. Note: traces are always based on buffered data and,
  depending on the overall network traffic throughput, the buffer might not contain all of the packets
  responsible for the alert itself. Only the last packet responsible for triggering the alert is always
  present when the trace is generated.

Protocol Validations
An undesired protocol behavior has been detected. This can refer to a wrong single message, to
a correct single message not supposed to be transmitted or transmitted at the wrong time (state
machines violation) or to a malicious message sequence. Protocol specific error messages indicating
misconfigurations also trigger alerts that fall into this category.

Type ID Name Sec. Prof. Risk Details Release Trace

NET:RST-FROM- Link RST request LOW 3 The link has been dropped because of a TCP RST sent 18.0.0 YES

PRODUCER by Producer by the producer.

Verify that the device is working properly, no

misconfigurations are in place and that network does not

suffer excessive latency.

PROC:SYNC-ASKED-AGAIN Producer sync PARANOID 3 A new sync (e.g. General Interrogation in iec101 and 18.0.0 YES

request by iec104) command has been issued, while in some links it


Consumer is sent only once per started connection. It may be due to

a specific sync request of an operator, a cyclic sync, or to

someone trying to discover the process global state.

Investigate on the protocol implementation and possible

presence of malicious actors.

PROC:WRONG-TIME Process time issue HIGH 3 The time stamp specified in process data is not aligned 18.0.0 YES

with current time. There could be a time sync issue with

the source device, a malfunctioning or a packet injection.

Verify the device configuration and status.

SIGN:ARP:DUP Duplicated IP HIGH 5 ARP messages have shown a duplicated IP address in 18.0.0 YES

the network. It may be a misconfiguration of one of the

devices, or a tentative of a MITM attack.

Investigate on the network configuration and the possible

presence of malicious actors.



SIGN:DDOS DDOS attack HIGH 5 A suspicious Distributed Denial of Service has been 19.0.0 YES

detected on the network.

Verify that all the devices in the network are allowed and

behaving correctly.

SIGN:DHCP-OPERATION DHCP operation HIGH 4 A suspicious DHCP operation has been detected. This is 18.0.0 YES

related to the presence of new Mac addresses served by

DHCP server, and to DHCP wrong replies.

Investigate on the network configuration and the possible

presence of malicious actors.

SIGN:ILLEGAL- Illegal parameters MEDIUM 7 A request with illegal parameters (e.g. outside from a 19.0.0 YES

PARAMETERS request legal range) has been issued. This may mean that a

malfunctioning software is trying to perform an operation

without success or that a malicious attacker is trying to

understand the functionalities of the device.

Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:INVALID-IP Invalid IP HIGH 7 A packet with an IP reserved for special purposes (e.g. 18.0.0 YES

loopback addresses) has been detected. Packets with

such addresses can be related to misconfigurations or

spoofing/denial of service attacks.

Investigate on the network configuration and the possible

presence of malicious actors.

SIGN:MAC-FLOOD Flood of MAC MEDIUM 7 A high number of new MAC addresses has appeared in a 20.0.1 YES

addresses short time. This can be a flooding technique.

Investigate on the network configuration and the possible

presence of malicious actors.

SIGN:MALFORMED- Malformed traffic MEDIUM 7 A L7 malformed packet has been detected. A maliciously 18.0.0 YES

TRAFFIC malformed packet can target known issues in devices

or software versions, and thus should be considered

carefully as a source of a possible attack.

Investigate on the protocol implementation and the

possible presence of malicious actors.

SIGN:MALICIOUS- Malicious protocol LOW 6 An attempted communication by a protocol known to be 19.0.0 YES

PROTOCOL related to threats has been detected.

Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:MULTIPLE-ACCESS- Multiple Access MEDIUM 8 A host has repeatedly been denied access to a resource. 19.0.5 YES

DENIED Denied events


Verify whether the calling device is supposed to access

those resources and tune the authorization permissions

accordingly.

SIGN:MULTIPLE- Multiple OT device HIGH 8 A host has repeatedly tried to reserve the usage of an OT 19.0.0 YES

OT_DEVICE- reservations device causing a potential denial-of-service.


RESERVATIONS
Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:MULTIPLE- Multiple MEDIUM 8 A host has repeatedly tried to log in to a service without 18.0.0 YES

UNSUCCESSFUL-LOGINS unsuccessful success. It can be either a user or a script, and due to a


logins malicious entity, or a wrong configuration.

Verify whether the calling device is supposed to access

the target device and tune the authentication credentials

accordingly.

SIGN:NET-MALFORMED Malformed MEDIUM 7 A packet containing a semantically invalid sequence 20.0.0 YES

Network/Transport below the application layer has been observed.


layer
Investigate on the protocol implementation, and the

possible presence of malicious actors.

SIGN:NETWORK-SCAN Network Scan MEDIUM 7 An attempt to reach many target hosts or ports in a target 19.0.0 YES

network (vertical or horizontal scan) has been detected.

Investigate whether it is an expected behavior or a

malicious scan activity is ongoing.

SIGN:PROC:MISSING-VAR Missing variable HIGH 6 An attempt to access a nonexistent variable has been 18.0.0 YES

request made. This may be due to a misconfiguration or a

tentative to discover valid variables inside a producer.

Example: COT 47 in iec104.

Verify the device configuration and status, and the


possible presence of malicious actors.

SIGN:PROC:UNKNOWN- Missing or MEDIUM 6 An attempt to access a nonexistent virtual RTU 18.0.0 YES

RTU unknown device (controller's logical portion) has been made. This may be

due to a misconfiguration or a tentative to discover valid

virtual producer RTU. Example: COT 46 in iec104.

Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:PROTOCOL-ERROR Protocol error HIGH 7 A generic protocol error occurred, this usually relates 18.0.0 YES

to a wrong field, option or other general violation of the

protocol.

Investigate on the protocol implementation, and the

possible presence of malicious actors.

SIGN:PROTOCOL-FLOOD Protocol-based MEDIUM 7 One or more hosts have sent a suspiciously high amount 19.0.4 YES

flood of packets with the same application layer (e.g., ping

requests) to a single, target host.

Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:PROTOCOL- Protocol packet LOW 9 A correct protocol packet injected in the wrong context 18.0.0 YES

INJECTION injection has been detected: this may cause equipment to operate

improperly. Example: a correct GOOSE message sent

with a wrong sequence number (that, if received in the

right moment, would just work instead).

Investigate on the protocol implementation, and the

possible presence of malicious actors.

SIGN:TCP-FLOOD TCP flood MEDIUM 7 One or more hosts have sent a great amount of 19.0.4 YES

anomalous TCP packets or TCP FIN packets to a single,

target host.

Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:UDP-FLOOD UDP flood MEDIUM 7 One or more hosts have sent a great amount of UDP 20.0.7.1 YES

packets to a single target host.

Verify the device configuration and status, and the

possible presence of malicious actors.



SIGN:UNSUPPORTED- Unsupported MEDIUM 7 An unsupported function (e.g. not defined in the 19.0.0 YES

FUNC function request specification) has been used on the OT device. This

may be a malfunctioning software trying to perform an

operation without success or a malicious attacker trying

to understand the device functionalities. Example: COT

44 in iec104.

Verify the device configuration and status, and the

possible presence of malicious actors.

Virtual Image
Virtual image represents a set of information by which Guardian represents the monitored network.
This includes for example node properties, links, protocols, function codes, variables, variable
values. Such information is collected via learning, smart polling, or external contents, such as Asset
Intelligence. Alerts in this group represent deviations from expected behaviors, according to the learned
or fed information.
Note: when an alert of this category is raised, if the related event is not considered a malicious attack
or an anomaly, it can be learned.

Type ID Name Sec. Prof. Risk Details Release Trace

VI:CONF-MISMATCH Configuration MEDIUM 7 A parameter describing a configuration version that was 20.0.0 YES

Mismatch previously imported from a project has been observed

having a different value in the traffic.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:GLOBAL:NEW-FUNC- New global MEDIUM 5 A previously unknown protocol Function Code has 19.0.4 YES

CODE function code appeared in the network.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:GLOBAL:NEW-MAC- New global MAC MEDIUM 5 A previously unknown MAC vendor has appeared in the 19.0.4 YES

VENDOR vendor network.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:GLOBAL:NEW-VAR- New global HIGH 5 A node has started sending variables. It can be a new 21.3.0 YES

PRODUCER variable producer command, a new object, or a tentative of enumerating

existing variables from a malicious attacker.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:KB:UNKNOWN-FUNC- Unknown asset HIGH 5 The node has communicated using a function code 20.0.0 YES

CODE function code that is not known for this kind of Asset. This detection is

possible by knowing the specific Asset's profile.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:KB:UNKNOWN- Unknown asset's HIGH 5 The node has communicated using a protocol that is not 20.0.0 YES

PROTOCOL protocol known for this kind of Asset. This detection is possible by

knowing the specific Asset's profile.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-ARP New ARP HIGH 4 A new MAC Address has started requesting ARP 18.0.0 YES

information.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-FUNC-CODE New function code HIGH 6 A known protocol between two nodes has started using a 18.0.0 YES

new function code (i.e. message type). For example, if a

client A normally uses a function code 'read' when talking

to server B, this alert is raised if client A begins to use a

function code 'write'.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-LINK New link HIGH 4 Two nodes have started communicating with each other 18.0.0 YES

with a new protocol.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-LINK-CONFIRMED New confirmed link HIGH 5 Two nodes have started communicating with each other 18.0.0 YES

with a new, confirmed protocol.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-LINK-GROUP New link group HIGH 5 Two nodes have started communicating with each other. 18.0.0 YES

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-MAC New MAC address HIGH 6 A new MAC Address has appeared in the network. 18.0.0 YES

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-NET-DEV New network MEDIUM 3 A new network device (switch or router) has appeared on 18.0.0 YES

device the network.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-NODE New node MEDIUM 5 A new node has appeared on the network. 18.0.0 YES

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-NODE:MALICIOUS- new node LOW 5 A node with a bad reputation IP has been detected. It is 20.0.0 YES

IP suggested to validate the health status of communicating

nodes, as they may be infected by some malware.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:NEW-NODE:TARGET New target node HIGH 4 A new target node has appeared on the network. This 18.0.0 YES

node is not yet confirmed to exist as it still has not sent

back any data.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:PROC:NEW-VALUE New OT variable HIGH 6 A variable has been set to a value never seen before. 18.0.0 YES

value
Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:PROC:NEW-VAR New OT variable HIGH 6 A new variable has been sent, or accessed by a client. It 18.0.0 YES

can be a new command, a new object, or a tentative of

enumerating existing variables from a malicious attacker.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:PROC:PROTOCOL- Protocol flow HIGH 8 A message aimed at reading/writing one or multiple 18.0.0 YES

FLOW-ANOMALY anomaly variables which is sent cyclically, has changed its

transmission interval time. Example: a iec104 command

breaking its normal transmission cycle.

Validate the event and learn it if legitimate, or treat it as

anomaly.

VI:PROC:VARIABLE-FLOW- Variable flow HIGH 6 A variable which is sent cyclically has changed its 18.0.0 YES

ANOMALY anomaly transmission interval time.

Validate the event and learn it if legitimate, or treat it as

anomaly.

Built-in Checks
Built-in checks are based on specific signatures or hard-coded logics with reference to: known
ICS threats (by signatures provided by Threat Intelligence), known malicious operations, system
weaknesses, or protocol-compliant operations that can impact the network/ICS functionality. They
might also leverage the Learning process to be more accurate.

Type ID Name Sec. Prof. Risk Details Release Trace

SIGN:CLEARTEXT- Cleartext password MEDIUM 7 A cleartext password has been issued or requested. 19.0.0 YES

PASSWORD
Consider to update to secure communication or evaluate

the risks of having this data exposed on the network.

SIGN:CONFIGURATION- Configuration MEDIUM 6 A changed configuration has been uploaded to the 18.0.0 YES

CHANGE change OT device. This can be a legitimate operation during

maintenance and upgrade of the software or an

unauthorized tentative to disrupt the normal behavior of

the system.

Verify the device configuration and status.

SIGN:CPE:CHANGE CPE change LOW 0 An installed software change has been detected. 18.0.0 YES

The change relates to the vulnerabilities list, possibly

changing it.

Verify the device configuration and status, and the reason

behind the software change.

SIGN:DEV-STATE-CHANGE Device state MEDIUM 7 A command that can alter the device state has been 18.0.0 YES

change detected. Examples are a request of reset of processor's

memory, and technology-specific cases.

Verify the device configuration and status, and the reason

behind the command.

SIGN:FIRMWARE- Firmware transfer HIGH 6 A firmware has been transferred to the device. This 19.0.0 YES

TRANSFER can be a legitimate operation during maintenance or an

unauthorized attempt to change the behaviour of the

device.

Verify the device configuration and status, and the

possible presence of malicious actors.



SIGN:MALICIOUS-DOMAIN Malicious domain LOW 8 A DNS query towards a malicious domain has been 19.0.0 YES

detected.

Investigate on why this domain has been contacted and

consider to ban it from your network.

SIGN:MALICIOUS-HID Malicious USB LOW 10 Suspicious behaviour detected in a device announcing 23.0.0 NO

device itself as Human Interface Device (HID). It may be

compromised, including malicious software running on

it and performing dangerous actions targeting the main

system it is connected to.

Disconnect the Human Interface Device (HID) and

inspect it carefully, as it might have a miniature embedded

chip inside. Find the root cause of the unexpected

behavior.

SIGN:MALICIOUS-IP Bad ip reputation LOW 8 A node with a bad reputation IP has been found. 19.0.0 YES

Investigate on why this IP has been contacted and

consider to ban it from your network.

SIGN:MALICIOUS-URL Malicious URL LOW 8 A request towards a malicious URL has been detected. 19.0.0 YES

Investigate on why this URL has been contacted and

consider to ban it from your network.

SIGN:MALWARE- Malware detection LOW 9 A potentially malicious payload has been transferred. 18.0.0 NO

DETECTED
Investigate on the malware source and infected device,

and consider to remove the file.

SIGN:MITM MITM attack LOW 10 A potential MITM attack has been detected. The attacker 20.0.5 NO

is ARP-poisoning the victims. The attacker node could

alter the communication between its victims.

Investigate on the network configuration and the possible

presence of malicious actors.

SIGN:OT_DEVICE-REBOOT OT device reboot HIGH 6 An OT device program has been requested to reboot 18.0.0 YES

request (e.g. by the engineering workstation). This may be

something due to Engineering operations, for instance

the maintenance of the program itself or a system

updates. However, it may indicate suspicious activity from

an attacker trying to manipulate the device execution.

Investigate on the network configuration and the possible

presence of malicious actors.

SIGN:OT_DEVICE-START OT device start HIGH 6 An OT device program has been requested to start 18.0.0 YES

request (e.g. by the engineering workstation). This may be

something due to Engineering operations, for instance

the maintenance of the program itself or a system

updates. However, it may indicate suspicious activity from

an attacker trying to manipulate the device execution.

Investigate on the network configuration and the possible

presence of malicious actors.

SIGN:OT_DEVICE-STOP OT device stop HIGH 9 An OT device program has been requested to stop 18.0.0 YES

request (e.g. by the engineering workstation). This may be

something due to Engineering operations, for instance

the maintenance of the program itself or a system

updates. However, it may indicate suspicious activity from

an attacker trying to manipulate the device execution.

Investigate on the network configuration and the possible

presence of malicious actors.



SIGN:OUTBOUND- High rate of LOW 9 A host has shown a sudden increase of outbound 21.0.0 YES

CONNECTIONS outbound connections. This could be due to the presence of a


connections malware.

Investigate on the reason behind such connections to

the outside on the device, and consider to update the

network configuration to prevent them.

SIGN:PACKET-RULE Packet rule match LOW 9 A packet has matched a Packet rule. 18.0.0 YES

Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:PASSWORD:WEAK Weak password HIGH 5 A weak password, possibly default, has been used to 18.5.0 YES

access a resource.

Consider to update your passwords.

SIGN:PROGRAM:CHANGE Program change MEDIUM 6 A changed program has been uploaded to the OT device. 18.0.0 YES

This can be a legitimate operation during maintenance

and upgrade of the software or an unauthorized tentative

to disrupt the normal behavior of the system.

Verify the device configuration and status, and the

possible presence of malicious actors.

SIGN:PROGRAM:TRANSFER Program transfer HIGH 6 A program has been transferred between an OT Device 18.0.0 YES

(e.g. an IED) and a workstation / local SCADA. This

can be a legitimate operation during maintenance and

upgrade of the software or an unauthorized attempt to

read the program logic.

Investigate on the entity that has initiated the transfer and

on the program content.

SIGN:PUA-DETECTED PUA detection MEDIUM 8 A potentially unwanted application payload (PUA) has 20.0.6 NO

been transferred. This is normally less dangerous than a

malware payload.

Investigate on the malware source and infected device,

and consider to remove the payload.

SIGN:SIGMA-RULE Sigma rule match LOW 9 Rule-dependent. A suspicious local event has been 23.0.0 YES

detected on a machine.

Rule-dependent. Verify the device configuration

and status, and the possible presence of malicious

processes.

SIGN:SUSP-TIME Suspicious time HIGH 7 A suspicious time has been observed in the network. 20.0.0 YES

value There could be a malfunctioning device or a packet

injection.

Verify the device configuration and status.

SIGN:USB-DEVICE New USB device PARANOID 6 This is most likely a human driven event. 23.0.0 NO

plugged
USB devices might be a physical infiltration vector

carrying files with malicious behaviour. Check the device

nature and its content.

SIGN:WEAK-ENCRYPTION Weak encryption PARANOID 6 The communication has been encrypted using an 19.0.5 YES

obsolete cryptographic protocol, weak cipher suites or

invalid certificates.

Consider to update to more secure algorithms or evaluate

the risks of having this technology still used on the

network.

Custom Checks
These are checks set in place by the user. Typically the nature of an event related to a custom check
cannot generally be referred to a problem per se, if not contextualized to the specific network and
installation.

Type ID Name Sec. Prof. Risk Details Release Trace

ASRT:FAILED Assertion failed LOW 0 An assertion has failed. 18.0.0 YES

Assertion dependent. Check out the elements making the

assertion to fail.

GENERIC:EVENT Generic Event LOW 0 A generic event has been generated by the SDK. 20.0.5 YES

Event dependent. More details are available in the

description of the event.

NET:INACTIVE-PROTOCOL Inactive protocol LOW 3 The link has been inactive for longer than the set 18.0.0 YES

threshold.

Investigate whether that is the expected behavior or

something prevents the link from working.

NET:LINK-RECONNECTION Link reconnection LOW 3 The link configured to be persistent has experienced a 18.0.0 YES

complete TCP reconnection.

Investigate whether the reconnection is legitimate.

NET:TCP-SYN TCP SYN LOW 3 A connection attempt (TCP SYN) has been detected on 18.0.0 YES

a link.

Investigate on the entity that has attempted to connect.

PROC:CRITICAL-STATE- Critical state off LOW 1 The system has recovered from a user-defined critical 18.0.0 YES

OFF process state.

Investigate such critical state.

PROC:CRITICAL-STATE-ON Critical state on LOW 9 The system has entered in a user-defined critical process 18.0.0 YES

state.

Investigate on the values and whether the process is at

risk.

PROC:INVALID-VARIABLE- Invalid variable LOW 3 A variable has showed a quality bit set for longer than the 18.0.0 YES

QUALITY quality set threshold.

Investigate on the protocol implementation and the

process status.

PROC:NOT-ALLOWED- Not allowed LOW 3 A variable has shown one or more specific quality bits the 18.0.0 YES

INVALID-VARIABLE variable quality user set as not allowed.

Investigate on the protocol implementation and the

process status.

PROC:STALE-VARIABLE Stale variable LOW 3 A variable has not been read/written for longer than the 18.0.0 YES

set threshold.

Investigate on the protocol implementation and the link

and process status.



Incidents Dictionary

Protocol Validations
An undesired protocol behavior has been detected. This can refer to a wrong single message, to
a correct single message not supposed to be transmitted or transmitted at the wrong time (state
machines violation) or to a malicious message sequence. Protocol specific error messages indicating
misconfigurations also trigger alerts that fall into this category.

Type ID Name Details

INCIDENT:ANOMALOUS-PACKETS Anomalous Packets Malformed packets have been detected during the
deep packet inspection.

Investigate on the protocol implementation, and the

possible presence of malicious actors.

Virtual Image
Virtual image represents a set of information by which Guardian represents the monitored network.
This includes for example node properties, links, protocols, function codes, variables, variable
values. Such information is collected via learning, smart polling, or external contents, such as Asset
Intelligence. Alerts in this group represent deviations from expected behaviors, according to the learned
or fed information.
Note: when an alert of this category is raised, if the related event is not considered a malicious attack
or an anomaly, it can be learned.

Type ID Name Details

INCIDENT:INTERNET-NAVIGATION Internet Navigation A node has started surfing the Web.

Investigate the network and firewall configuration, and

the reason why the endpoint shows this behavior, to

validate this is a legitimate action.

INCIDENT:VARIABLES-FLOW-ANOMALY Variables Flow Anomaly A change in the update interval of a variable that used
to be written or read at a regular interval has been
detected.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.

INCIDENT:VARIABLES-FLOW- Variables Flow Anomaly on Consumer A consumer which used to write or read a variable with
ANOMALY:CONSUMER a regular interval has been detected to have changed

its update interval.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.

INCIDENT:VARIABLES-FLOW- Variables Flow Anomaly on Producer A Producer which used to write or read a variable with
ANOMALY:PRODUCER a regular interval has been detected to have changed

its update interval.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.

INCIDENT:VARIABLES-NEW-VALUES New Values on Producer New variable values have been detected in a device.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.

INCIDENT:VARIABLES-NEW-VARS New Variables on Producer New variables have been detected in the system.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.



INCIDENT:VARIABLES-NEW-VARS:CONSUMER New variables request from consumer A new variable has been detected in a Consumer
device.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.

INCIDENT:VARIABLES-NEW-VARS:PRODUCER New variables transmission from producer A new variable has been detected in a Producer
device.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.

INCIDENT:VARIABLES-SCAN Variable Scan A node in the network has started scanning not existing
variables.

Investigate whether this is a malicious operation or the

devices configuration should be updated.

Built-in Checks
Built-in checks are based on specific signatures or hard-coded logics with reference to: known
ICS threats (by signatures provided by Threat Intelligence), known malicious operations, system
weaknesses, or protocol-compliant operations that can impact the network/ICS functionality. They
might also leverage the Learning process to be more accurate.

Type ID Name Details

INCIDENT:BRUTE-FORCE-ATTACK Brute-force Attack Several failed login attempts to a node, using a specific
protocol, are detected.

Investigate on the host attempting the login attempts.

INCIDENT:ENG-OPERATIONS Engineering Operations Various operations to modify the configuration, the


program, or the status of a device have been detected.

Validate the engineering operations.

INCIDENT:FUNCTION-CODE-SCAN Function Code Scan A node has performed several actions that are not
supported by the target devices.

Investigate the source and destination devices

configuration.

INCIDENT:ILLEGAL-PARAMETER-SCAN Illegal Parameter Scan A node has performed a scan of the parameters
available on a device.

Investigate the source authenticity.

INCIDENT:MALICIOUS-FILE Malicious File A compressed archive with some malware inside has
been transferred.

Investigate on the malware source and infected device,

and consider to remove the file.

INCIDENT:SUSPICIOUS-ACTIVITY Suspicious Activity Suspicious activity that can be potentially related to


known malware has been detected over two nodes.

Investigate on the malware source and infected device.

INCIDENT:FORCE-COMMAND Force Command A command to manually force a variable value has


been detected.

Investigate on the entity that has initiated the forcing.

INCIDENT:WEAK-PASSWORDS Weak Passwords Several weak passwords have been detected on this
communication.

Consider to update to secure communication or

evaluate the risks of having this data exposed on the

network.

Hybrid Threat Detection


The Hybrid category is assigned when alerts belonging to different categories, as defined in the Alerts
Dictionary, are grouped within a single incident. The other incident categories are as defined in the
Alerts Dictionary.

Type ID Name Details

INCIDENT:NEW-COMMUNICATIONS New Communications A node has started to communicate with a new


protocol.

Investigate whether such communication is legitimate.

INCIDENT:NEW-NODE New Node A new node has started to send packets in the network.

Validate the set of events and learn them if legitimate,

or treat them as anomalies.

INCIDENT:PORT-SCAN Network Scan A node has executed a series of scans in the network.

Investigate whether it is an expected behavior or a

malicious scan activity is ongoing.



Packet rules
This topic describes the packet rules used to detect malicious network activity and generate alerts.

Introduction
Packet rules are a tool provided by the Nozomi Networks solution to detect malicious network activity
and to generate alerts. Packet rules enrich and expand the checks that are already performed on the
network traffic. The Nozomi Networks solution checks and analyzes all traffic against packet rules.
An alert of type SIGN:PACKET-RULE is sent when a match is found. To explore packet rules and learn
how to edit them, go to Threat Intelligence on page 271.

Packet rule format


Packet rules have two logical sections: rule header and rule options. The rule header contains the
rule's action, transport protocol, source IP address, source ports to match, destination IP address and
destination ports to match. The rule options describe conditions for the match, with details about the
alert that will be generated in case of a match.

Basic packet rule sections


This topic describes the basic packet rule sections.

action       Action to execute on match (only alert is currently supported)
protocol     Transport protocol to match, which can be ethernet, tcp, udp, ip, ipv4,
             ipv6, icmp
src_addr Source IP address to match; this can be any (to match everything), or a
valid IP address. In the former case, no check is performed; in the latter, the
source node ID is compared against the specified IP address.
src_port(s) Source ports to match. The format can be any (to match everything), a
single number, a set (e.g., [80,8080]), a range (e.g., 400:500), a range
open to the left bound (e.g., :500), or a range open to the right bound (e.g.,
400:). A set can contain a combination of comma-separated single ports and
ranges (e.g., [:5,9,10,12:]).
dst_addr Destination IP address to match; this can be any (to match everything), or a
valid IP address. In the former case, no check is performed; in the latter, the
destination node ID is compared against the specified IP address.
dst_port(s) Destination ports to match. The format can be any (to match everything),
a single number, a set (e.g., [80,8080]), a range (e.g., 400:500), a range
open to the left bound (e.g., :500), a range open to the right bound (e.g.,
400:). A set may contain single ports and ranges separated by commas
(e.g., [:5,9,10,12:]).
options Options alter the behavior of the packet rule and attach information to it.
Options are a list of key-value pairs separated by semi-colons (e.g.,
content: <value1>; pcre: <value2>).
Options are further explained in the next section.
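For orientation, a complete rule that combines the header fields above with a few of the options described in the next section might look like the following minimal sketch; the destination address, port, and message text are illustrative only:

alert tcp any any -> 192.168.10.20 445 (msg:"SMB traffic containing suspicious marker"; content:"SMB|FF FF FF|"; reference:cve,2017-0144;)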

Options
There are two categories of options: general rule options and detection options.
• General rule options provide information about the rule but do not have any effect during
detection. General rule options include msg and reference.
• The msg rule option tells the logging and alerting engine what message to print with a packet
dump or an alert.
• The reference keyword allows rules to include references to external attack identification
systems.
• Detection rule options allow the user to set rules that search for specific content in the packet
payload and trigger a response based on that data.
Detection rule options include:

Payload options: Options that look for data inside the packet payload.
Non-payload Options that look for non-payload data.
options:
Post-detection Options that are based on rule-specific triggers that occur after a rule
options has "fired".

The set of supported detection options includes: content, byte_extract, byte_jump, byte_math,
byte_test, dsize, flags, flow, flowbits, file_data, frag_bits, id, isdataat,
pkt_data, pcre, urilen.

msg Defines the message that will be present in the alert.

Example: msg:"a sample description"

reference Defines the CVE associated with the packet rule.

Example usage: reference:cve,2017-0144;

content Specifies the data to be found in the payload; may contain printable chars, bytes
in hexadecimal format delimited by pipes, or some combination of them.
Examples:
• content: "SMB" searches for the string SMB in the payload
• content: "|FF FF FF|" searches for 3 bytes FF in the payload
• content: "SMB|FF FF FF|" searches for the string and 3 bytes FF in the
payload
The content option may have several modifiers that influence the behavior:
• depth: specifies how far into the packet the content should be searched
• offset: specifies where to start searching in the packet
• distance: specifies where to start searching in the packet relatively to the last
option match
• within: to be used with distance that specifies how many bytes are between
pattern matches
Example:
Given the rule alert tcp any any -> any any (content:"x"; content:"y"; distance: 2; within: 2;),
the packet {'x', 0x00, 0x00, 0x00, 'y'} will match, while the packet {'x', 0x00, 0x00, 0x00, 0x00, 'y'}
will not, because the distance and within constraints are not respected.

byte_extract Reads bytes from the packet and saves them in a variable.

Syntax: byte_extract:<bytes_to_extract>, <offset>, <name> [,


relative][, big|little]
For example: byte_extract:2,26,TotalDataCount,relative,little
reads two bytes from the packet at the offset 26 and puts them in a variable
called TotalDataCount. The offset is relative to the last matching option and the
data encoding is little endian.
byte_jump Reads the given number of bytes at the given offset and moves the offset by
their numeric representation.
Syntax: byte_jump:<bytes to convert>,<offset>[,relative]
[,little][,align]
For example: byte_jump:2,1,little; reads two bytes at offset 1, interprets
them as little endian and moves the offset.
byte_math Reads the given number of bytes at the given offset, performs an arithmetic
operation, saves the result in a variable and moves the offset.
Syntax: byte_math: bytes <bytes to convert>, offset
<offset>, oper <operator>, rvalue <r_value>, result
<result_variable>[,relative][,endian <endianess>]
For example: byte_math:bytes 2, offset 1, oper +, rvalue 23,
result my_sum; reads two bytes at offset 1, interprets them as big endian,
adds 23 to the value, stores the result into the variable my_sum and move the
offset.
byte_test Tests a byte against a value or a variable.

Syntax: byte_test:<bytes to convert>, <operator>, <value>,


<offset> [, relative][, big|little] where <operator> can be = or >.
For example: byte_test: 2, =, var, 4, relative; reads two bytes at
offset 4 (relative to the last matching option) and tests if the value is equal to the
variable called var.
dsize Matches payloads of a given size.

Syntax: dsize: min<>max; or dsize: <max; or dsize: >min;


Matches if the size of the payload corresponds to the given boundaries. The IP,
TCP and UDP headers are not considered in the payload dimension.
id Matches IP packets with a given ID.

Syntax: id: <id>;

isdataat Verifies that the payload has data at the given position.

Syntax: isdataat:<offset>[,relative]
For example: isdataat:2,relative; verifies that there is data at offset 2 relative to
the previous match.
flags Matches TCP packets with given flags.

Syntax: flow:
[established,not_established,from_client,from_server,
to_client,to_server]
For example: flow: established,from_server; matches responses in an
established TCP session.

flow Matches TCP packets with given flags.

Syntax: flow:
[established,not_established,from_client,from_server,
to_client,to_server]
For example: flow: established,from_server; matches the responses
in an established TCP session.

flowbits Checks and sets boolean flags in sessions.

Syntax: flowbits:
[set,setx,unset,toggle,reset,isset,isnotset]
For example: flowbits: set,has_init; sets the has_init flags on
the session if the packet rule matches the packet. flowbits: isnotset,
has_init matches on packets whose session does not have the flag
has_init set.
file_data Moves the pointer to the beginning of the content in an HTTP packet.

Syntax: file_data;

frag_bits Checks the flags of the header of IP packets.

Syntax: fragbits: (MDR+*!);


For example: fragbits: MR*; matches on packets that have the More
fragments or Reserved bit flags set.
pkt_data Moves the pointer to the beginning of the packet payload.

Syntax: pkt_data;

pcre Specifies a regex to be found in the payload.

Syntax: pcre:"/<regex>/[ismxAEGR]"
Pcre modifiers:
• i: case insensitive
• s: include newline in dot metacharacter
• m: ^ and $ match immediately following or immediately before any newline
• x: ignore empty space in the pattern, except when escaped or in characters
class
• A: match only at the start
• E: $ will match only at the end of the string ignoring newlines
• G: invert the greediness of the quantifiers
• R: match is relative to the last matching option

urilen Matches on HTTP packets whose URI has a specified size.

Syntax: urilen: min<>max; or urilen: <max; or urilen: >min;
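Putting several detection options together, a rule that flags an established HTTP request with a suspiciously long URI, a large payload, and a payload matching a regular expression could be sketched as follows; the port, size bounds, and pattern are illustrative only:

alert tcp any any -> any 80 (msg:"Suspiciously long HTTP request with SQL keywords"; flow: established,from_client; urilen: >512; dsize: >600; pcre:"/select.+from/i";)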



Hybrid threat detection


This topic describes the types of threat analysis that the Nozomi Networks solution uses in its threat
detection strategy.

Introduction
Hybrid threat detection considers not just one method of threat detection, but several. Guardian
correlates the output of these methods to provide input for a powerful and comprehensive
threat detection strategy. The purpose of hybrid threat detection is to understand the current
framework and environment and to identify risks by evaluating the information obtained from the
four types of threat analysis.

Types of threat analysis


The types of threat analysis are:

Anomaly-based analysis           Guardian learns the behavior of the observed network and alerts users when a significant deviation is detected in the system. This analysis is generic and can be applied to every system.
Yara rules                       Guardian extracts files transferred by protocols such as HTTP or SMB and triggers an inspection by the Yara engine. Guardian raises an alert when a Yara rule matches. Yara rules typically detect the transfer of malware. The Nozomi Networks solution provides a set of Yara rules that can be expanded by users (see the example sketch after this list).
Packet rules                     Packet rules enable users to define a criterion to match a malicious packet and raise an alert. The Nozomi Networks solution provides a set of packet rules that can be expanded by users.
Indicators of Compromise (IoC)   Indicators of Compromise (IoC) loaded via Structured Threat Information eXpression (STIX) provide several hints of threats, such as malicious domains, URLs, IPs, etc.
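As a rough illustration of the kind of user-defined rule that can be added, the sketch below uses standard Yara syntax; the rule name, marker string, and metadata are invented for the example and do not come from the built-in rule set:

rule Suspicious_Dropper_Marker
{
    meta:
        description = "Example rule: flags files carrying a specific marker string"
    strings:
        $magic  = { 4D 5A }                     // 'MZ' header typical of Windows executables
        $marker = "EXAMPLE_MALWARE_MARKER"      // illustrative marker string
    condition:
        $magic at 0 and $marker
}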
Chapter 7

Vulnerability assessment

The Vulnerability assessment module finds weaknesses in OT, IoT, IIoT, and IT systems, then analyzes
them to identify, quantify, and rank the vulnerabilities in the environment.

Topics:
• Basics
• Passive detection
• Configuring vulnerability detection

Basics
The Nozomi Networks solution continuously discovers vulnerabilities in monitored devices. Detection is
configured through a series of settings that enable users to filter the data.

Introduction
The Nozomi Networks solution matches a device's Common Platform Enumeration (CPE), using its
structured IT naming scheme, with the National Vulnerability Database and other data sources, to
continuously discover vulnerabilities.
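For reference, CPE identifiers follow the standard CPE 2.3 naming scheme; the vendor, product, and version in the sketch below are purely illustrative:

cpe:2.3:o:examplevendor:example_plc_firmware:4.2.1:*:*:*:*:*:*:*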
To access the Vulnerability assessment module, from the Web UI, go to the main menu dropdown list
( ) in the upper left corner of the screen and select Vulnerabilities. The Vulnerabilities screen opens
at the Assets tab. Other tabs include: List tab and Stats tab.

Assets tab
The Assets tab screen displays a list of assets with known vulnerabilities, along with a summary
of the vulnerability severity. Click the asset name to open the Asset details pop-up window, which
displays a detailed list of the vulnerabilities discovered in that asset. Go to Assets on page 78 for more
information.

Figure 191: Vulnerabilities screen (Assets tab)

Figure 192: Asset details pop-up window

List tab
The List tab displays a comprehensive list of vulnerabilities in the environment, from which you may
perform a global, in-depth analysis.

1. From the Web UI, go to the collapsible ( ) icon in the upper left corner of the screen and select
Vulnerabilities from the dropdown menu. The Vulnerabilities screen appears. Select the List tab.

Figure 193: List tab

Note: Click the column heading or the arrow to the right of it to sort the assets in ascending or
descending order. Click the x button to remove the sorting.

2. Perform any of the following actions from the top right part of the List tab screen:

a. Toggle to Only unresolved to display only the unresolved vulnerabilities.


b. Toggle to Live to automatically reload the page every few seconds to continuously display the
most updated list.
c. Click the Export ( ) icon to export the report, as needed.
3. Select the columns to display from the # Selected drop-down menu at the right of the table. Some
of the available table column selections and definitions are described below:

Actions: Action to be performed on the asset: change resolution ( ). Click to change the asset resolution, as needed.
CVE: CVE (Common Vulnerabilities and Exposures) name (for CVE details, click the link)
Node: Node IP address
Score: CVSS (Common Vulnerability Scoring System) score assigned to the CVE
CWE: CWE (Common Weakness Enumeration) number that identifies the vulnerability
CWE name: Name of the category for the vulnerability
CVE creation date: Time and date information about when the vulnerability was discovered (not installation specific, but CVE-specific)
CVE update date: Time and date information about when the vulnerability was updated (not installation specific, but CVE-specific)
Discovery date: Date the CVE was discovered in the monitored environment
Matching CPEs: List of CPEs (Common Platform Enumeration) allowed to match this vulnerability
Likelihood: A value between 0.1 and 1.0, where 1.0 represents the maximum likelihood of the CVE being present
Resolved: True if the vulnerability has been resolved
Resolution status: Resolution status: mitigated (i.e., the vulnerability is solved) or accepted (i.e., the vulnerability is not considered harmful)
Resolution reason: Specifies how the vulnerability has been resolved
Resolution source: Specifies the source of the resolution
Summary: Description of the vulnerability
CVE source: CVE source: Nozomi-curated, Nozomi-original, or National Vulnerability Database (NVD)

Figure 194: CVE Details


4. (Optional) From the Actions column, select the resolution ( ) icon to mark the vulnerability as
resolved.
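The same vulnerability data can also be explored from the query interface (see the Queries chapter) through the node_cves data source. The following is a sketch only: it assumes the fields are named cve_score and resolved, mirroring the Score and Resolved columns described above.

node_cves | where resolved == false | sort cve_score desc | head 20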

Stats tab
The Stats tab displays high level information in graph format with top CPEs, CVEs, and CWEs.
1. From the Web UI, go to the collapsible ( ) icon in the upper left corner of the screen and select
Vulnerabilities from the dropdown menu. The Vulnerabilities screen appears. Select the Stats
tab. The Stats screen appears.
2. Hover over the pie charts for specific details about the vulnerabilities.

Figure 195: Stats tab

Top CPEs: Displays title of vulnerability, date of vulnerability type, percentage of the total vulnerabilities, and actual count of that vulnerability
Top CWEs: Displays title of vulnerability, percentage of the total vulnerabilities, and actual count of that vulnerability
Top CVEs: Displays title of vulnerability, date of vulnerability type, percentage of the total vulnerabilities, and actual count of that vulnerability

Passive detection
The Nozomi Networks solution offers continuous vulnerability detection by passively listening to
network traffic. To detect known and unknown threats, the Nozomi Networks solution matches known
vulnerabilities and signatures with anomalous behavior analysis.

Introduction
Through passive monitoring, the Nozomi Networks solution provides comprehensive device discovery
and inventory of ICS (Industrial Control Systems) and IT assets.

Vulnerability Matching
The Nozomi Networks solution compares discovered asset information against vulnerabilities identified
in the Common Vulnerabilities and Exposures (CVE) database to determine if there is a match. We use
the U.S. government’s National Vulnerability Database (NVD) for standardized naming, description,
and scoring for vulnerability descriptions.

Process variable tracking


Nozomi Networks not only identifies the make, model, serial number, and firmware version of OT end
devices, but also tracks the process variable communication between them. Process variable tracking
covers the digital and analog input and output information that provides detailed analysis of anomalous
behavior. See Unknown threat detection.

Known threat detection


Nozomi Networks uses hard-coded logic and signatures to detect threats. Hard-coded logic within
Guardian looks for: (1) known bad behavior, such as the use of default passwords, and (2) potentially
dangerous actions, such as changes to Industrial Controller programs. The use of signatures includes:
• packet rules applied to communication within the OT network,
• Yara rules looking for malware packets within files transferred across the OT network,
• Stix indicators that reflect the cyber reputation of external sites contacted by devices in the OT
network.
The system continuously updates Threat Intelligence via an always-on connection with the Nozomi
Networks Threat Intelligence portal or via the Vantage cloud platform.

Unknown threat detection


The Nozomi Networks solution creates a baseline to begin detecting anomalous system behavior. The
baseline learns devices in the entire environment including identifying devices that talk to each other,
how frequently they communicate, and the industrial protocols used. Learning mode lasts between a
few days and a few weeks, depending on the process being monitored. After baseline creation, the
Nozomi Networks solution moves into protecting mode to detect deviations from the baseline, such as
a new laptop connecting to the network, or an end device talking to a new device.
The Nozomi Networks solution monitors assets down to the process level and detects subtle
differences in device communication. If device A typically reads from device B, then suddenly begins
writing to device B, this might mean a minor change in system operations, or it might mean that a
power supply is switching off, or other similar change.

Existing operational system insights


The passive listening system also detects existing system changes that provide insight into possible
cyber attacks, or reasons for under-performing OT systems:
• Devices that stop communicating or are communicating less frequently than expected
• High levels of retransmissions
• High or low throughput
• Failed devices (communication has stopped or slowed)

• Unexpected communication paths


Configuring vulnerability detection
Through comprehensive risk monitoring and threat detection, the Nozomi Networks solution provides
detailed passive vulnerability analysis.

Introduction
The Nozomi Networks solution receives vulnerability-related information from the following sources:
• Nozomi's vulnerabilities-only database, if you are not subscribed to Nozomi Networks Threat Intelligence (TI)
• Nozomi's Threat Intelligence service, if you are subscribed to this service (see Threat Intelligence on
page 271 for more information), which enriches OT, IoT, and IIoT information to improve threat
detection and vulnerability identification

Procedure
To use the vulnerability-only database to configure vulnerability detection:
1. Download the vulnerability-only database from Nozomi Networks at https://nozomi-contents.s3.amazonaws.com/vulns/vulnassdb.tar.gz.
2. Use a tool like scp or WinSCP to upload the database to the /data/tmp folder:

scp vulnassdb.tar.gz admin@<sensor_ip>:/data/tmp


3. Execute these commands in the sensor:

enable-me

cd /data/contents

tar xzf /data/tmp/vulnassdb.tar.gz


4. Now reload the database with the command:

service n2osva stop


5. Additional vulnerabilities can be added to the system. They must be in the National Vulnerability
Database (NVD) format, and they must be placed in the /data/contents/vulnass folder.
However, Nozomi Networks gives full support only for the files it distributes itself.
Chapter

8
Smart Polling
Topics:
• Plans
• Strategies
• Configuring Smart Polling plans
• Extracted information
• Customizing the log level
• Smart Polling on CMC
• Smart Polling Progressive mode
Smart Polling is a solution that allows Guardian to gather information about new nodes and to enrich existing nodes by actively contacting them.
Smart Polling allows you to define plans that give polling instructions to Guardian. For example: poll specific nodes, at specific times, using certain method(s) (i.e., poll known PLCs in the 192.168.38.0/24 subnet every hour using the EtherNet/IP protocol).
Guardian uses the data extracted by Smart Polling to enrich its knowledge about the assets in the environment. For example:
• PLC nodes polled using the EtherNet/IP protocol are enriched with information, such as vendor, device type, or serial number, in Assets or Network;
• Windows computers polled using the WinRM protocol provide a list of the installed software in the Node points tab;
• Linux machines polled using SSH appear in Assets and Network with the exact name of the distribution and their up-time.
Note: To enable Smart Polling, install and upgrade using the advanced bundle, that is VERSION-advanced-update.bundle. Do not use VERSION-standard-update.bundle.

Plans
Plans are user-defined directives that provide polling instructions to Guardian to obtain information
about devices for enrichment or monitoring purposes.
Each plan is characterized by:
• Strategy: Protocol or application used to connect with the desired service.
• Label: Text string used to identify the plan.
• Query: Queries are used to define the subset of devices to poll.
• Schedule: Time interval in seconds between executions of the plan.
• Any additional parameters defined by the chosen strategy. For example, the SEL strategy requires
an identity, while the SNMPv2 lets you restrict the requests to selected OIDs.
From the Web UI, select Smart Polling. The Smart Polling screen appears.

Figure 196: Smart Polling screen

A Plans tab: List of Smart Polling plans; allows you to add a new plan (upper right corner) and perform actions on the plans. See Actions, Adding a plan, Modifying a plan, or Adding nodes from Network for additional information.
B Node points tab: Provides details of the node points.
C Settings tab: Smart Polling progressive settings:
• Disable progressive mode and disable all progressive plans
• Enable progressive mode for selected strategies
D Health tab: Identifies the status of the CPU threads being used for Smart Polling and queued jobs.
E Actions: Allows users to take action on existing plans, including:
• Disable ( ) / enable ( ) the plan
• Perform an on-demand execution ( )
• Edit the plan ( )
• Show the plan's execution history ( ) (see Figure 197: Example of Activity Log)
• Delete the plan ( )

Figure 197: Example of Activity Log

F Filter: Allows you to filter plans by node ID.
G List of Plans: Lists plans. Click the arrow next to the plan. At the plan popup, select +Add nodes to plan or Poll selected nodes, then check the box next to the node(s) to add or to poll.
H Time last polled: Displays the last time that the node was polled.
I Nodes in the plan: Lists the nodes in the plan.
J +Add plan: Allows you to add a new plan.

Strategies
A strategy is a protocol or application used to on-board new devices, grant varying access levels, and
keep networks secure.
Currently, Nozomi Networks supports the following internal and external strategies:

Internal Strategies
EthernetIP Extract information using the EtherNet/IP protocol
HTTP Extract information using the http/https service
Modicon Modbus Extract information using Modicon Modbus devices
SEL Extract information using SEL devices
SNMPv1 Extract information using the SNMPv1 service
SNMPv2 Extract information using the SNMPv2 service
SNMPv3 Extract information using the SNMPv3 service
SSH Extract information using the SSH service
WinRM Extract information using the WinRM service
WMI Extract information using the WMI service
UPnP Extract information using the UPnP protocol
External strategies
CB Defense Used with Carbon Black services
DNS reverse lookup Extracts information about nodes by using the DNS protocol
Aruba ClearPass Sends and extracts asset information from ClearPass through HTTP Rest APIs. See Configuring Smart Polling plans on page 263 (Step 2).
Cisco ISE Extracts asset information from Cisco ISE using the pxGrid HTTP API
ServiceNow Extracts asset information from ServiceNow using the REST Table API. It also allows you to automatically close Guardian's incidents whenever their corresponding incidents in ServiceNow are closed.
Tanium Extracts asset information from Tanium using the Tanium Server REST API

Configuring Smart Polling plans


Smart Polling allows you to add, modify, remove, and enable/disable plans, as well as to see the
execution history.

Adding a Smart Polling plan


Perform these steps to add a Smart Polling plan:
1. From the Web UI, select Smart Polling.
2. From the Smart Polling screen, click the top-right New plan button. From the Plan configuration
popup, define plan parameters and check plan functionality.

Figure 198: Plan configuration popup


3. In the Label field, add the plan name.
4. In the Strategy field, select a strategy from the dropdown menu.
Note: Use the Credentials manager to add credentials to the nodes targeted by the plan. See
Credentials manager on page 178 for additional information.
5. In the Schedule field, enter a run interval in seconds.
6. In the Query field, (a) enter a query or (b) add one or more identities to define the nodes to be
polled. Identities contain the nodes to be polled and the credentials needed to poll them.
7. In the Host to test field, enter an IP address, then click Check connection to perform a poll on
the corresponding node. The test result includes executed steps, retrieved data, and any error
messages. Use the Host to test field to determine if the Smart Polling plan is correct and to
troubleshoot potential plan hurdles, such as incorrect credentials, or the inability of Guardian to
reach plan nodes.

Figure 199: Example of successful connection check


8. Click the New plan button to create the new plan.

9. (Optional) To add nodes to the plan, or to poll the nodes, should they not be returned by the plan's
automatic query, click the arrow next to the plan.
Note: The plan works normally even if you don't manually add nodes to it.

Figure 200: Add node or poll nodes

a. To add nodes to the plan, click the +Add nodes to plan button. The Add nodes to plan popup
appears. Add one IP address per line, then click the Add button to add the list of nodes to the
plan.

Figure 201: Add nodes to plan popup


b. To poll specific nodes, click the box next to the node(s) to poll, then click the Poll selected
nodes button. The Poll selected nodes button turns green and the line next to the node
displays a few seconds ago as the last updated time.
c. Repeat this step to add more nodes to the plan or to poll selected nodes.

Modifying a plan
Perform an action on an existing plan to modify the plan:
1. From the Smart Polling screen, click the configuring plan ( ) icon next to a plan to modify it. The
Plan configuration popup appears.

Figure 202: Plan configuration popup


2. In the Strategy field, note the existing strategy.
Note: When you are modifying a plan, you cannot change its strategy. For strategies that require
credentials, use the Credentials manager on page 178 to add credentials to the set of nodes to be
polled when using a query.
Important: Nozomi Networks uses the ClearPass Policy Manager external strategy, among
other external strategies. ClearPass sets the bearer token's Access Token Lifetime expiration
date, so exercise caution when configuring the bearer token because it expires when that lifetime elapses. See
Credentials manager on page 178 for additional information.
3. In the Schedule field, enter a run interval in seconds.
4. In the Data to be collected dropdown menu:
a. Select the fields from the dropdown menu on which to collect data.
b. Select the fields from Vulnerabilities Detection dropdown menu to specify which vulnerabilities
to search.
5. In the Target field, select either Use identities or Use query.
a. If you select Use identities, choose the identities corresponding to the targeted nodes from the
list on the left.
b. If you select Use query, the result of the query determines the list of node points. Use the
Credentials manager on page 178 to add credentials to the nodes targeted by the plan.
6. In the Timeout field, enter the time to wait before plan execution times out and fails. The plan
executes again according to the time interval entered in the Schedule field.
7. Click the Edit plan button to save your edits.
Note: The available settings vary by plan.

Adding nodes from Network


Once you configure a plan, you can add arbitrary nodes to its target from the Network screen. These
nodes are polled by the plan, even if the plan's query does not return them.
Note: The additional nodes added from the Network are added independently of the configured query.
1. From the Web UI, go to Network. The Network screen appears.
2. At the Nodes tab, click the Smart Polling icon ( ) next to a node to add it to the plan. The Smart
polling configuration popup appears.
3. From the dropdown menu of the Select an existing plan to add the node to field, select a plan
whose parameters you would like to change.
Note: Fields that are not modified in this popup are automatically populated with the plan-
configured values.
4. Update the specific field(s) on which to change the existing plan parameter(s).
Note: Fields that are not modified in this popup are automatically populated with the plan-
configured values.
5. Click Add to save your changes.

Figure 203: Additional nodes configuration

Extracted information
Smart Polling strategies extract information during normal activity to enrich existing targeted nodes.
The enriched information permeates throughout the Nozomi Networks solution, and can be found in
Assets on page 78, Network on page 86 and Vulnerabilities on page 142.

Extracted information examples and history


Examples of Smart Polling asset enriched information:

Figure 204: Product name source information tooltip

Figure 205: Name, type, and operating system retrieved with Smart Polling

To see the enriched information history for each node:


1. In the Web UI, select Smart Polling from the main menu.
2. Select the Node points tab.

Figure 206: Node points screen


The Smart Polling screen features three columns. When you select a node point from the first column,
each subsequent column represents an increasing level of detail for the extracted information for that
node point.

Table 16: Smart Polling history

Column Description
A Nodes contacted by a plan
B Most recent values for the extracted node points from the nodes listed in the first column
C Details (i.e., last twenty-five values) of the extracted node point identified in the second column; a graph displays how values changed over time

For some unstructured, complex information, such as user account or installed software details
found in servers and workstations, click the link next to the historical values in the third column to see
additional historical details, as in this History details popup window example:
Note: History details are available for all nodes and node points, but extended details like this are
presented only for node points whose value is not a scalar.

Figure 207: Complex history details available

Figure 208: History details popup

Querying extracted information


To query extracted information, use the queries function with the node_points data source.
Query examples
To access the product name history for a node, use the following query:

node_points | where node_id == 192.168.1.3 | where human_name == product_name | select content | uniq | count

To access more complex information, such as details about a particular vendor's software that is
installed on a node, use the following query:

node_points | where node_id == 10.41.48.63 | where human_name == Installed_Software | expand content | select expanded_content | where expanded_content.vendor include? vendor_name | uniq

You can access the entire polling history using the node_points data source. To access just the
latest polling information, use the node_points_last data source. For example, use the following
query to access the latest installed hotfixes for each polled node:

node_points_last | where human_name == hotfixes | select node_id content

Customizing the log level


Smart Polling logs self-diagnostic information about its operations and activities during execution.

When Smart Polling logs self-diagnostic information, the logs are collected in the /data/log/n2os/n2ossp.log file.
Add the following line to the /data/cfg/n2os.conf.user configuration file to change the level of detail in the logs:

sp log_level <LEVEL>

where <LEVEL> is one of the following values (in increasing order of verbosity): FATAL, ERROR,
WARN, INFO, DEBUG.
Note: The default level is INFO.
After saving the file, restart Smart Polling with this command: service n2ossp stop.
Example: To configure the file to see only ERROR and FATAL messages, add the following rule to the
/data/cfg/n2os.conf.user file and restart the process via service n2ossp stop:

sp log_level ERROR

Note: The configured level is the minimum to be printed, so ERROR will print log lines for both
ERROR and FATAL messages, whereas FATAL will print log lines only for FATAL messages.
The service restarts automatically after the execution of the command.
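Putting the steps together, a minimal shell sketch looks like the following; it assumes you are in a sensor shell where enable-me grants the privileges needed to write to /data/cfg, as in the vulnerability database procedure described earlier:

enable-me
echo "sp log_level ERROR" >> /data/cfg/n2os.conf.user
service n2ossp stop

The echo line simply appends the rule shown above; if a sp log_level rule is already present in the file, edit it in place instead of appending a duplicate.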

Smart Polling on CMC


In the CMC, plans are read-only. Plans created in Guardian can be synchronized, along with the
associated data. By default, synchronization is disabled, but it is possible to enable it as described in
the CMC - Data synchronization tuning chapter.

Smart Polling Progressive mode


Progressive mode is a Smart Polling option that increases visibility by: (1) automating plan creation
and execution, and (2) polling the right nodes with the right parameters based on passively detected
asset information.
To enable Progressive mode:
1. In the Web UI, select Smart Polling from the main menu, then select the Settings tab.
2. At the Smart Polling Progressive Mode Setting screen, select Enable Progressive mode.
3. From the dropdown menu of the Strategy field, select one or more strategies to enable. Then, save
your settings.
Note: Smart Polling automatically identifies the optimal target for polling, i.e., only nodes returned by
the query corresponding to the selected strategy are polled. Select the Smart Polling strategies for which
you want to create a Progressive Smart Polling plan. Enabled Progressive Smart Polling plans appear in
the Plans tab.
Figure 209: Enabling Smart Polling Progressive Mode Settings

Note: If you disable Progressive mode by selecting the Disable progressive mode option, the newly
created Progressive mode Smart Polling plans stop executing, and are grayed out in the Plans tab. To
re-enable Progressive mode, select the Enable Progressive Mode option.
You can manually adjust some aspects of Progressive mode plans. For example, by default each plan
runs every 24 hours, but you can manually adjust this value.
To manually adjust the run interval for a plan:
1. At the Smart Polling screen, go to the Plans tab, and select the configure ( ) icon for the plan that
you'd like to change.
2. At the Plan configuration popup, adjust the time in seconds. Note that the default of 24 hours =
86400 seconds.
Note: The Query field is shown, but is not editable.

Figure 210: Adjusting Smart Polling Progressive mode plans


3. Click the Edit plan button to save your changes.
Chapter

9
Threat Intelligence
Topics:
• Configuring and updating
• Checking software version and license status
Threat Intelligence is a feature that enriches assets with additional information to improve detection of malware and anomalies.

Introduction
Threat Intelligence is a feature that continuously analyzes network
traffic and asset configuration details, and compares them
with industry-standard packet rules, Yara rules, indicators of
compromise, vulnerability assessments, and Common Platform
Enumeration (CPE) mapping content in order to identify malicious
events. Threat Intelligence packages can be modularly controlled
to disable or enable individual rules, and to manually add rules to
investigate and deliver customer alerts. Curated and proprietary
Threat Intelligence content is offered to Nozomi Networks and non-
Nozomi Networks customers as a subscription. This subscription
allows users to receive an automatic and continued flow of updated
Threat Intelligence information into Guardian sensors to detect the
most up-to-date methods of attack. Threat Intelligence content can
be managed from the Guardian sensor, from Vantage, or from the
Central Management Console (CMC) sensor.
This makes it easy to propagate Threat Intelligence contents to an
unlimited number of Guardian sensors. Threat Intelligence contents
can be set to update automatically, or you can manually update
the Guardian sensor via a local file to allow the system to operate in a
fully air-gapped environment.
The Threat Intelligence screen allows you to manage Packet rules,
Yara rules, STIX indicators and Vulnerabilities to provide detailed
threat information.
• Packet rules are executed on every packet. They raise an
alert of type SIGN:PACKET-RULE if a match is found. For an
explanation of how to format packet rules, see Packet rules on
page 245.
• Yara rules are executed on every file transferred over the
network by protocols like HTTP or SMB. When a match is found,
an alert of type SIGN:MALWARE-DETECTED is raised. Yara
rules conform to the specifications found at Yara Rules.
• STIX indicators contain information about malicious IP
addresses, URLs, malware signatures, or malicious DNS
domains. This information enriches existing alerts and raises
new ones.
• Vulnerabilities are assigned to each node, depending on the
installed hardware and operating system, and the software
identified in the traffic. The Nozomi Networks solution leverages
CVE, a dictionary that provides definitions for publicly disclosed
cybersecurity vulnerabilities and exposures.

Configuring and updating


Threat Intelligence requires an additional license to enable the service. Threat Intelligence must be
connected to the Internet, to Vantage, or to an upstream with access to Threat Intelligence to allow
automatic updates.

Configuring a Threat Intelligence license


To configure the Threat Intelligence license:
1. From the Web UI, select the gear ( ) icon in the upper right corner of the screen, then select
System > Updates & Licenses. The Updates & Licenses screen appears.

Figure 211: Updates & Licenses screen

Note: Alternatively, click TI in the utility navigation at the top of the screen to access the Updates &
Licenses screen.

2. At the Threat Intelligence section of the Updates & Licenses screen, click the Set new license
button in the upper right corner of the section. The Updates popup appears. You can monitor the
status of the update from this screen (in green font). For more information, see the corresponding
section in the License on page 21 page.

Figure 212: Updates screen - Set new license

Figure 213: Updates screen - Connected to Vantage or a CMC

Note: The Updates screen information varies, depending on whether Guardian is connected to
Vantage / CMC, or is standalone. In the above image, Threat Intelligence is connected to and
managed by Vantage or CMC. The Nozomi Networks solution synchronizes updates.

Enabling automatic updates


You can enable automatic Threat Intelligence updates if you are connected to Vantage or a CMC:
1. Confirm that you can reach https://nozomi-contents.s3.amazonaws.com from your
Guardian / CMC to allow the Nozomi Networks solution to obtain Threat Intelligence updates.
2. From the Web UI, select the gear ( ) icon in the upper right corner of the screen, then from the
Administration screen, select System > Updates & Licenses to monitor the status of the update.
Note: Alternatively, click TI in the utility navigation bar at the top of the screen to access Threat
Intelligence licenses.
3. At the Update service configuration screen, select the Nozomi Networks Update Service
button.
4. Check the box for the Enable network connection to update service field.
5. Click the Check connection button to check Guardian's connection to Vantage or a CMC.
Note: Monitor the status of the connection (i.e., Connection to endpoint is working) in green
font.

Figure 214: Update service configuration


6. (Optional) Click the Update now button to update the Threat Intelligence output immediately. The
update occurs by default once an hour or upon reboot.
Note: The status of the update appears in green font.
7. Save your selections.
Note: The request for input shown on the Update service configuration screen presents different
options, depending on how Guardian receives Threat Intelligence information. (1) If Guardian is
connected to a CMC or Vantage, it receives Threat Intelligence through that connection. (2) If Guardian
is not connected upstream, it receives Threat Intelligence through an S3 instance cloud service, which
requires an Internet connection. If a proxy is required for that connection, it may require authentication.
In the following image, Threat Intelligence is managed through a proxy server.
• Check the Use proxy connection field.
• Check the Enable Proxy Authentication field.

Figure 215: Connection through a proxy server

Configuring manual updates


You can manually update Threat Intelligence if your sensor or CMC is not connected to the Internet:
1. From the Web UI, select the gear ( ) icon in the upper right corner of the screen, then from the
Administration screen, select System > Updates & Licenses to monitor the status of the update.
Note: Alternatively, click TI in the utility navigation bar at the top of the screen to access Threat
Intelligence licenses.
2. Click the Manual contents upload button.

Figure 216: Manual update service configuration


3. Contact Support for the manual update package and drop it in the area shown in the image above
or click to upload the update package. After the update, new contents are propagated to the
downstream sensors.
Note: Should you switch to automatic online Threat Intelligence updates, click
the Nozomi Networks Update Service button, and follow the Enabling automatic updates
on page 273 procedure.
4. Select Close to save your settings.

Checking software version and license status


The Threat Intelligence (TI) license and software version status includes additional information about
when TI was updated, and if it is the latest version.
To check the Threat Intelligence version and license information:
1. From the navigation header, hover your mouse over TI to see the Threat Intelligence version
number.
2. From the navigation header, click TI, which brings you to the Threat Intelligence license and status
information.

Figure 217: Threat Intelligence update status and version number


3. (Alternative) From the Web UI, select the gear ( ) icon in the upper right corner of the screen, then
select System > Updates & Licenses. The Updates & Licenses screen appears.
Note: From this screen, confirm that the Threat Intelligence contents are up-to-date, and see the
version number, and the time stamp when this version was installed.

Figure 218: Updates & Licenses screen



Note: For additional information on configuring Threat Intelligence, go to Configuring and updating
on page 272.
Chapter

10
Asset Intelligence
Topics:
• Enriched asset information
• Needed input data
• Asset Intelligence license
This topic describes Nozomi Networks Asset Intelligence, including fully enriched and not matched assets.
Asset Intelligence (AI) is a Nozomi Networks Operating System (N2OS) feature with a constantly expanding database that models asset behavior.
Once an asset is recognized by the N2OS, it is in the Asset Intelligence database. More information is added from the Asset Intelligence feed for that specific asset representation to enrich the asset.
Asset Intelligence enriches assets in the inventory by modeling expected behavior, which improves overall visibility, asset management, and security. This strengthens the learned behavior from the baseline independently from the monitored network data.
The following terms help to define assets:
• Enriched asset – The asset is confirmed against the Asset Intelligence database, matched to a known entity, and any additional database information is applied to the asset. This example shows an Enriched asset:

Figure 219: Enriched asset

When Enriched, all asset information is visible, including End of Sale and Product lifecycle status. With this additional information, we are able to see more specific vulnerabilities for the individual asset.
Go to Enriched asset information on page 281 for more information on enriched information.
• Not Active – The asset is not confirmed against the database and additional information is not applied. This example shows an asset that is Not Active:

Figure 220: Not Active asset

An asset might be Not Active for several reasons, but typically it is because traffic with the asset's full protocol is not visible to the sensor, which could be the result of sensor placement. If, for example, the sensor only sees traffic for DNS from the asset, it is unable to see enough information to match against the Asset Intelligence database and the asset is tagged as Not Active.

Enriched asset information


This topic describes Asset Intelligence enrichment.
Through Asset Intelligence enrichment, Nozomi Networks enriches assets and delivers regular profile
updates for anomaly detection. An active Asset Intelligence license is required for this feature.
The Asset Intelligence effect on assets is shown by the following widget in the Asset dialog, and via the
is_ai_enriched attribute of the Assets table.

Asset state Description


Enriched asset A match was found, and the asset is enriched
Asset not matched A match was not found, and the asset is not enriched
Inactive -> Not active No content was found. This can be due to a missing license, or
content not yet imported.
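Because the enrichment state is exposed as the is_ai_enriched attribute of the Assets table, it can also be checked with a query. This is a sketch only; it assumes is_ai_enriched is a boolean field, consistent with the description above:

assets | where is_ai_enriched == true | count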

Details about enriched assets follow, but the specific enriched fields vary depending on the asset:

Asset detail Description


Type Asset type. Users have the ability to overwrite an existing value for more precision, such as updating an OT device to IED.
End of Sale Date when the hardware is officially no longer for sale.
End of Support Date when the hardware is officially no longer supported.
Lifecycle status Depending on the End of Sale and End of Support values, this
value is set to Active, End of Sale, End of Support. This generic
indication applies when precise dates for other fields are not
specified, such as "I know it's Active, but I'm not sure when it will go
End of Sale."
Protocols List of expected protocols for the asset on the N2OS learning
engine, leading to alert creation when behavior deviates from the
profile.
Function Codes List of expected function codes for the asset on the N2OS learning
engine, leading to alert creation when behavior deviates from the
profile.
Image (only Vantage) Asset picture.
Description (only Vantage) Description of the asset.

Needed input data


In order to see a match and thus trigger the enrichment, the Asset Intelligence needs to see,
depending on the specific asset model, one or more of these fields: Vendor, Product Name, URL.
Asset Intelligence license
This topic describes how to access the Asset Intelligence license.
Asset Intelligence is a separately licensed feature. To see if you have Asset Intelligence, go to
Administration > System > Updates > Licenses. You can see if your Guardian has a valid and up-to-
date license for Asset Intelligence.

Figure 221: Asset Intelligence license


Chapter

11
Queries
Topics:
• Overview
• Reference
• Examples
The Nozomi Networks Query Language (N2QL) syntax is used to create complex data processes to obtain, filter, and analyze lists of information from the Nozomi Networks solution.
Queries consist of data sources, commands, and functions in N2QL.

Overview
This topic briefly describes the query syntax.

Data source
Queries start by calling a data source.
For example:

nodes | sort received.bytes desc | head

This shows, in table format, the first 10 nodes that received the most bytes.
When adding the pie command at the end of a query, the results are displayed in pie chart format,
where each slice has node id as the label and the received.bytes field as data.
For example:

nodes | sort received.bytes desc | head | pie ip received.bytes

Functions
Query commands alone may not achieve the desired result. Consequently, query syntax supports
functions. Functions let you apply calculations to fields and use the results as a new temporary field.
For example, the query:

nodes | sort sum(sent.bytes,received.bytes) desc | column ip sum(sent.bytes,received.bytes)

uses the sum function to sort on the aggregated parameters, which produces a chart with the columns
representing the sum of the sent and received bytes.

Prefix
The $ is a prefix that changes the interpretation of the right hand side (rhs) of a where clause. By
default the rhs is interpreted as a string. With the $ prefix, the interpretation of the rhs changes to a
field name.
For example, in a query such as:

nodes | where id == 17.179.252.2



the right side of the == is expected to be a constant. If you create a query such as:

nodes | where id == id

the query tries to match all of the nodes having id equal to the string id.
If, however, you use the $, the second field is interpreted as a field, not a constant:

nodes | where id == $id

and returns the full list of records.



Reference

Data sources
These are the available data sources with which you can start a query:

alerts Raised events


appliances Downstream connected sensors synchronizing data to this local one
assertions Assertions saved by the users. An assertion represents an automatic
check against other query sources
assets Identified assets. Assets represent a local (private), physical system to
care about, and can be composed of one or more Nodes. Broadcast
nodes, grouped nodes, internet nodes, and similar cannot be Assets
accordingly
audit_log System’s log for important operational events, e.g., login, backup
creation, etc.
captured_files Files reconstructed for analysis
captured_logs Logs captured passively over the network
captured_urls URLs and other protocol calls captured over the network. Access to
files, requests to DNS, requested URLs and other are available in this
query source
cpe_items CPE maps definitions
cve_files CVE definitions
dhcp_leases IP to Mac bindings due to the presence of DHCP
function_codes Protocols' function codes used in the environment
health_log System's Health-related events, e.g. high resource utilization or
hardware-related issues or events
link_events Events that can occur on a Link, like it being available or not
links Identified links, defined as directional one-to-one associations with a
single protocol (i.e. source, destination, protocol)
microsoft_hotfixes Microsoft hotfixes information
node_cpe_changes Common Platform Enumeration changes identified over known nodes.
On the event of update of a CPE (on hardware, operating system and
software versions), an entry in this query source is created to keep
track of software updates or better detection of software
node_cpes Common Platform Enumeration identified on nodes (hardware,
operating system and software versions)
node_cves Common Vulnerability Exposures: vulnerabilities associated to
identified nodes' CPEs
node_points Data points extracted over time, via Smart Polling or via Arc, from
monitored Nodes
node_points_last node_points last samples per each included data point
nodes Identified nodes, where a node is an L2 or L3 (and above) entity able
to speak some protocol
packet_rules Packet rules definitions

protocol_connections Identified protocol handshakes/connections needed to decode process variables
report_files Generated report files available for consultation
report_folders Generated report folders
sessions Sessions with recent network activity. A Session is a specific
application-level connection between nodes. A Link can hold one or
more Sessions at a given time
sessions_history Archived sessions
sigma_rules Sigma rules definitions
sp_executions Executions of Smart Polling plans
sp_node_executions Results of Smart Polling plans executions per node
stix_indicators STIX definitions
subnets Identified network subnets
threat_models Threat Modeling definitions
trace_requests Trace requests in processing
variable_history Process variables' history of values
variables Identified process variables
yara_rules Yara rules definitions
zone_links A list of protocols exchanged by the defined zones
zones Defined network zones
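As a quick illustration of how a query begins with one of these data sources, the following sketches group identified assets by their type and show the first 20 audit log entries (head and group_by are described in the Commands section below; the asset type field is the same one used in the operator examples later in this chapter):

assets | group_by type
audit_log | head 20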

Using Basic Operators


When writing queries, keep the following in mind:

Operator | (pipe, AND logical operator)

Description To add a where clause with a logical AND, append it using the pipe character (|). For example, the query below returns links that are from 192.168.254.0/24 AND going to 172.217.168.0/24.
Example links | where from in_subnet? 192.168.254.0/24 | where to in_subnet? 172.217.168.0/24

Operator OR
To add a where clause with a logical OR, append it using the OR operator.
Description For example, the query below returns links with either the http OR the https
protocols.
Example links | where protocol == http OR protocol == https

Operator ! (exclamation point, NOT logical operator)

Description Put an exclamation point (!) before a term to negate it. For example, the query below returns nodes that do NOT (!) belong to 192.168.254.0/24.
Example nodes | where ip !in_subnet? 192.168.254.0/24 | count

Operator ->

Description To change a column name, select it and use the -> operator followed by the new name. It is worth noting that specific suffixes are parsed and used to visualize the column content differently. For example:
• _time data is shown in a timestamp format (1647590986549 becomes 2022-03-18 09:09:46.549)
• _bytes adds KB or MB, as applicable (50 becomes 50.0 B)
• _percent adds a percentage sign (50 becomes 50%)
• _speed adds a throughput speed in Mb/s (189915 becomes 1.8 Mb/s)
• _date converts numbers into a date format (2022-06-22 15:43:31.297 becomes 2022-06-22; 14:24:09.280 becomes 2022-06-24 (current day))
• _packets adds pp after the number of packets (50 becomes 50 pp)
Example 1 nodes | select created_at created_at->my_integer | where my_integer > 946684800000
Example 2 nodes | select created_at->my_creation_time
Example 3 nodes | select tcp_retransmission.bytes->my_retrans_bytes

Operators ==, =, <, >, <=, and >=


Description Queries support the mathematical operators listed above.

Operator " (Quotation marks)

Description Use quotation marks (") to specify an empty string. Consider these two cases where this technique is useful:
• Finding non-empty values. Example 1 below returns assets where the os field is not blank.
• Specifying that a value in the query is a string (if its type is ambiguous). Example 2 below tells concat to treat the "--" parameter as a fixed string to use rather than as a field from the alerts table.
Example 1 assets | where os != ""
Example 2 alerts | select concat(id_src,"--",id_dst)

Operator in?
in? is only used with arrays; the field type must be an array. The query
looks for the text strings you specify using in? and returns arrays that
Description match one of them.
The example below uses in? to find any node having computer or
printer as elements in the array.

Example assets | where type in? ["computer","printer_scanner"]

Operator include?
The query looks for the text string you specify using include? and returns
strings that match it.
Description
The example below uses include? to find assets where the os field
contains the string Win.

Example assets | where os include? Win
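These operators can be combined in a single query. For example, the following sketch, which reuses only fields already shown in the examples above, counts the nodes with the consumer role that are outside the 192.168.254.0/24 subnet:

nodes | where roles include? consumer | where ip !in_subnet? 192.168.254.0/24 | count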



Commands
Here is the complete list of commands:

Syntax select <field1> <field2> ... <fieldN>

Parameters • the list of field(s) to output

The select command takes all the input items and outputs them with only
Description
the selected fields

Syntax exclude <field1> <field2> ... <fieldN>

Parameters • the list of field(s) to remove from the output

The exclude command takes all the input items and outputs them without
Description
the specified field(s)

Syntax where <field> <==|!=|<|>|<=|>=|in?|include?|start_with?|end_with?|in_subnet?> <value>

• field: the name of the field to which the operator will be applied
• operator
• value: the value used for the comparison. It can be a number, a string, or
other data type. Advanced operators can use other data types, such as:
Parameters
• a list (using JSON syntax) when using the in? operator, for example:
nodes | where ip in? ["172.18.41.44"]
• another property when using the '$' symbol, for example: nodes |
where ip != $id

The where command will send to the output only the items which fulfill the
Description specified criterion, many clauses can be concatenated using the boolean OR
operator

• nodes | where roles include? consumer OR zone ==


office
• nodes | where ip in_subnet? 192.168.1.0/24
Example • <value> can also be another <field>, as in:
links | where from_zone == $to_zone | select from_zone
to_zone

Syntax sort <field> [asc|desc]

• field: the field used for sorting


Parameters
• asc|desc: the sorting direction

The sort command will sort all the items according to the field and the
Description direction specified, it automatically understands if the field is a number or a
string

Syntax group_by <field> [ [avg|sum] [field2] ]

• field: the field used for grouping


Parameters
• avg|sum: if specified, the relative operation will be applied on field2

The group_by command will output a grouping of the items using the field
value. By default the output will be the count of the occurrences of distinct
Description
values. If an operator and a field2 are specified, the output will be the
average or the sum of the field2 values

Syntax head [count]

Parameters • count: the number of items to output

The head command will take the first count items, if count is not specified
Description
the default is 10

Syntax uniq [<field1> <field2> ... <fieldN>]

Parameters • an optional list of fields on which to calculate the uniqueness

Description The uniq command will remove from the output the duplicated items

Syntax expand <field>

Parameters • field: the field containing the list of values to be expanded

The expand command will take the list of values contained in field and for
Description each of them it will duplicate the original item substituting the original field
value with the current value of the iteration

Syntax expand_recursive <field>

Parameters • field: the field to be recursively expanded

The expand_recursive command will recursively parse the content of


field, expanding each array or json structure until a scalar value is found.
It generates a new row for each array element or json field. For each new
Description
row, it duplicates the original item substituting the original field value with
the current value of the iteration and adding a new field that represents the
current iteration path from the root

Syntax sub <field>

Parameters • field: the field containing the list of objects

Description The sub command will output the items contained in field

Syntax count
Parameters
Description The count command outputs the number of items

Syntax pie <label_field> <value_field>

• label_field: the field used for each slice label


Parameters • value_field: the field used for the value of the slice, must be a numeric
field

The pie command will output a pie chart according to the specified
Description
parameters

Syntax column <label_field> <value_field ...>



• label_field: the field used for each column label


Parameters
• value_field: one or more field used for the values of the columns

The column command will output a histogram; for each label a group
of columns is displayed with the value from the specified value_field(s).
Description
The variant column_colored_by_label returns bars of different colors
depending on their labels.

Syntax history <count_field> <time_field>

• count_field: the field used to draw the Y value


Parameters
• time_field: the field used to draw the X points of the time series

The history command will draw a chart representing an historic series of


Description
values

Syntax distance <id_field> <distance_field>

• id_field: the field used to tag the resulting distances.


Parameters
• distance_field: the field on which distances are computed among entries.

The distance command calculates a series of distances (that is,


differences) from the original series of distance_field. Each distance
value is calculated as the difference between a value and its subsequent
occurrence, and tagged using the id_field.
Description
For example, assuming we're working with an id and a time field, entering
alerts | distance id time returns a table where each distance entry
is characterised by the from_id, to_id, and time_distance fields that
represent time differences between the selected alerts.

Syntax bucket <field> <range>

• field: the field on which the buckets are calculated


Parameters
• range: the range of tolerance in which values are grouped

The bucket command will group data in different buckets, different records
Description will be put in the same bucket when the values fall in the same multiple of
<range>

Syntax join <other_source> <field> <other_source_field>

• other_source: the name of the other data source


• field: the field of the original source used to match the object to join
Parameters
• other_source_field: the field of the other data source used to match the
object to join

The join command will take two records and will join them in one record
Description
when <field> and <other_source_field> have the same value

Syntax gauge <field> [min] [max]

• field: the value to draw


Parameters • min: the minimum value to put on the gauge scale
• max: the maximum value to put on the gauge scale

Description The gauge command will take a value and represent it in a graphical way

Syntax value <field>

Parameters • field: the value to draw

Description The value command will take a value and represent it in a textual way

Syntax reduce <field> [sum|avg]

• field: the field on which the reduction will be performed


Parameters
• sum or avg: the reduce operation to perform, it is sum if not specified

The reduce command will take a series of values and calculate a single
Description
value

Syntax size(<field>)

Parameters • field: the field to calculate the size of

If the field is an array, then the size function returns the number of entries
in the array. If the field contains a string, then the size function returns the
number of characters in the string.
Description
Note: The size function may only be used on the following data sources:
alerts, assets, captured_files, links, nodes, packet_rules, sessions,
stix_indicators, subnets, variables, yara_rules, zones, and zone_links.

Example: assets | where size(ip) > 1
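To see several of these commands working together, here is a sketch that lists the five nodes that received the most traffic and renders them as a histogram; it only uses the sent.bytes and received.bytes fields already shown in the Overview examples:

nodes | sort received.bytes desc | head 5 | column ip received.bytes sent.bytes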

Nodes-specific commands reference

Syntax where_node <field> <==|!=|<|>|<=|>=|in?|include?|exclude?|start_with?|end_with?> <value>

• field: the name of the field to which the operator will be applied
• operator
Parameters • value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax), the query engine will understand the
semantics.

The where_node command will send to the output only the items which
fulfill the specified criterion, many clauses can be concatenated using
the boolean OR operator. The where_node command is similar to the
Description where command, but the output will also include all the nodes that are
communicating directly with the result of the search.
Note: This command is only applicable to the nodes table.

Syntax where_link <field> <==|!=|<|>|<=|>=|in?|include?|exclude?|start_with?|end_with?> <value>

• field: the name of the links table's field to which the operator will be
applied.
• operator
Parameters
• value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax) the query engine will understand the
semantics.

The where_link command will send to the output only the nodes which
are connected by a link fulfilling the specified criterion. Many clauses can be
Description concatenated using the boolean OR operator.
Note: This command is only applicable to the nodes table.

graph [node_label:<node_field>]
Syntax [node_perspective:<perspective_name>]
[link_perspective:<perspective_name>]

• node_label: add a label to the node, the label will be the content of the
specified node field
• node_perspective: apply the specified node perspective to the resulting
graph. Valid node perspective values are:
• roles
• zones
• transferred_bytes
• not_learned
• public_nodes
• reputation
Parameters • appliance_host
• link_perspective: apply the specified link perspective to the resulting
graph. Valid link perspectives are:
• transferred_bytes
• tcp_firewalled
• tcp_handshaked_connections
• tcp_connection_attempts
• tcp_retransmitted_bytes
• throughput
• interzones
• not_learned

Description The graph command renders a network graph by taking some nodes as input.
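As a combined sketch of the nodes-specific commands (assuming modbus is one of the protocol values observed on your links), the following selects the nodes connected by Modbus links, labels each node with its IP address, and applies the roles perspective:

nodes | where_link protocol == modbus | graph node_label:ip node_perspective:roles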

Link Events-specific commands reference

Syntax availability
Parameters
The availability command computes the percentage of time a link is
Description UP. The computation is based on the link events UP and DOWN that are
seen for the link.

Syntax availability_history <range>

• range: the temporal window in milliseconds to use to group the link


Parameters
events

The availability_history command computes the percentage of time


Description a link is UP by grouping the link events into many buckets. Each bucket will
include the events of the temporal window specified by the range parameter.

Syntax availability_history_month <months_back> <range>

Parameters
• months_back: number of months to go back in regards to the current month to group the link events
• range: the temporal window in seconds to use to group the link events

Description The availability_history_month command computes the percentage of time a link is UP by grouping the link events into many buckets. Each bucket will include the events of the temporal window specified by the range and months parameters.
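A hedged example of how these commands might be used: assuming link_events exposes a protocol field, as the links table does, the following sketch plots the hourly availability of the observed Modbus links (3600000 milliseconds = 1 hour):

link_events | where protocol == modbus | availability_history 3600000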

Functions
Please note that functions are always used in conjunction with other commands, such as select. In
the following examples, functions are shown in bold:
• Combining functions with select: nodes | select id type color(type)
• Combining functions with where: nodes | where size(label) > 10
• Combining functions with group_by: nodes | group_by size(protocols)
Here is the complete list of functions:

Syntax abs(<field>)

Parameters • the field on which to calculate the absolute value

Description The abs function returns the absolute value of the field

Syntax bitwise_and(<numeric_field>,<mask>)

• numeric_field: the numeric field on which apply the mask


Parameters
• mask: a number that will be interpreted as a bit mask

The bitwise_and function calculates the bitwise & operator between the
Description
numeric_field and the mask entered by the user

Syntax coalesce(<field1>,<field2>,...)

Parameters • a list of fields or string literals in the format "<chars>"

Description The coalesce function will output the first value that is not null

Syntax color(<field>)

Parameters • field: the field on which to calculate the color

Description The color function generates a color in the rgb hex format from a value
Note Only available for nodes, links, variables and function_codes

Syntax concat(<field1>,<field2>,...)

Parameters • a list of fields or string literals in the format "<chars>"

Description The concat function will output the concatenation of the input fields or values
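An illustrative example, assuming the id and label node fields used elsewhere in this chapter:
Example nodes | select concat(id," - ",label)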

Syntax date(<time>)

Parameters • time defined as unix epoch

Description The date function returns a date from a raw time

Syntax day_hour(<time_field>)

Parameters • time_field: the field representing a time

Description The day_hour function returns the hour of the day plus the sensor's local time offset from UTC, i.e. a value in the range 0 through 23. Be careful when accounting for daylight saving time. Use day_hour_utc when absolute precision is desired
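An illustrative example, grouping link events by the hour of the day in which they occurred (the time field is the one used in the link_events examples later in this chapter):
Example link_events | group_by day_hour(time)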

Syntax day_hour_utc(<time_field>)

Parameters • time_field: the field representing a time

Description The day_hour_utc function returns the hour of the day expressed in UTC for the current time field, i.e. a value in the range 0 through 23

Syntax days_ago(<time_field>)

Parameters • time_field: the field representing a time

Description The days_ago function returns the number of days passed between the current time and the time field value

Syntax dist(<field1>,<field2>)

Parameters • the two fields to compute the distance on

Description The dist function returns the distance between field1 and field2, which is the absolute value of their difference
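An illustrative example, showing how far apart the sent and received byte counters of each node are:
Example nodes | select id dist(sent.bytes,received.bytes)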

Syntax div(<field1>,<field2>)

Parameters • field1 and field2: the two fields to divide

Description The div function will calculate the division field1/field2
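An illustrative example, computing the ratio between sent and received bytes for each node (assuming received.bytes is not zero):
Example nodes | select id div(sent.bytes,received.bytes)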

Syntax hours_ago(<time_field>)

Parameters • time_field: the field representing a time

Description The hours_ago function returns the number of hours passed between the current time and the time field value
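An illustrative example, keeping only the link events seen in the last 24 hours:
Example link_events | where hours_ago(time) < 24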

Syntax is_empty(field) == true | false

Parameters • field: the field to check to evaluate whether it is empty or not

Description The is_empty function takes a field as input and, combined with where, returns only the entries for which the field is empty (or not empty).
Example nodes | where is_empty(label) == false

Syntax is_recent(<time_field>)

Parameters • time_field: the field representing a time

Description The is_recent function takes a time field and returns true if the time is not older than 30 minutes

Syntax minutes_ago(<time_field>)

Parameters • time_field: the field representing a time

Description The minutes_ago function returns the number of minutes passed between the current time and the time field value

Syntax mult(<field1>,<field2>,...)

Parameters • a list of fields to multiply



Description The mult function returns the product of the fields passed as arguments

Syntax round(<field>,[precision])

Parameters
• field: the numeric field to round
• precision: the number of decimal places

Description The round function takes a number and outputs the rounded value

Syntax seconds_ago(<time_field>)

Parameters • time_field: the field representing a time

Description The seconds_ago function returns the number of seconds passed between the current time and the time field value

Syntax split(<field>,<splitter>,<index>)

Parameters
• field: the field to split
• splitter: the character used to separate the string and produce the tokens
• index: the 0-based index of the token to output

Description The split function takes a string, separates it and outputs the token at the <index> position
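An illustrative example, extracting the first dot-separated token of each node id (the first octet, assuming the id is an IP address):
Example nodes | select id split(id,".",0)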

Syntax sum(<field>,...)

Parameters • a list of fields to sum

Description The sum function returns the sum of the fields passed as arguments

Examples

Creating a pie chart


In this example we will create a pie chart to understand the MAC vendor distribution in our network. We
choose nodes as our query source and we start to group the nodes by mac_vendor:

nodes | group_by mac_vendor

We can see the list of the vendors in our network associated with the occurrences count. To better
understand our data we can use the sort command, so the query becomes:

nodes | group_by mac_vendor | sort count desc

In the last step we use the pie command to draw the chart with the mac_vendor as a label and the
count as the value.

nodes | group_by mac_vendor | sort count desc | pie mac_vendor count

Creating a column chart


In this example we will create a column chart with the top nodes by traffic. We start by getting the
nodes and selecting the id, sent.bytes, received.bytes and the sum of sent.bytes and received.bytes.
To calculate the sum we use the sum function; the query is:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)

If we execute the previous query, we notice that the sum field has a very long name; we can rename it
to make the next commands easier to write:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum

To obtain the top nodes by traffic we sort and take the first 10:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum | sort sum desc | head 10

Finally we use the column command to display the data in a graphical way:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum | sort sum desc | head 10 | column id sum sent_bytes received_bytes

Note: You can access an inner field of a complex type with the dot syntax, in the example the dot
syntax is used on the fields sent and received to access their bytes sub field.
Note: After accessing a field with the dot syntax, it will gain a new name to avoid ambiguity; the dot is
replaced by an underscore. In the example sent.bytes becomes sent_bytes.

Using where with multiple conditions in OR


With this query we want to get all the nodes with a specific role, in particular we want all the nodes
which are web or DNS servers.
With the where command it is possible to achieve this by writing many conditions separated by OR.
Note: The roles field contains a list of values, thus we used the include? operator to check if a value
was contained in the list.

nodes | where roles include? web_server OR roles include? dns_server | select id roles

Using bucket and history


In this example we are going to calculate the distribution of link events towards an IP address. We start
by filtering all the link_events with id_dst equal to 192.168.1.11.
After this we sort by time; this is a very important step because bucket and history depend on how
the data are sorted.
At this point we group the data by time with bucket. The final step is to draw a chart using the
history command; we pass count as the value for the Y axis and time for the X axis.
The history command is particularly suited for displaying a large amount of data; in the image below we
can see that there are many hours of data to analyze.

link_events | where id_dst == 192.168.1.11 | sort time asc | bucket time 36000 | history count time

Using join
In this example we will join two data sources to obtain a new data source with more information. In
particular we will list the links with the labels for the source and destination nodes.
We start by asking for the links and joining them with the nodes by matching the from field of the links
with the id field of the nodes:

links | join nodes from id

After executing the query above we will get all the links fields plus a new field called
joined_node_from_id, it contains the node which satisfies the link.from == node.id
condition. We can access the sub fields of joined_node_from_id by using the dot syntax.
Because we want to get the labels also for the to field of the links we add another join and we
exclude the empty labels of the node referred by to to get more interesting data:

links | join nodes from id | join nodes to id | where joined_node_to_id.label != ""

We obtain a huge amount of data which is difficult to understand; use a select to keep only the
relevant information:

links | join nodes from id | join nodes to id | where joined_node_to_id.label != "" | select from joined_node_from_id.label to joined_node_to_id.label protocol

Computing availability history


In this example we will compute the availability history for a link. In order to achieve a reliable
availability it is recommended to enable the "Track availability" feature on the desired link.

We start from the link_events data source, filtered by source and destination IP in order to precisely
identify the target link. Consider also filtering by protocol to achieve a higher degree of precision.

link_events | where id_src == 10.254.3.9 | where id_dst == 172.31.50.2

The next step is to sort the events by ascending time of creation. Without this step the
availability_history might produce meaningless results, such as negative values. Finally, we compute
the availability_history with a bucket of 1 minute (60000 milliseconds). The complete query is as
follows.

link_events | where id_src == 10.254.3.9 | where id_dst == 172.31.50.2 | sort time asc | availability_history 60000

Note: link_events generation is disabled by default; to enable it, use the configuration rule described in
Configuring links

Query complex field types


Complex field types are typically one of the following:
1. Single, scalar values
To query them: Apply the commands as explained in the chapter.
2. Objects
How to recognize them: They appear as an object included in a {..} :

{
"source": "ARP",
"likelihood": 1,
"likelihood_level": "confirmed"
}

Example: How to query only 'confirmed' MAC addresses (possible values are confirmed, likely,
not confirmed)? Since mac_address:info is an object, the user can access subfields like
mac_address:info.likelihood_level to apply the "where" condition:

nodes | select mac_address:info mac_address:info.likelihood_level | where mac_address:info.likelihood_level == confirmed
3. Arrays (e.g. parent in the alerts table)
How to recognize them: They appear as an array included in a [..] :

[
"5b867836-2b41-4c15-ab6f-4ae5f0251e30"
]

Example: How to query only alerts having a parent incident with a known incident id having value
"d36d0"? Since "parents" field is an array, use expand first to get an entry for each parent, then
apply your condition:

alerts | expand parents | where expanded_parents include? d36d0


4. Object arrays (e.g. function_codes in the links table)
How to recognize them: They are a combination of the above, and therefore appear as an object
included in a [{..},{..},.. ] :

[
{
"name": "M-SEARCH",
"is_learned": true,
"is_fully_learned": true
}
]

Example: How to query learned function codes? Since function_codes is an object array, use expand first to get an entry for each function code, then use the "." operator (expanded_function_codes.is_learned) to apply your "where" condition:

links | select from to protocol function_codes | expand function_codes | where expanded_function_codes.is_learned == true
Chapter 12: Maintenance
This chapter describes how to maintain the Nozomi Networks solution. It includes information on how to: perform backups, restore the system, reboot the system, and shut down the system. These operations can be performed from the sensor shell console or from the Web UI.

Topics:
• System overview
• Data backup and restore
• Reboot or shutdown
• Software update and rollback
• Data factory reset
• Full factory reset with data sanitization
• Host-based intrusion detection system
• Action on log disk full usage
• Support

System overview
The Nozomi Networks solution includes partitions and a filesystem structure, along with core system
services, to help administer and maintain the solution in a production environment.

Partition and filesystem layout


The standard disk structure of the Nozomi Networks Operating System (N2OS) consists of four partitions:

N2OS - First partition (/) Main partition that includes a copy of the operating system. The operating system runs from this partition. Two different partitions deliver fast switching between a running release and the new version.
N2OS - Second partition (/) Partition that coordinates with the first partition to provide reliable update paths.
OS Configuration partition (/cfg) Partition on which files are copied at the start of the bootstrap process. Contains low-level OS configuration files, such as network configurations, shell admin users, SSH keys, etc.
Data partition (/data) Partition that contains all user data, such as learned configuration, user-imported data, traffic captures, and persistent database.

Figure 222: N2OS standard partition table

The data partition includes these sub-folders:


• cfg: Includes all automatically learned and user-provided configurations. Two main configuration
files are stored here:
• n2os.conf.gz: for automatically learned configurations
• n2os.conf.user: for additional user-provided configurations.
• data: Working directory for the embedded relational database, used for all persistent data
• traces: Includes all traces, which are rotated when necessary
• rrd: Includes aggregated network statistics, which are used for traffic data, among other things.

Core services
System services used for proper configuration and troubleshooting include:
• n2osids: main monitoring process that can be controlled with:

service n2osids <operation>

(<operation> can be either start or stop. After a stop, the service restarts automatically. This
holds true for every service.) The log files start with n2os_ids* and are located under /data/
log/n2os.

• n2ostrace: tracing daemon that can be controlled with:

service n2ostrace <operation>

The log files start with n2os_trace* and are located under /data/log/n2os.
• n2osva: asset Identification and Vulnerability Assessment daemon that can be controlled with:

service n2osva <operation>

The log files start with n2os_va* and are located under /data/log/n2os.
• n2ossandbox: file sandbox daemon that can be controlled with:

service n2ossandbox <operation>

The log files start with n2os_sandbox* and are located under /data/log/n2os.
• nginx: web server behind the web interface that provides the secure HTTPS service, and can be
controlled with:

service nginx <operation>

Use of these services requires that you obtain permission using the enable-me command. For
instance, the following commands allow you to restart the n2osids service:

enable-me
service n2osids stop

Several other tools and daemons are running in the system to deliver N2OS functionality.

Data backup and restore


There are several methods available for backing up the system and subsequently restoring it. Note that
a backup contains just the data -- the system software is left untouched.
Two different types of backup are available: Full Backup and Environment Backup. The former
contains all data, while the latter lacks historical data, extended configurations, and other information.
Both can be executed while the system is running. Environment Backup can be used to restore the
most important part of the system on another sensor for analysis, or as a delta backup when a full
backup is available.

Full Backup
You can perform a full backup from either the sensor shell console or the Web UI.
Background information
When you perform a full backup, only the following traces are retained:
• Those generated via Request custom trace (from Other actions)
• Those generated via Request a trace (from Nodes and Links)
• Those from Alerts

Shell console
From the shell console, execute the following command to create a new backup:

n2os-fullbackup

You can now download the backup file.



For example:

scp admin@<sensor_ip>:/data/tmp/
<backup_hostname_date_version.nozomi_backup> .

Web UI

To perform a backup, click the gear ( ) icon, then System > Backup/Restore.
You can perform the following functions from the Backup/Restore screen:
• Generate Backup Archive (click the Download button)
• Restore Previous Backup (select a previous backup to restore and upload the file)
• Schedule Backup Archive generation (schedule a backup for a chosen date or recurrence by
clicking the Schedule backup button)

Figure 223: Backup/Restore


When scheduling a backup, you can configure:
• frequency of recurrences
• maximum number of backups to retain
• location to store backup files, with locations being either local or remote:
  • If local, backup files are stored in the /data/backups/ folder on the sensor.
  • If remote, backup files are stored on a dedicated host. This host provides a user/password authentication method using a listed protocol. You must have folder permission to list, read, and write to store the backup files. During the backup process, the backup file generates locally, then uploads to the remote folder. When restored, the backup file first downloads from the remote folder and is then used in the restore process.

When a new scheduled backup is generated, the system confirms that the maximum number of backups
will not be exceeded and, if needed, eliminates the oldest backup.
You can choose a remote location for storing backups, using a protocol, such as SMB, FTP, or SSH/
SCP (SSH/SFTP is not supported).
Important: The smb remote backup is supported only for use with Microsoft operating systems.
Compatibility with third-party devices is not guaranteed. These devices may require additional
configuration changes, including, but not limited to, permission changes, creation of new network
shares, and the creation of new users. Kerberos authentication is not supported.
Note: By default, traces are not included in backups. You can include traces by checking the Include
traces option, which is also available for scheduling. Continuous traces are not included in the Include
traces option.

Full Restore
You can perform a full restore from the sensor shell console or from the Web UI.

Shell console
In order to restore from a full backup you may do the following from the shell console:

1. Use the SFTP or SCP commands to copy the backup archive from the
location where it was saved, to the admin@<appliance_ip>:/data/tmp/
<backup_hostname_date_version.nozomi_backup> path of the sensor.
For example, enter the command:

scp <backup_location_path>/<backup_hostname_date_version.nozomi_backup>
admin@<appliance_ip>:/data/tmp/
<backup_hostname_date_version.nozomi_backup>
2. Go to the shell console and execute this command:

n2os-fullrestore /data/tmp/<backup_hostname_date_version.nozomi_backup>

Note: Use the --etc_restore option to restore the files from the /etc folder; the feature can be
used with a backup produced from version 20.0.1 and newer.

Web UI
If automatically scheduled backups are present on the disk, they are listed in the Restore Previous
Backup part of the table.

For each entry the following actions can be performed:

Download the backup file


This action starts the download process for the selected backup file. The file can be used manually for
the Full Restore process.

Restore the selected backup file


This action starts the Full Restore process using the selected backup file.

Delete the selected backup file


This action will delete the selected backup file from the disk.
Finally, it is possible to upload a backup archive from your local machine, for instance a backup
previously produced via the command line or downloaded from the Web UI.

Environment backup
You can create an environment backup of an existing installation from the sensor shell console.
1. From the shell console, issue the save command.
2. Use the SFTP command to copy the content of the /data/cfg folder to a safe place.
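For example (illustrative; adjust the sensor address and the destination path to your environment), from your workstation:

scp -r admin@<sensor_ip>:/data/cfg ./cfg_backup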

Environment restore
You can perform a Nozomi Networks solution environment restore on an existing installation from the
sensor shell console.
1. Copy the saved content of the cfg folder to the /data/cfg folder on the sensor.
2. From the shell console, issue the service n2osids stop command.

Reboot or shutdown
You can reboot or shutdown the system from either the Web UI or the sensor shell console.
The reboot and shutdown commands are performed from the Web UI. Alternatively, both commands
can be performed from the shell console (inside an SSH session).

Web UI

To reboot or shut down the system, click the gear ( ) icon, then go to System > Operations. Then, select either
the Reboot or Shutdown button.

Shell console
• Reboot the system using the following command:

enable-me
shutdown -r now

• Shutdown the system using the following command:

enable-me
shutdown -p now

Software update and rollback


The Nozomi Networks Operating System (N2OS) can be updated to a newer release or rolled back to a
previous release on physical or virtual sensors.
Whether updating or rolling back, we suggest that you always have a Full backup of the system in a
safe place.

Updating
The update file applies to both Guardian and CMC, and works for all physical and virtual deployments,
making the update experience frictionless. Use different update commands and procedures for the
container architecture (see Installing the container on page 15 for additional information).
Note: You cannot update to a version that is more than one major version ahead of your current
version (e.g., 23.0.0 -> 25.0.0). Before updating a sensor, refer to the Update remarks topic in the
Release Notes, which recommends update paths.
Important: Manually scheduled updates apply regardless of whether the version is locked or the
update policy is disabled. You can schedule updates from the Web UI: click the gear ( ) icon, then go
to System > Operations > New scheduled update.
Important: If either of the following conditions is met, automatic sensor updates will not occur:
• In the CMC or Vantage, the sensor is locked. See Lock the sensor software version in the
Sensors list on page 326 for additional information.
• In the CMC, the sensor Update Policy is set to Do not update sensors. (From the CMC, click the
gear ( ) icon, then go to Settings > Synchronization settings).

Refer to Update: Web UI method on page 312 and Update: Command line method on page 312 for
specific information on updating your sensor.

Rolling back
Rolling back to a previously installed release is transparent, and all data is migrated back to the
previous format.
Refer to Rollback to the previous version on page 313 for specific information on rolling back the
N2OS release on your sensor.

Update: Web UI method


Update the Nozomi Networks solution software in an existing installation from the Web UI.
You must have the new VERSION-update.bundle file that you want to install.
A running system is updated with a more recent N2OS release, as follows:
1. At the Web UI, click the gear ( ) icon, then go to System > Operations.

2. Click Software Update and select the VERSION-update.bundle file.


Note: The system must be at least version 18.5.9 to support the .bundle format. If your system is
running a version lower than 18.5.9 you must first update to 18.5.9 to proceed.
The file is uploaded.
3. Click the Proceed button.
Note: If updating from version 18.5.9, the system prompts you to insert the checksum that is
distributed with the .bundle; the button is enabled only after the checksum is verified.
The update process begins. The update may take several minutes to complete.

Update: Command line method


Update the Nozomi Networks solution software in an existing installation from the sensor shell console
command line.
You must have the new VERSION-update.bundle file that you want to install.
A running system is updated with a more recent N2OS release, as follows:
1. From the shell console, type cd to navigate to the directory where the VERSION-update.bundle file
is located.
2. Copy the VERSION-update.bundle file to the sensor using the following command:

scp VERSION-update.bundle admin@<sensor_ip>:/data/tmp

Note: The system must be at least version 18.5.9 to support the .bundle format. If your system is
running a version lower than 18.5.9 you must first update to 18.5.9 to proceed.
The file is uploaded.
3. Start the installation of the new software with the following commands:

ssh admin@<sensor_ip>

enable-me

install_update /data/tmp/VERSION-update.bundle

Note: If updating from version 18.5.9, the system prompts you to insert the checksum that is
distributed with the .bundle; the update proceeds only after the checksum is verified.
The update process begins, which may take several minutes to complete. Following completion, the
sensor reboots with the new software.

Rollback to the previous version


Rollback to the previous version of the software using these instructions. To rollback to a release that is
older than the previous one, follow the instructions in the Data factory reset section.
You should have performed a release update at least once.
1. From the sensor shell console, type the following command:

rollback

2. Answer y to the confirmation message, and wait while the system is rebooted. All configuration and
historical data is automatically converted to the previous version, and no manual intervention is
required.

Data factory reset


Erase the N2OS data partition to perform a data factory reset. The IP configuration is kept, and the
procedure is safe to execute remotely. Executing this procedure will cause the system to lose all
data!
1. Go to the sensor shell console and execute the command:

n2os-datafactoryreset -y

2. The system restarts with a fresh data partition. Refer to Set up phase 2 (web interface configuration)
on page 20 to complete the configuration of the system.

Data factory reset with sanitization


Completely erase the N2OS data partition, sanitizing the disk space using the U.S. DoD 5220.22-M 7-pass
scheme, by following the instructions in this section.
This process erases the N2OS data partition in accordance with clear guidelines suggested by the
NIST in document 800-88 rev1.
Configurations such as network and console password settings are kept.
Executing this procedure causes the system to lose all data!
1. Go to the sensor shell console and execute the command:

n2os-datasanitize -y

2. The system restarts with a fresh data partition. Refer to Set up phase 2 (web interface configuration)
on page 20 to complete the configuration of the system.

Full factory reset with data sanitization


Erase data from a sensor, and clear disk space using the U.S. DoD 5220.22-M 7-pass scheme, by
following the instructions in this section.
This process erases ALL data inside the sensor in accordance with clear guidelines suggested by the
NIST in document 800-88 rev1.
All data and configurations (e.g., network and console password settings) are permanently deleted.
Reboot the sensor after this procedure. The installed N2OS version remains the same.
Executing this procedure will cause the system to lose all data and configurations!
1. Go to the sensor shell console and execute the command:

n2os-fullfactoryreset -iknowwhatimdoing

2. The system restarts and requires reconfiguration from scratch. Refer to Set up phase 1 (basic
configuration) on page 18 to configure the system.

Host-based intrusion detection system


The Nozomi Networks solution's internal Host-based intrusion detection system (HIDS) detects
changes to the sensor's basic firmware image and records the change.

Host-based intrusion detection system


When a change is detected in the N2OS sensor's basic firmware image, a new event is logged in the
system's Audit log and replicated in Vantage or the CMC.
Default HIDS settings can be changed to best suit your security requirements.

This feature is not available in the container version due to the different security approach.

Parameter                  Default value   Description
hids execution interval    18 hours        HIDS check execution interval
hids ignore files                          Comma-separated list of files to be ignored by HIDS (e.g., /etc/file1, /etc/file2)

Action on log disk full usage


The N2OS solution allows several actions to be taken in order to limit disk usage.
System log files are kept in a dedicated log partition and are automatically rotated in order to limit
disk usage. However, if the log partition fills up, you may want to shut down the sensor.
You can enable this feature by adding the configuration key in the sensor shell console:

conf.user configure shutdown_when_log_disk_full true

A log emergency shutdown will also raise an alert in the sensor health log.
This feature is not available in the container version.

Support
In this section you will learn how to generate the archive needed to request support from Nozomi Networks.
Go to Administration > Support and click the download button; your browser will start
downloading the support archive file. Upload the file to the support case opened via the Nozomi
Networks Support Portal.
The Anonymize option removes sensitive network information from the generated archive.
Note: An anonymized support archive does not contain sensitive information about the network. It
should be used only when the normal archive cannot be shared.
Chapter 13: Central Management Console
In this section we will cover the Central Management Console product, a centralized monitoring variant of the standalone sensor. The main idea behind the Central Management Console is to deliver a unified experience with the sensor; consequently, the two products appear as similar as possible.

Topics:
• Overview
• Deployment
• Settings
• Connecting sensors
• Troubleshooting
• Data synchronization policy
• Data synchronization tuning
• CMC or Vantage connected sensor - Date and Time
• Sensors list
• Sensors map
• Configuring High Availability (HA)
• Alerts
• Functionalities overview
• Updating
• Single-Sign-On (SSO) through the CMC

Overview
The Central Management Console (CMC) has been designed to support complex deployments that
cannot be addressed with a single sensor.
A central design principle behind the CMC is the Unified Experience, which allows you to access information
in the same manner as on the sensor. Some additional functionalities have been added to allow the simple
management of hundreds of sensors, and some other functionalities relying on live traffic availability
have been removed to cope with real-world, geographic deployments of the Nozomi Networks Solution
architectures. In Functionalities overview on page 335 a detailed overview of differences will be
given.
In the sensors page all connected sensors can be seen and managed. A graphical representation of
the hierarchical structure of the connected sensors and the sensors Map are presented to allow a quick
health check on a user-provided raster map. In Sensors list on page 326 and Sensors map on page
329 these functionalities will be explained in detail.
Once sensors are connected, they are periodically synchronized with the CMC. In particular, the
Environment of each sensor is merged into a global Environment and Alerts are received for a
centralized overview of the system. Of course, Alerts can also be forwarded to a SIEM directly from the
CMC, thus enabling a simpler decoupling of components in the overall architecture. To synchronize
data, the sensors must be running the same major release or one of the two prior major ones. For
example, if the CMC is running the version 19.0.x (the major is 19.0), sensors can synchronize if
running one of the following versions: 19.0.x, 18.5.x or 18.0.x.
Firmware update is also simpler with a CMC. Once the new Firmware is deployed to it, all connected
sensors are also automatically updated. In Updating on page 336 an overview of the update process
is provided for the CMC scenario.

Deployment
The first step to set up a CMC is to deploy its Virtual Machine (VM).
The CMC VM can be deployed following the steps provided in Installing the Virtual Machine (VM) on
page 14. The main difference here is that the CMC version of N2OS must be used in the installation.
Another difference is during the Initial Setup phase: you have to locate and configure the management NIC,
but not the sniff interfaces, because the CMC does not have to sniff live traffic.
Note: Nozomi Networks recommends that you use Intel-based hardware when deploying to the cloud.

Deployment to AWS
Before starting, use the Nozomi Networks Support Portal to open a support case to request access
to the CMC Amazon Machine Image (AMI). Include your organization’s AWS account ID and the
AWS region where the AMI is intended to be deployed. To find your AWS ID, refer to Amazon's
documentation on AWS identifiers. Upon receipt, we will grant access to the CMC Amazon Machine
Image (AMI).

Deployment to Microsoft Azure


The Nozomi Networks CMC image has been delivered in a special Azure VHD for use in the Azure
cloud.
Prerequisites
1. The Azure storage account that is to be used must have the capabilities to store Page Blobs. This
is an Azure requirement when uploading vhd images to be used for virtual machines in the Azure
environment.
2. The Azure user performing the installation must have permissions to access the Storage Explorer.
3. Make sure there are well-defined security groups for accessing the virtual machine to be
instantiated in Azure.
4. Nozomi Networks platform images for running on Azure have a number of prerequisites. Please
contact your Nozomi Networks support team for details.
Deploying via the Azure Web UI
1. Log in to the Azure console.
2. Create a resource of type Storage Account if there aren’t any in your subscription (default
values).
Make sure the Storage Account type supports Page Blobs.
3. Select the Storage Account and Storage Explorer (preview) > Blob Container from the
menu.
Make sure the Azure user has permissions to access the Storage Explorer
4. Create a Blob Container if it doesn’t exist and select Upload for the VHD
5. When the upload is completed, from Azure home select Create a resource and choose
Managed Disks with following settings:
• SourceType=Storage Blob
• select the Nozomi Networks VHD as SourceBlob
• Size = <deployment size>
• OS = Linux, Gen1
• Leave the other parameters with their defaults
6. Once the disk is created, select it:
• click +Create VM
• choose required CPU and RAM
• Network Firewall rules - allow SSH, HTTPS and HTTP
• In the Management tab, for Boot Diagnostics select Enable with custom storage
account, then choose or create a Diagnostics storage account
• Leave the other parameters with their defaults

7. Once the virtual machine is created, select it, scroll down to the Support + troubleshooting
section and select Serial Console.
8. Log in to the console. The default console credential has no password initially and must be changed
upon first login. The console will display a prompt with the text "N2OS - login:". Type admin and
then press [Enter].
9. Elevate the privileges with the command: enable-me
10.Now launch the initial configuration wizard with the command:

env TERM=xterm /usr/local/sbin/setup

Refer to Set up phase 1 (basic configuration) on page 18 to configure the system.


11.Run data_enlarge to expand the disk space

data_enlarge
12. You can log in to the Web UI with:
Username: admin
Password: nozominetworks

Settings
From the Web UI, click the gear ( ) icon, then go to the Settings > Synchronization settings screen
that allows you to customize Vantage or CMC related parameters.

Sync token The token that must be used by all the sensors allowing for
synchronization to the CMC.
Appliance ID The current Appliance ID, also known as CMC ID, which will
be shown in the CMC we want to replicate data with. This
information is also required when connecting a sensor to
Vantage.
CMC context A CMC's context is either Multi-context or All-in-
one. Multi-context indicates that the data gathered from
the sensors connected to the CMC will be collected and
kept separately, whereas All-in-one indicates that the
information will be merged:
• In Multi-context mode, the user can focus on a single
Guardian to access their data in their separate contexts.
This is the default operational mode; it allows the highest
scalability and supports multitenancy (ideal for MSSPs).
• In All-in-one mode, the user gets a unique, merged
Environment section. This configuration is recommended for
smaller and cohesive environments
The pages Alerts and Assets are common to both modes.

Sensor update policy Determines whether the sensors connected to the CMC will
automatically receive updates when a new version of the
software is available.
Remote access to connected sensors Enables/disables remote access to a sensor by passing through the CMC.
Allow remote to replicate on this CMC When a CMC attempts to replicate data on the current CMC, its Sync ID is shown in the corresponding text-field. This validates that the CMC that is trying to replicate is really the one that you intended to work with.
HA (High Availability) The High Availability mode allows the CMC to replicate its own
data on another CMC. In order to activate it, you must insert
the other CMC Host and Sync Token.

Connecting sensors
To start connecting a sensor to a CMC, open the web console of the CMC and go to Settings on page 321.
Copy the Sync Token, which you will need for configuring the sensor.
To connect a sensor to the CMC you can use the Upstream connection section on the same page.
In this section you can enter the parameters to connect the sensor:

Host The CMC host address (the protocol used will be https). If no CA-emitted
certificates are used you can make the verification of certificates optional.
Sync token The Synchronization token necessary to authenticate the connection, the
pair of tokens can be generated from the CMC.
Use proxy connection Enables connecting to the CMC through a proxy server.

After entering the endpoint and the Sync token, the Check connection button indicates whether the pairing
between the CMC and the sensor is valid. Click Save to keep the configuration, then open the web
console of the CMC and navigate to Sensors on the main menu.

The table will list all the connected sensors. When a sensor is connected for the first time, it will notify
its status and receive Firmware updates. However, it will not be allowed to perform additional actions.
To enable a complete integration of the sensor you will need to "allow" it (see Sensors list on page
326 for details).
To configure the synchronization intervals between a sensor and the CMC see Configuring
synchronization on page 434.

Troubleshooting
In this section a list of the most useful troubleshooting tips for the CMC is given.
1. If the sensor is not appearing at all in the CMC:
• Ensure that firewall(s) between the sensor and the CMC allows traffic on TCP/443 port (HTTPS),
with the sensor as Source and the CMC as the Target
• Check that the tokens are correctly configured both in the sensor and the CMC
• Check in the /data/log/n2os/n2osjobs.log file for connection errors.
2. The Sensor ID is stored in the /data/cfg/.appliance-uuid file. Do not edit this file after the
sensor is connected to the CMC or Vantage, since it is the unique identifier of the sensor inside
the CMC and Vantage. In case a forceful change of the Appliance ID is needed, you will need to
remove the old data from the CMC or Vantage by removing the old Appliance ID entry.
3. If an issue occurs during the setup of a sensor, follow the instructions at Sensors list on page 326
to completely delete the sensor or just to clear its data from the CMC or Vantage.

Data synchronization policy


This topic describes centralized configuration available for CMCs and Guardians.

CMC and Guardian deployments each have their own configurations. To simplify management of
sensors connected to an upstream sensor, centralized configuration is available for:
• Users and user groups (see also Users on page 31 for additional details)
• Alert rules (see also Alerts for additional details)
• Zone configurations (see also Zone configurations on page 180 for additional details)

To configure Vantage/CMC parameters, within CMC, go to the gear ( ) icon in the upper right corner,
then go to Settings > Synchronization settings > Policy tab to customize specific settings. The
Synchronization settings screen displays.

Figure 224: Synchronization settings

For details, see the next sections.

Users and user groups


Admin users can specify which users and user groups will be propagated to connected sensors.
All roles described in the SAML response are used and mapped to the defined user groups on the
Guardian/CMC sensor.
1. Go to Administration > Settings > Users to access the synchronization settings. The Users
Management screen displays.

Figure 225: Users management


2. Select the Groups tab.
3.
Select the Edit icon ( ) for each user group to be propagated to the connected sensor. The Edit
group popup displays.
a. Make any changes to the Name field, if needed.
b. Make any changes to the External UUID field, if needed.
4. In the Propagate this users group to all of the connected sensors field, toggle the button to
enable (green)/disable (gray). By default, this field is set to disable (gray).

The following constraints apply if you enable synchronization:


• Users and user groups that arrived in the Guardian from the CMC cannot be modified.
• Users and user groups created in the Guardian are not synced with the CMC.
• If name conflicts exist, users and user groups in Guardian are overwritten with those from the CMC.
For details, see Users on page 31.

Alert rules
Specify a synchronization policy for alert rules from the CMC.
1. If specifying a synchronization policy for alert rules, within CMC, go to Administration > Settings >
Synchronization settings > Policy tab. The Synchronization settings page displays.

Figure 226: Synchronization settings


2. From the Alert Tuning execution policy, select a policy from the Local prevails field dropdown
menu. Alert synchronization can be performed with one of the following policies:
• Upstream only: Alert rules are controlled by top CMC or by Vantage. Local rules are ignored.

• Upstream prevails: If multiple alert rules that perform the same action match an alert, only the ones
received from upstream are executed. Mute actions created in Guardian are ignored if at least one
rule received from upstream matches the alert.
• Local prevails: If multiple alert rules exist that perform the same action to match an alert, only the
rules created in Guardian are executed. Mute actions received from upstream are ignored if at least
one local rule matches the alert.
For details, see Security Control Panel.

Zone configurations
Specify a synchronization policy for zone configurations from the CMC.
1. In CMC, go to Administration > Settings > Synchronization settings > Policy tab to access
zones. The Synchronization settings page displays.

Figure 227: Synchronization settings


2. From the Zone configuration definition policy, select a policy from the Local only field dropdown
menu. Zone synchronization can be performed with one of the following policies:
• Upstream only: Zone configurations are controlled by top CMC or by Vantage. Local zones are
ignored.
• Local only: Zone configurations are controlled by Guardian. Zones received from upstream are
ignored.
For details, see Zone configurations.

Data synchronization tuning


Configure synchronization in the CMCs in the Tuning tab of the Administration >
Synchronization settings page. You can enable or disable the synchronization for the following
entities:
• Alerts.
• Assets.
• Zone configurations.
• Audit items.
• Health logs.
• Environment (Nodes, links and variables).
• SmartPolling (plans, executions, status information).
• Node Points (SmartPolling information).
The Environment option is only visible in CMCs where context is set to All-in-one.

The configuration is applied only to sensors directly connected to the CMC in which the configuration
has been set. If the CMC has an HA connected, the tuning must be configured in both the CMCs.
Disabling synchronization for an entity will cause the deletion of all the items already received.

CMC or Vantage connected sensor - Date and Time


Note that when a sensor is attached to a CMC or to Vantage, its date and time cannot be manually set
as described in Date and time on page 186. Sensors connected to a CMC or Vantage (and with no NTP
configured) will automatically get time synchronization from the parent CMC or Vantage.

Sensors list
The sensors section shows the complete list of sensors connected to the current CMC. For each
sensor, you can see some information about its status (system, throughput, alerts, license and running
N2OS version).

Actions on sensors:

Allow/Disallow a sensor

After allowing a sensor (an allowed sensor has the icon)


• Nodes, Links and Variables coming from the sensor become part of the Environment of the CMC.
• Alerts coming from the sensor can be seen in the Alerts section.

Focus on sensor
Allows you to filter only the chosen sensor's data, such as Alerts and Environment.

Remote connect to a sensor


Connect to a remote sensor directly from the CMC. Click on this action to open a new browser tab to
the selected sensor's login page. The action is hidden if the CMC isn't configured to allow this type of
communication between sensors and the CMC; to enable it, go to Settings on page 321.

Place a sensor on the map


This action is used to place the sensor on the map (if you did not upload a map, go to Sensors map
on page 329). Choose the position of the selected sensor by clicking on the map, then click Save.

Lock the sensor software version


When locked, the sensor will not automatically update its software.

Force the software update of the sensor


Even if it is locked, the sensor will update its software to the version installed on the
CMC.

Clear data from a sensor


Clear all synchronized data at the CMC received from the selected sensor. Use this in combination with
the clearing of the data on the sensor, and you will be able to restart the synchronization between the
sensor and the CMC from an empty state.

Delete a sensor
Clear all data received from the selected sensor and delete it from the list. If the sensor tries to sync
with the CMC again, it appears disallowed in the list.

Sensors map
In this page you can upload the sensors map by clicking on Upload map and choosing a .jpg file from
your computer.

You can inspect the sensors' information in the Info pane. In the map each sensor is identified by its
own ID. The sensor marker color reflects the risk of its alerts, and next to the ID is the number
of alerts in the last 5 minutes (if greater than 0). If the number of alerts in the last 5 minutes grows,
the sensor marker will blink for 1 minute.

If the site has been specified in the Administration/General section of the sensor, it is possible to
enable the "group by site" option. The sensors with the same site will be grouped to deliver a simpler
view of a complex N2OS installation.

Figure 228: Sensors map with "group by site" enabled



The sensors map is also available as a widget.



Configuring High Availability (HA)


This topic describes how to configure High Availability (HA), a feature that allows a CMC to replicate all
of its data on another replica CMC.
Important information:
• To enable the highest level of resiliency, both CMCs must replicate each other. This is to ensure
that when a CMC stops working, the connected sensors continue to send data to the replica CMC.
• When configuring HA, we recommend that users choose the same admin password for both CMCs
to avoid confusion. This is because during HA configuration, admin accounts are merged across
both HA and CMCs and local users are synchronized. If each CMC has a different password for the
admin account, then after HA configuration, only one of the passwords will work and it will be the
same password for both CMCs.
• Users from Active Directory are not synchronized.
• Threat Intelligence (TI) and Asset Intelligence (AI) contents are only available if the CMC has a valid
license for those products (TI and AI, respectively).
• When two CMCs are configured to work together for high availability (HA), one CMC is configured
as primary, and the other CMC is configured as secondary, or HA. Under this configuration, users
do not need to perform configuration actions (i.e. add the same alert rule, configure the same zone,
etc.) on both CMCs. Doing so can cause duplications, conflicts, and mismatches between the two
CMCs. Users must only perform configuration actions on one of the two CMCs. The synchronization
and propagation mechanisms of the HA configuration will automatically configure the settings
correctly on the other CMC.
Prerequisite: In order to configure the CMC High Availability (HA) feature, both CMCs must be
synchronized.
Configure both CMCs, using the appropriate IPs/synch tokens, as follows:
1. Configure the first CMC. From the Administration > Synchronization settings page, select Allow
to allow the remote to replicate on this CMC, which then appears on the Optional tab.

2. Connect another CMC as an HA replica, starting from the Administration > Synchronization
settings page.

3. Click the On button in the High Availability portion of the Synchronization settings to enable the
HA feature. Then complete the Host and the Sync Token fields of the endpoint to which you want
to replicate.
Note: The Sync token can be found in the Administration > Synchronization settings page of
the destination endpoint.

4. Save your changes to confirm the connection to the two CMCs.


5. From the Administration > Synchronization settings of the destination endpoint page, verify that
the Sync ID shown is the one for the current machine, then click the Allow button.

Once the CMCs have been configured, Guardian can be configured to sync with the CMC that you
deem as primary.
Guardian failover functionality
When the primary CMC fails, Guardian automatically fails over to the secondary CMC.
Testing the configuration
To verify the configuration and determine if it is working correctly, from the Administration > Health
settings, go to Replication status. View the various entities to see if they are synchronized. For
example, AuditItems are elements generally with a low creation frequency, which will be In Sync.

You can also verify a working connection by checking the Synchronization Settings page and clicking
the Check connection button.
You can also check the last CMC that the sensor has reached:

Alerts
Alerts management in the centralized console is equivalent to alerts management in a sensor (for more
information about this go to Alerts on page 72). This allows you to have all the alerts from all the
sensors in one place.
In a sensor, you can create a query (Queries on page 118) and therefore an assertion (Custom checks:
assertions on page 226) that involves all the nodes/links/etc of your overall infrastructure.
In the centralized console you have the ability to create a "Global Assertion": you can make one or
more groups of assertions that can be propagated to all the sensors. The sensors cannot edit nor
delete these assertions, only the CMC has control over them.
As mentioned previously, it is possible to configure the centralized console to forward alerts to a SIEM
without having to configure each sensor (for more information on this topic, see Data integration on
page 160).

Functionalities overview
The unified experience offered by the CMC lacks some of the features found in the sensor user
interface.

As stated above, the Nodes table in a CMC offers only the Show alerts and Navigate actions (the
same table on a sensor has also Configure node, Show requested trace and Request a trace
actions).

Figure 229: Node actions on sensor (top) and CMC (bottom)

In the Environment Links table only the Show alerts and Navigate actions are available (the same
table on a sensor has also Configure link, Show requested trace, Request a trace and Show
events actions).

Figure 230: Link actions on sensor (top) and CMC (bottom)

In the Process Variables table the Configure variable action is not allowed, but the other actions
(Variable details, Add to favourites and Navigate) are. A detailed explanation is available in Process
variables on page 111.

Figure 231: Variable actions on sensor (top) and CMC (bottom)

Configuration actions and trace request functionalities are available only in the sensor user interface.

Updating
In this section we will cover the release update and rollback operations of a Nozomi Networks Solution
architecture, comprised of a Central Management Console and one or more sensor(s).
The Nozomi Networks Solution Software Update bundle is universal (except for the Container) -- it
applies to both the Guardian and the CMC, and will work for all the physical and virtual sensors to
make for a user-friendly update experience.
Once a sensor is connected to the Central Management Console, updates are controlled from there.
The software bundle is propagated from the CMC and, once the bundle is received by the sensor, the
update can be performed automatically or manually. Configure this behavior on the Synchronization
settings page; select an option under Let the user perform the update on the sensors,
as shown below.

Figure 232: Update policy

If the CMC is configured to allow manual updates, the sensor's status bar displays a message notifying
the user as soon as the sensor receives the update bundle (see the next figure).

Figure 233: Update available notification

The update process from the Central Management Console can proceed as explained in Software
update and rollback on page 311. After the Central Management Console is updated, each sensor will
receive the new Software Update.
If an error occurs during the update procedure, a message appears next to the related sensor's
version number on the sensors page.
To roll back, first roll back the Central Management Console, and then proceed to roll back all the
sensors as explained in Software update and rollback on page 311.

Single-Sign-On (SSO) through the CMC


This topic describes how to configure CMC to use it as an identity provider for sensors connected to it.
Prerequisites
Configure the CMC as an identity provider (see SAML integration on page 48 for instructions) prior to
performing this SAML integration procedure.

Configuring the CMC


1. In the CMC, add the following configuration rule to the /data/cfg/n2os.conf.user file (replace
<ADDRESS> with the address of the CMC itself):

cmc identity_provider_url https://<ADDRESS>

Nozomi Networks recommends using HTTPS as the protocol; however, HTTP is also an option. The
address can also be a Fully-Qualified Domain Name (FQDN) or an IP address.
Examples:

cmc identity_provider_url https://192.168.1.8


cmc identity_provider_url https://cmc.example.com

2. Restart all services and reboot the machine to effect the change.
3. Obtain the identity provider metadata file from the CMC by navigating to:

https://<ADDRESS>/idp/saml/metadata

Replace <ADDRESS> with the address of the CMC.


4. Configure the sensors connected to the CMC as described in SAML integration on page 48, using
the following data:
• SAML role: Use https://nozominetworks.com/saml/group-name
• Metadata XML: Use the metadata file downloaded in the previous step (step 3 on page 337)
above.
Result: Sensors are now able to allow logins through the SSO service.
In installations with more than one CMC level:
• Each CMC should be configured as an identity provider using the procedure described above.
• Each sensor should be configured to use the connected CMC as its identity provider.
For example, if a Guardian is connected to a mid-level CMC that is connected to a top-level CMC, the
following configurations allow SSO from the Guardian:
• The top-level CMC requires integration with the external SSO service in the Users Management
screen, and also requires configuration as an identity provider.
• The mid-level CMC SAML integration requires use of the top-level metadata file as its SSO identity
provider, and also requires configuration as an identity provider.
• The Guardian’s SAML integration requires use of the mid-level CMC’s metadata file to allow the
mid-level CMC to be used as an identity provider.
A login attempt from Guardian redirects the user through the mid-level CMC and the top-level CMC to
the external SAML service.
Chapter 14: Remote Collector

Topics:
• Overview
• Deployment
• Using a Guardian with connected Remote Collectors
• Troubleshooting
• Updating
• Disabling a Remote Collector
• Install the Remote Collector Container on the Cisco Catalyst 9300

The Remote Collector is a sensor that collects and forwards traffic to a Guardian.
A Remote Collector is a low-demanding, low-throughput sensor suitable for monitoring multiple isolated
locations in highly distributed environments (e.g., windmills, solar power fields). It runs on less robust
hardware than Guardian or the CMC, and its main task is to forward traffic to a Guardian.

Overview
As mentioned, Remote Collectors are deployed in installations that require monitoring of multiple
isolated locations. Remote Collectors connect to a Guardian and act as "remote interfaces" that
broaden its capture capability.
In a sense, a Remote Collector is to a Guardian as a Guardian is to a CMC, with some key differences:
(1) A Remote Collector does not process sniffed traffic, but just forwards it to the Guardian to which
it is attached. (2) A Remote Collector has no graphical user interface. (3) A Remote Collector has
bandwidth limitations.
You enable a Guardian to receive traffic from Remote Collectors. When enabled, the Guardian
provides an additional (virtual) network interface, called "remote-collector", that aggregates the traffic
of the Remote Collectors connected to it. Currently connected Remote Collectors can be inspected
from the Guardian's Sensors tab.
Each Remote Collector forwards its sniffed traffic to a set of Guardians. Several Remote Collectors can
connect to a Guardian. Traffic over the channel is encrypted with Transport Layer Security (TLS) to
prevent third-party interception. The Remote Collector's firmware receives automatic updates from the
Guardian to which it is connected.

Deployment
The first step when setting up a Remote Collector is to deploy its Virtual Machine (VM) or its container.
The Remote Collector VM can be deployed using the steps provided in Installing the Virtual Machine
(VM) on page 14 for the Guardian edition. The main difference is that the Remote Collector version of
the image must be used in the installation.
Alternatively, a Remote Collector container can be deployed using the steps in Installing the container
on page 15, changing the container name, such as in this example:

docker build -t nozomi-rc .

Connecting to a Guardian
Remote Collectors are configured via a terminal (ssh or console).
First, configure the Remote Collector network settings following the same procedure used to set up a
Guardian, which is described in Set up phase 1 (basic configuration) on page 18.
Once you have completed that step, connect the Remote Collector to the Guardian as described below.
Assume that the Remote Collector IP address is 1.1.1.1.
1. Run this command: n2os-enable-rc
This command opens port 6000 on the firewall, which allows the Remote Collector to send the
traffic it sniffs. A new interface called "remote-collector" appears in the list of "Network Interfaces."

2. Go to Administration > Settings > Synchronization settings to view and copy the sync token.
The Remote Collector is now synchronized with Guardian and software updates can occur. The
sync token is used later in the procedure.

Remote Collector configuration


Use the terminal (ssh or console) to configure each Remote Collector.
In the following procedure, assume that the IP address of the Guardian to which the Remote Collector
is connecting is 1.2.3.4. The Remote Collector provides a text-based user interface (TUI) to help with
this setup phase.
1. Run the n2os-tui command.
2. Select the Remote Collector menu.

3. Select the Set Guardian Endpoint menu.



4. Insert the IP address of the Guardian that you wish to connect to.

5. From the previous menu, select the Set Connection Sync Token menu. Insert the token that you
previously noted down during the Guardian configuration step.

6. Optionally, a BPF filter can be added by selecting the Set BPF Filter menu from the previous menu.

7. Exit from the TUI.

Set the time zone


Edit the /data/cfg/n2os.conf.user file to set the time zone of the sensor.
1. Log into the shell console, either directly or through SSH.
2. Use vi or nano to edit the /data/cfg/n2os.conf.user file.
3. Use the IANA format to add a line in the file that states the applicable time zone. For example:
system time tz Australia/Brisbane
4. Reboot the sensor.

Enable Bandwidth Throttling


You can limit the bandwidth that a sensor's management port has at its disposal (for access and
updates) by specifying the maximum amount of allowed traffic.

1. Run the n2os-tui command.


2. Select the Remote Collector menu.

This step assumes that you have the correct privileges and the remote collector is enabled.
3. Select the Set traffic shaping bandwidth menu.

4. Insert the maximum bandwidth to use. For example, 2Mb sets a maximum of two megabits per second.

It is possible to exclude specific IP addresses from the bandwidth limitation.


5. Select the Set traffic shaping exclusions menu.

6. Insert the IP address(es) that you want to exclude.

7. Select the Set max bandwidth kb/s menu.

For Remote Collectors, you can limit the bandwidth for the traffic sniffed and forwarded to the
Guardian, without impacting other connections on the management port, by specifying the
maximum amount of allowed bandwidth.
8. Insert the max bandwidth in kb/s.

9. Exit from the TUI.

Enable Multiplexing
In addition to a primary Guardian, Remote Collectors can multiplex traffic to a set of secondary
Guardians. Each Guardian receives the same traffic information from the Remote Collector.
To enable multiplexing, configure at least one secondary Guardian. In the following procedure, assume
that the secondary Guardian's IP address is 1.2.3.4, and ABCD is the sync token that you noted
during the Guardian configuration procedure in Connecting to a Guardian on page 341.
1. Run the n2os-tui command.
2. Select the Remote Collector menu.

3. Select the Set secondary endpoint menu.



4. Enter the IP address and token of the secondary endpoint.

5. From the previous menu, select the Set secondary token menu. Insert the sync token for the
secondary Guardian.

6. Exit from the TUI.


Although every Guardian receives the same traffic information, only the primary is authorized to change
its settings. At each sync, in the event of a communication failure with the primary Guardian, the first
secondary Guardian that successfully connects acquires the configuration capabilities.

Figure 234: Secondary Guardian with no Remote Collector configuration capabilities

Set the site name and description


To better identify the Remote Collector, set a site name and description for it.
1. Run the n2os-tui command.
2. Select the Remote Collector menu.

3. Select the Set site menu.

4. Enter the site name.



5. To set a description, from the previous menu, select Set description, then enter a description in the
Description field.

6. Exit from the TUI.

Set the compression strategy


Setting a compression strategy allows you to compress traffic generated by the Remote Collector.
Possible values are:
• 'zstd': (default) sends traffic compressed with the zstd algorithm.
• 'raw': sends traffic without compression. This strategy saves CPU time if the traffic is hard to
compress.
1. Run the n2os-tui command.
2. Select the Remote Collector menu.

3. Select the Set strategy menu.

4. Insert the strategy value in the Set Strategy field.

5. Exit from the TUI.



Enable traffic forwarding


The previous steps enable the Remote Collector to communicate with the Guardian sensor, but traffic
forwarding requires the two to exchange the certificates required for encrypting the sniffed traffic being
forwarded.
The following steps explain the simplest way to configure the certificate exchange. See Configuration of
CA-based certificates on page 353 for an alternative approach.
1. Select the Sensors tab from the main menu.

The newly added Remote Collector appears in the list.


2. Click it and inspect the pane on the right. The Last seen packet property indicates whether traffic is
being forwarded.

3. Click the Refresh button.


The Refresh button appears to the right of the Not Connected label:

4. Switch on the Live notification at the top.


The button turns into a spinning wheel. After a few minutes, once the procedure is complete, the date
and time of the last seen packet is displayed:

Note that it takes a few minutes to complete the exchange and the last step is completed only after
the Remote Collector sends the first encrypted packet to the Guardian sensor. If no traffic is being
sniffed (and therefore forwarded), the procedure remains stuck in the connecting (i.e. spinning
wheel) step.

Configuration of CA-based certificates


The certificates installed by default in the Guardian and the Remote Collector are self-signed, but it is
also possible to use certificates signed by a Certificate Authority (CA), if your company policy requires it.
Normally a "certificate chain" composed of a "Root CA" and several "Intermediate CAs" is used to
sign a "leaf" certificate. If you wish to follow this approach, go through the following steps, which have
to be repeated for both the Guardian and the Remote Collector sensors.
1. Put a "leaf" certificate/key pair under /data/ssl/https_nozomi.crt and /data/ssl/
https_nozomi.key.
This step installs your certificate in the sensor.
2. Put the "certificate chain" under /data/ssl/trusted_nozomi.crt.
This step installs your certificate chain in the sensor. Any certificate signed with the chain is
accepted as valid.
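
As a quick sanity check (a minimal sketch, assuming the files are in the paths configured above), you can
verify on each sensor that the installed leaf certificate is actually signed by the installed chain:

openssl verify -CAfile /data/ssl/trusted_nozomi.crt /data/ssl/https_nozomi.crt

If the command prints OK, the chain and the leaf certificate are consistent.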

Final configuration
After all of the sensors have been configured, it is necessary to reboot them for the configuration to
take effect. Alternatively, it is sufficient to perform the following commands in a CLI:
1. Enter service n2osrc stop on the Guardian.
2. Enter service n2osrs stop on each Remote Collector.
This is the final step in the sensor configuration.

Using a Guardian with connected Remote Collectors


This topic describes how Guardian monitors traffic with a set of connected Remote Collectors.
Click the Sensors tab to inspect the health of the connected Remote Collectors. An information pane
appears on the right with detailed information, including the health status of the Remote Collector(s),
and the timestamp of the last received payload traffic.
The list of Remote Collector network interfaces is shown at the bottom of the pane. For each network
interface, there is a Configure button that allows the user to upload/enable/disable a denylist and set/
unset a Berkeley Packet Filter (BPF) in the same way as for the Guardian network interfaces.

Guardian system health


Guardian's system health is communicated via qualitative strings. The possible values for system
health are: unreachable, poor, average and good.
System health calculation
System health is a weighted average of RAM, CPU, and disk usage.
The unreachable status indicates a sensor that has not reached out to the CMC for a long time and
is considered stale. The other health levels (poor, average and good) are determined based on
resource usage.
If all of the values of RAM, disk, or CPU usage are less than 80%, the status is Good.
If any of the values of RAM, disk, or CPU usage are greater than 80%, CMC calculates a weighted
average of the three values and subtracts it from 100.
Example:
• Average: (RAM 38% + Disk 66% + CPU 99%) / 3 ≈ 68%
• 100% - 68% = 32%
If the result is less than 30, the status is Poor.
If the result is greater than 30, but less than 80, the status is Average.
If the result is greater than 80, the status is Good.
Packet origins
The origin of the packets is tracked internally by Guardian and is displayed in several locations, such
as in the Nodes tab of the Network page, in the Assets page, and in the Alerts page.

Troubleshooting
This section lists the most useful troubleshooting tips for the Remote Collector.
1. If a Remote Collector is not appearing at all in the Sensors tab:
• Ensure that firewall(s) between the Guardian and the Remote Collector allow traffic on TCP port 443
(HTTPS), with the Remote Collector as the source and the Guardian as the target
• Check that the tokens are correctly configured both in the Guardian and the Remote Collector
• Check the /data/log/n2os/n2osjobs.log file of the Remote Collector for connection
errors.
2. If a Remote Collector appears in the Sensors tab, but it sends no traffic (last seen packet is empty
or does not update its value):
• Ensure that firewall(s) between the Guardian and the Remote Collector allow traffic on
TCP port 6000, with the Remote Collector as the source and the Guardian as the target
• Check that the certificates have been correctly exchanged between the Guardian and the
Remote Collector, i.e., that the certificate at /data/ssl/https_nozomi.crt of a sensor
appears listed in /data/ssl/trusted_nozomi.crt of the other sensor, or that the certificate
chain has been trusted
• Check the /data/log/n2os/n2os_rs.log file of the Remote Collector for connection
errors. In particular, errors related to certificates are logged with the error code coming directly
from the OpenSSL library. Once you have identified the code, you can check the corresponding
explanation at the following page: https://www.openssl.org/docs/man1.1.0/man3/
X509_STORE_CTX_get_error.html
• Make sure to restart the n2osrc and n2osrs services every time the configuration or the
certificates are changed
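
For example, a quick way to look for recent connection or certificate errors in the Remote Collector log
(a minimal sketch; the filter string and line count are arbitrary):

grep -i error /data/log/n2os/n2os_rs.log | tail -n 20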

Updating
In this section we will cover the release update and rollback operations of a Remote Collector.
Remote Collectors receive automatic updates from the Guardian they are attached to: just as a
Guardian updates from the CMC, the Remote Collector updates to the Guardian's version if its current
firmware version is older than the Guardian's.
Note that the Remote Collector Container does not update automatically.
A Remote Collector has no graphical interface. The only other method for changing the version of a
Remote Collector is to use the manual procedure described at Software update and rollback on page
311.

Disabling a Remote Collector


Disabling unused Remote Collectors hardens your environment.
1. Log into the Guardian UI that receives data from the Remote Collector, locate the Remote Collector
on the Sensors tab, and remove it by clicking the Delete button.

2. If you remove all the Remote Collectors in your environment, you can prevent any Remote Collector
from sending data to Guardian. This hardening measure can make your environment more secure.
To do so, log into the shell of the Guardian that receives data from the Remote Collector, go to
privileged mode, and run:

n2os-disable-rc

Install the Remote Collector Container on the Cisco Catalyst 9300


The Remote Collector Container can be installed on the Catalyst 9300 switch. Extensive knowledge of
IOS, IOx, and the ioxclient program is a prerequisite to performing the tasks in this manual. Installation
and configuration of the Cisco ioxclient is not covered in this manual, but can be found in the official
Cisco documentation. Knowledge of Docker is required. Docker information is covered in the official
Docker manual.
IOS and IOx minimum supported versions:
• Cisco IOS on C9300-48T: 17.3.2a
• Cisco IOx Local Manager: 1.11.0.4
Minimum operational requirements:
• the Catalyst 9300 must be enabled to host a container; a second storage device may be required
• the ioxclient program
• SSH access to the Catalyst 9300 is needed
• Privileged access to the Catalyst 9300 with the "enable" password is needed
• The Remote Collector Container version must be present on your registry or in the local Docker
cache
Important notes:
• The supported configuration provides for the exclusive use of the Catalyst 9300 container
subsystem by the Remote Collector Container. No other containers can run at the same time. This
configuration will use all of the CPU and RAM available for the container's subsystem.
• All of the commands and configurations proposed in this documentation regarding the Catalyst
9300 are only examples. All of the commands must be verified by a qualified network administrator
and can be modified according to your actual running configuration. Incorrect configurations and
commands on the Catalyst 9300 can make it unusable and can cause network disruptions.
• The 192.0.2.0/24 network is for documentation purposes only, and should be changed.
Legend of used parameters

Parameter            Used value          Description
appid                NozomiNetworks_RC   The Remote Collector Container name
guest-ipaddress      192.0.2.10          The Remote Collector Container IP address
app-default-gateway  192.0.2.1           The default gateway for the provided network
$VERSION             n.a.                To be filled with the Remote Collector version

Catalyst 9300 setup


1. Go to privileged mode in the Catalyst 9300:

enable

2. Set up the Catalyst 9300 AppGigabitEthernet interface in trunk mode:

conf t

interface AppGigabitEthernet 1/0/1


switchport mode trunk
exit

3. Configure the IOx to host the container:

app-hosting appid NozomiNetworks_RC
 app-vnic management guest-interface 0
  guest-ipaddress 192.0.2.10 netmask 255.255.255.0
 app-default-gateway 192.0.2.1 guest-interface 0
 app-vnic AppGigabitEthernet trunk
  guest-interface 1
   mirroring
end
Prepare the Remote Collector Container as an IOx package
1. Prepare the Remote Collector Container as explained in the chapter "Installing the Container" of this
manual. Upload it to your registry or use a cached version.
2. Write a package.yaml file as in the example below:

descriptor-schema-version: "2.10"
info:
  name: NozomiNetworks_RC
  version: latest
app:
  cpuarch: x86_64
  resources:
    persistent_data_target: "/data"
    network:
      - interface-name: eth0
      - interface-name: eth1
        mirroring: true
    profile: custom
    cpu: 7400
    memory: 2048
    disk: 4000
  startup:
    rootfs: rootfs.tar
    target: ["/usr/local/sbin/startup-container.sh"]
    user: admin
    workdir: /data
  type: docker

The above configuration is used by ioxclient to build the Remote Collector Container for IOx. It
enables mirrored ports on the Cat9300 IOx backplane onto the container's eth1 port and sets /data
as persistent storage on the Catalyst 9300. Other input ports are not needed for the Remote
Collector.
3. Build the Remote Collector Container IOx package. In the same directory as package.yaml, run:

ioxclient docker package --skip-envelope-pkg your-container-registry.com/NozomiNetworks_RC:"$VERSION" .

This creates the package.tar. You must upload this file directly onto the IOx as covered by the Cisco
IOx documentation. Important: The app must be stopped and activated/started after every Catalyst
9300 configuration change or redeploy.
4. Import the previously generated package in the Catalyst 9300 IOx subsystem as described in the
Cisco IOx documentation. On the Catalyst 9300 ssh console, activate and start the application with
the commands:

app-hosting stop appid NozomiNetworks_RC


app-hosting activate appid NozomiNetworks_RC
app-hosting start appid NozomiNetworks_RC

5. Access to the container is possible only through the Catalyst 9300 console, by running:

app-hosting connect appid NozomiNetworks_RC session

6. Proceed to the Remote Collector configuration.


Chapter 15: Configuration

Topics:
• Features Control Panel
• Editing sensor configuration
• Basic configuration rules
• Configuring the Garbage Collector
• Configuring alerts
• Configuring Incidents
• Configuring nodes
• Configuring assets
• Configuring links
• Configuring variables
• Configuring protocols
• Configuring va
• Customizing node identifier generation
• Configuring decryption
• Configuring trace
• Configuring continuous trace
• Configuring Time Machine
• Configuring retention
• Configuring Bandwidth Throttling
• Configuring synchronization
• Configuring slow updates
• Configuring session hijacking protection
• Configuring Passwords
• Configuring sandbox
• Additional Commands

This section describes the configuration of Nozomi Networks Solution components in detail.
Some features can be quickly configured using the Features Control Panel (see Features Control
Panel on page 360).
You can also issue configuration rules via shell by using the CLI. For each configuration rule, we will
cover all the required details.

Features Control Panel


The Features Control Panel gives an overview of the current status of the system features configuration
and allows you to fine-tune specific values.
In the General tab, you can enable general features, such as whether to generate assets from IPv6
nodes.

The Retention tab allows you to select a specific number (aka Retention level) for historical data
persistence. In some cases, you can either completely disable a feature's retention or enable the
advanced options that provide more specific settings.

Expiration: allows you to select a specific number of days for historical data persistence. Data can also
be persisted forever.
Space retention level: allows you to select a specific space size for historical data persistence.

Editing sensor configuration


In CMC and Guardian sensors, use the CLI to configure the Nozomi Networks solution.
You can access the CLI in two ways:
• use the cli command in a text-console when connected to the sensor, either directly or through
SSH
• in the web GUI, select Administration > CLI (see Command Line Interface (CLI) on page 145)
Examples:
A command issued via the cli command in a shell

A command issued (through pipe) to the cli command in a shell

There are cases, for example on Remote Collector sensors, where the cli command does not work; in
those cases, or to fine-tune user-defined configuration or mass-import rules from other systems, it is
necessary to manually edit the /data/cfg/n2os.conf.user file. In this section we will see how to
change and apply a configuration rule.
Log into the shell console, either directly or through SSH, and perform the following steps.
• Use vi or nano to edit /data/cfg/n2os.conf.user
• Edit a configuration rule with the text editor; see the next sections for some examples.
• Write configuration changes to disk and exit the text editor.
The next sections cover all the necessary details about the supported configuration rules.

Basic configuration rules

Set traffic filter

Product Guardian
Syntax conf.user configure bpf_filter <bpf_expression>
Description Set the BPF filter to apply on incoming traffic to limit the type and amount of
data processed by the sensor.

Parameters • bpf_expression: the Berkeley Packet Filter expression to apply on


incoming traffic. A BPF syntax reference can be accessed on the sensor
at https://<sensor_ip>/#/bpf_guide

Where CLI

To apply In a shell console execute: service n2osids stop
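
For example (the subnet is purely illustrative), to restrict processing to traffic involving the
10.10.0.0/16 network:

conf.user configure bpf_filter net 10.10.0.0/16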

Enable or disable management filters

Product Guardian
Syntax conf.user configure mgmt_filters [on|off]
Description With this rule you can switch off the filters on packets that come from/to
N2OS itself. Choose 'off' if you want to disable the management filters
(default: on).

Where CLI

To apply In a shell console execute: service n2osids stop

Enable or disable TCP/UDP deduplication

Product Guardian
Syntax conf.user configure probe deduplication enabled [true|
false]
Description It can enable or disable the deduplication analysis that N2OS performs on TCP/
UDP packets. It can be either true, to enable the feature, or false, to disable
it. (default: true)

Where CLI

To apply In a shell console execute: service n2osids stop

Set TCP deduplication time delta

Product Guardian
Syntax conf.user configure probe deduplication tcp_max_delta
<delta>
Description Set the desired maximum time delta, in milliseconds, to consider a
duplicated TCP packet.

Parameters • delta: The value of the maximum time delta (default: 1)



Where CLI

To apply In a shell console execute: service n2osids stop

Set UDP deduplication time delta

Product Guardian
Syntax conf.user configure probe deduplication udp_max_delta
<delta>
Description Set the desired maximum time delta, in milliseconds, to consider a
duplicated UDP packet.

Parameters • delta: The value of the maximum time delta (default: 1)

Where CLI

To apply In a shell console execute: service n2osids stop
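
For example (the 5 ms value is only illustrative), to widen the deduplication window for both TCP and
UDP packets:

conf.user configure probe deduplication tcp_max_delta 5
conf.user configure probe deduplication udp_max_delta 5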

Rename fallback zones

Product Guardian
Syntax conf.user configure vi zones default [private|public]
<zone_name>
Description Set the private or public fallback zone name, for nodes not matching any
zone. Details on zones feature can be viewed in Network graph on page 98.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 180

Parameters • zone_name: the name of the private or public fallback zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Add Zone

Product Guardian
Syntax ids configure vi zones create <subnet>[,<subnet>]
<zone_name>
Description Add a new zone containing all the nodes in one or more specified
subnetworks. More subnetworks can be concatenated using commas. The
subnetworks can be specified using the CIDR notation (<ip>/<mask>) or
by indicating the end IPs of a range (both ends are included: <low_ip>-
<high_ip>).
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 180

Parameters • subnet: The subnetwork or subnetworks assigned to the zone; both


IPv4 and IPv6 are supported
• zone_name: The name of the zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
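
For example (the subnets and zone name are hypothetical), to create a zone named Plant_A spanning
a CIDR subnetwork and an IP range:

ids configure vi zones create 10.1.0.0/24,10.1.1.10-10.1.1.50 Plant_A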

Assign a level to a zone

Product Guardian
Syntax ids configure vi zones setlevel <level> <zone_name>
Description Assigns the specified level to a zone. All nodes pertaining to the given zone
will be assigned the level.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 180

Parameters • level: The level assigned to the zone


• zone_name: The name of the zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set the nodes ownership for a zone

Product Guardian
Syntax ids configure vi zones setis_public [true|false]
<zone_name>
Description Sets the node ownership for a zone. It can be either true, for
public ownership, or false, for private ownership. All nodes belonging to the
given zone are overwritten and inherit the value.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 180

Parameters • zone_name: The name of the zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Assign a security profile to a zone

Product Guardian
Syntax ids configure vi zones setsecprofile [low|medium|high|
paranoid] <zone_name>
Description Assigns the specified security profile to a zone. The visibility of the alerts
generated within the zone will follow the configured security profile.
Refer to Security Profile.

Parameters • zone_name: The name of the zone



Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Add custom protocol

Product Guardian
Syntax conf.user configure probe custom-protocol <name> [tcp|
udp] <port>
Description Add a new protocol specifying a port and a transport layer. Names shall
always be unique, so when defining a custom protocol both for udp and tcp,
use two different names.

Parameters • name: The name of the protocol, it will be displayed through the user
interface; DO NOT use a protocol name already used by SG. E.g. one
can use MySNMP, or Myhttp
• port: The transport layer port used to identify the custom protocol

Where CLI

To apply It is applied automatically
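
For example (the name and port are hypothetical), to label TCP traffic on port 5450 as a custom
protocol:

conf.user configure probe custom-protocol MyHistorian tcp 5450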

Disabling a protocol

Product Guardian
Syntax conf.user configure probe protocol <name> enable false
Description Completely disables a protocol. This can be useful to fine tune the sensor
for specific needs.

Parameters • name: The name of the protocol to disable

Where CLI

To apply It is applied automatically
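
For example (the protocol name is only illustrative), to disable the telnet protocol dissector:

conf.user configure probe protocol telnet enable false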

Set IP grouping

Product Guardian
Syntax conf.user configure probe ipgroup <ip>/<mask>
Description This command permits grouping multiple IP addresses into one single
node. This command is particularly useful when a large network of clients
accesses the SCADA/ICS system. To provide a clearer view and get an
effective learning phase, you can map all clients to a unique node simply by
specifying the netmasks (one line for each netmask). The Trace on page 59
will still show the raw IPs in the provided trace files.
Warning: This command merges all nodes information into one in an
irreversible way, and the information about original nodes is not kept.

Parameters • ip/mask: The subnetwork identifier used to group the IP addresses

Where CLI

To apply In a shell console, execute both: service n2osids stop AND service
n2ostrace stop
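
For example (the netmask is purely illustrative), to group all clients in the 172.16.0.0/12 range into a
single node:

conf.user configure probe ipgroup 172.16.0.0/12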

Set IP grouping for Public Nodes

Product Guardian
Syntax conf.user configure probe ipgroup public_ips <ip>
Description This command permits grouping all public IP addresses into one single node
(for instance, use 0.0.0.0 as the 'ip' parameter). This command is particularly
useful when the monitored network includes nodes that have routing to the
Internet. The Trace on page 59 will still show the raw IPs in the provided
trace files.
Warning: This command merges all nodes information into one in an
irreversible way, and the information about original nodes is not kept.

Parameters • ip: The ip to map all Public Nodes to

Where CLI

To apply In a shell console, execute both: service n2osids stop AND service
n2ostrace stop

Skip Public Nodes Grouping for a subnet

Product Guardian
Syntax conf.user configure probe ipgroup public_ips_skip <ip>/
<mask>
Description This is useful when the monitored network has a public addressing that has
to be monitored (i.e. public addressing used as private or public addresses
that are in security denylists).

Parameters • ip/mask: The subnetwork identifier to skip

Where CLI

To apply In a shell console, execute both: service n2osids stop AND service
n2ostrace stop

Set special Private Nodes allowlist

Product Guardian
Syntax conf.user configure vi private_ips <ip>/<mask>
Description This rule will set the is_public property of nodes matching the provided mask
to false. This is useful when the monitored network has a public addressing
used as private (e.g. violation of RFC 1918).

Parameters • ip/mask: The subnetwork identifier to treat as private; both IPv4 and
IPv6 are supported

Where CLI

To apply In a shell console execute: service n2osids stop
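
For example (the range is purely illustrative), to treat a public range that is used internally as private:

conf.user configure vi private_ips 11.0.0.0/8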



Set GUI logout timeout

Products CMC, Guardian


Syntax conf.user configure users max_idle_minutes
<timeout_in_minutes>
Description Change the default inactivity timeout of the GUI. This timeout is used to
decide when to log out the current session when the user is not active.

Parameters • timeout_in_minutes: amount of minutes to wait before logging out.


The default is 10 minutes.

Where CLI

To apply It is applied automatically
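
For example (the value is only illustrative), to log out inactive sessions after 30 minutes:

conf.user configure users max_idle_minutes 30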

Enable Syslog capture feature

Product Guardian
Syntax conf.user configure probe protocol syslog capture_logs
[true|false]
Description With this configuration rule you can enable (option true) the passive
capture of syslog events. It is useful when you want to forward them to a
SIEM; for further details, see the Syslog forwarder integration section.

Where CLI

To apply It is applied automatically

Enable Guardian HA

Product Guardian
Syntax conf.user configure guardian replica-of
<other_guardian_id>
Description With this configuration rule you can enable the Guardian HA mode for two
Guardians that sniff the same traffic and are connected to the same CMC.
During normal operations, only the primary Guardian syncs with the CMC;
if it stops synchronizing, the secondary Guardian will start synchronizing the
records from the last primary Guardian update. This rule should only be
configured on the secondary sensor.

Parameters • other_guardian_id: The id of the other Guardian, it can be found


on the CMC with the query appliances | where host ==
<appliance_hostname> | select id

Where CLI

To apply In a shell console execute: service n2osids stop
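
For example, assuming the CMC query above returns the (hypothetical) id 12 for the primary Guardian,
configure the secondary Guardian with:

conf.user configure guardian replica-of 12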

Disabling Vulnerability Assessment for some nodes

Product Guardian
Syntax conf.user configure va_notification matching [id|label|
zone|type|vendor]=<value> discard

Description With this configuration rule you can disable Vulnerability Assessment for
nodes matching the specified rules. The effect of this configuration rule is to
discard the matching of CVE identifiers. The types are as follows.
• id: the id of a node, it can be an IP address, a netmask in the CIDR
format or a MAC address.
• label: the label of a node.
• zone: the zone in which a node is located.
• type: the type of a node.
• vendor: the vendor of a node.

Parameters • value: If a simple string is specified the match will be performed with an
"equal to" case-sensitive criterion. The matching supports two operators:
• ^: starts with
• '[': contains
These operators must be specified right after the = symbol and their match
is case-insensitive.
Examples:
• va_notification matching id=192.168.1.123 discard
• va_notification matching id=192.168.1.0/24 discard
• va_notification matching label=^abc discard

Where CLI

To apply In a shell console execute: service n2osva stop

Enabling IPv6 Assets

Product Guardian
Syntax conf.user configure vi ipv6_assets [enabled|disabled]
Description With this configuration rule you can enable asset generation also when
nodes are IPv6.

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Change the maximum percentage of Variables in the Network Elements pool

Product Guardian
Syntax conf.user configure vi machine_limits_variables_quota
<n>
Description With this configuration rule you can change the maximum percentage of
Variables in the Network Elements pool, the default is 0.6 meaning that no
more than 60% of Network Elements can be Variables.

Parameters • n: the percentage of variables expressed as a number from 0.0 to 1.0,


e.g. vi machine_limits_variables_quota 0.7

Where CLI

To apply In a shell console execute: service n2osids stop



Tuning Backend Web Server workers

Products CMC, Guardian


Syntax conf.user configure http_workers <n>
Description With this configuration rule you can change the number of Ruby Web
Server workers. With a higher worker count the CMC/Guardian can handle
more Web UI requests concurrently, at the expense of an increased memory
footprint.

Parameters • n: The new number of workers

Where CLI

To apply In a shell console execute: service webserver stop

Tuning Backend Web Server threads

Products CMC, Guardian


Syntax conf.user configure http_threads <n>
Description With this configuration rule you can change the number of threads per Ruby
Web Server worker. Increasing the thread count brings better concurrency
behaviour to the CMC/Guardian Web UI, without increasing the memory
footprint as much.

Parameters • n: The new number of threads

Where CLI

To apply In a shell console execute: service webserver stop
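
For example (the values are only illustrative), to run four workers with eight threads each and then
restart the web server:

conf.user configure http_workers 4
conf.user configure http_threads 8
service webserver stop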

Configure how Threat Intelligence contents are handled

Product Guardian
Syntax conf.user configure vi contents <json_value>
Description This command allows Threat Intelligence contents to be completely
disabled, or selectively loaded. The JSON object can have the following
attributes:
• load_contents - this can be true/false to enable/disable the loading of
contents;
• loaded_content_types - this is a JSON array of contents to be
loaded.
Contents available are:
• stix_indicators
As an example, the following command will completely disable contents
loading:
conf.user configure vi contents { "load_contents":
false }
As a further example, the following command will allow only stix_indicators
rules to be loaded:
conf.user configure vi contents
{ "loaded_content_types": [ "stix_indicators" ] }

Parameters • json_value: A JSON object to configure the contents handling

Where CLI

To apply In a shell console execute: service n2osids stop

Configure which files detected on the networks are sent to sandbox

Product Guardian
Syntax conf.user configure vi sandbox_extraction <json_value>
Description The JSON object can have the following attributes:
• disabled_protocols - A JSON array of protocols for which file
detection is disabled
• enabled_protocols - A JSON array of protocols for which file
detection is enabled
• disabled_file_extensions - A JSON array of file extensions for
which file detection is disabled
• enabled_file_extensions - A JSON array of file extensions for
which file detection is enabled

Parameters • json_value: A JSON object to configure the handling of the detected
files.

Where CLI

To apply In a shell console execute: service n2osids stop
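
For example (the extensions are only illustrative), to restrict file detection to a few extensions and then
apply the change:

conf.user configure vi sandbox_extraction { "enabled_file_extensions": [ "exe", "dll", "pdf" ] }
service n2osids stop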



Configuring the Garbage Collector


This section describes how to configure the Environment Garbage Collector (GC). The Garbage
Collector lets the system discard nodes, assets, and links that are no longer useful, thus saving
system resources.

Clean up old ghost nodes

Product Guardian
Syntax conf.user configure vi gc old_ghost_nodes <seconds>
Description Set the threshold after which idle nodes that are also not confirmed and not
learned are discarded by the garbage collector.
NOTE: in Adaptive Learning, the GC works also if nodes are learned, since
they all are.

Parameters • seconds: Number of seconds after which cleanup occurs (the default is
3600, the equivalent of one hour).

Where CLI

To apply It is applied automatically
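
For example (the value is only illustrative), to discard ghost nodes after two hours of inactivity:

conf.user configure vi gc old_ghost_nodes 7200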

Clean up old public nodes

Product Guardian
Syntax conf.user configure vi gc old_public_nodes <seconds>
Description Determines how long to keep public nodes that are inactive. Expressed in
seconds.

Parameters • seconds: Number of seconds after which cleanup occurs (default is


259200, the equivalent of three days).

Where CLI

To apply It is applied automatically

Clean up old inactive nodes

Product Guardian
Syntax conf.user configure vi gc old_inactive_nodes <seconds>
Description Determines how long to keep nodes that are inactive. Expressed in
seconds. Inactivity is calculated as the difference between the current time
and the last activity time.
Note: When a node that has been deleted by the garbage collector appears
again in the network, it will be considered new; as a consequence, depending
on the learning mode, an alert could be raised. For a better result, use
Adaptive Learning and choose a reasonably long interval for this setting.

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically
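
For example (the value is only illustrative), to remove nodes that have been inactive for 30 days:

conf.user configure vi gc old_inactive_nodes 2592000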

Clean up old inactive links

Product Guardian
Syntax conf.user configure vi gc old_inactive_links <seconds>
Description Determines how long to keep links that are inactive. Expressed in seconds.
Inactivity is calculated as the difference between the current time and the
last activity time.
Note: When a link that has been deleted by the garbage collector appears
again in the network, it will be considered new; as a consequence, depending
on the learning mode, an alert could be raised. For a better result, use
Adaptive Learning and choose a reasonably long interval for this setting.

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically

Clean up old ghost links

Product Guardian
Syntax conf.user configure vi gc old_ghost_links <seconds>
Description Determines how long to wait before removing inactive ghost links. A ghost
link is one that has not shown any application payload since its creation.
This could be a connection attempt whose endpoint is not responding on the
specified port; or it could be a link with a successful handshake but without
application data transmitted (in this case, transferred data would still be
greater than 0).

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically

Clean up old inactive variables

Product Guardian
Syntax conf.user configure vi gc old_inactive_variables
<seconds>
Description Determines how long to keep variables that are inactive. Expressed in
seconds. Inactivity is calculated as the difference between the current time
and the last activity time.
Note: When a variable that has been deleted by the garbage collector
appears again in the network, it will be considered new; as a consequence,
depending on the learning mode, an alert could be raised. For a better result,
use Adaptive Learning and choose a reasonably long interval for this setting.

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically

Clean up old sessions

Product Guardian
Syntax conf.user configure vi gc sessions_may_expire_after
<seconds>
Description Determines how long to wait before a session is considered stale and its
resources may be collected. Expressed in seconds.

Parameters • seconds: Number of seconds after which clean up may occur. By


default, set to 100 seconds.

Where CLI

To apply It is applied automatically



Configuring alerts

Configure maximum number of victims

Product Guardian
Syntax conf.user configure alerts max_victims <num>
Description Define the maximum number of victims that each alert can contain. Victims
exceeding the given value are not stored. Default value is 1000.

Parameters • num: Maximum number of victims stored for each alert

Where CLI

To apply It is applied automatically

Configure maximum number of attackers

Product Guardian
Syntax conf.user configure alerts max_attackers <num>
Description Define the maximum number of attackers that each alert can contain.
Attackers exceeding the given value are not stored. Default value is 1000.

Parameters • num: Maximum number of attackers stored for each alert

Where CLI

To apply It is applied automatically
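
For example (the values are only illustrative), to limit both lists to 500 entries per alert:

conf.user configure alerts max_victims 500
conf.user configure alerts max_attackers 500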

Show/hide credentials

Product Guardian
Syntax conf.user configure alerts hide_username_on_alerts
[true|false]
Syntax conf.user configure alerts hide_password_on_alerts
[true|false]
Description These flags determine whether usernames or passwords should be presented
in the alert. By default, the credentials are visible. Affected alert types:
SIGN:MULTIPLE-ACCESS-DENIED, SIGN:MULTIPLE-UNSUCCESSFUL-
LOGINS, SIGN:PASSWORD:WEAK.

Where CLI

To apply It is applied automatically
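
For example, to hide both usernames and passwords in the affected alert types:

conf.user configure alerts hide_username_on_alerts true
conf.user configure alerts hide_password_on_alerts true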

Configure maximum length of description

Product Guardian
Syntax conf.user configure alerts max_description_length
<nchars>

Description Define the maximum number of characters that the description of each
incident can contain. When an incident is appending an alert description, the
append is performed only if the incident description length is smaller than
the limit.

Parameters • nchars: Maximum number of characters allowed in the description of


each incident

Where CLI

To apply It is applied automatically

Configure MITRE ATT&CK mapping rules

Product Guardian
Syntax for MITRE ATT&CK for ICS mappings:
conf.user configure alerts mitre_attack ics_mapping <path>
Syntax for MITRE ATT&CK Enterprise mappings:
conf.user configure alerts mitre_attack enterprise_mapping <path>
Description Customize the rules used to assign MITRE ATT&CK techniques to alerts
by means of an external file. The file has the following format. Each line
defines a rule; the rule specifies an alert type ID followed by a semicolon
and a comma-separated list of MITRE ATT&CK technique IDs. For instance,
the line SIGN:PROGRAM:TRANSFER;T0843,T0853 instructs Guardian that
alerts of type SIGN:PROGRAM:TRANSFER must be assigned both the T0843
and T0853 MITRE ATT&CK techniques.

Parameters • path: The path to the file containing the rules

Where CLI

To apply It is applied automatically
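
For example (the file path is hypothetical), a custom ICS mapping file containing the single rule from
the description above could be created and applied with:

echo "SIGN:PROGRAM:TRANSFER;T0843,T0853" > /data/custom_ics_mapping.txt
conf.user configure alerts mitre_attack ics_mapping /data/custom_ics_mapping.txt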

Enable storing of alerts not visible under the current security profiles

Product Guardian
Syntax conf.user configure alerts save_invisible_alerts [true|
false]
Description Alerts are not stored in the database if they are not visible under the
current security profile and they are not part of an incident. This command
can change this behaviour and allow the above alerts to be stored.

Where CLI

To apply In a shell console execute: service n2osalert stop

Configure red assertions behavior

Product Guardian

Syntax conf.user configure assertion_element_monitoring


[enabled|disabled]
Description By default, an assertion is boolean; it is either red or green, and upon
turning red the assertion may send alerts. This configuration allows the
assertions to actively monitor lists of elements (such as nodes, labels, etc.),
and to send new alerts for any additional elements that break the assertion,
or when an alert is closed while the triggering asserted element is still
present.
Disabled by default.

Where CLI

To apply It is applied automatically

Configure how Threat Intelligence contents are handled

Product Guardian
Syntax conf.user configure alerts contents <json_value>
Description This command allows Threat Intelligence contents to be completely
disabled, or selectively loaded. The JSON object can have the following
attributes:
• load_contents - this can be true/false to enable/disable the loading of
contents;
• loaded_content_types - this is a JSON array of contents to be
loaded.
Contents available are:
• stix_indicators
As an example, the following command will completely disable contents
loading:
conf.user configure alerts contents { "load_contents":
false }
As a further example, the following command will allow only stix_indicators
rules to be loaded:
conf.user configure alerts contents
{ "loaded_content_types": [ "stix_indicators" ] }

Parameters • json_value: A JSON object to configure whether Threat Intelligence


contents are loaded

Where CLI

To apply In a shell console execute: service n2osalert stop



SIGN:MULTIPLE-ACCESS-DENIED
In this section we will configure the Multiple Access Denied alert.
The detection is enabled by default and works according to the following parameters.

Set interval and threshold - 1

Product Guardian
Syntax conf.user configure vi multiple_events protocol
<protocol> <interval> <threshold>
Description Set the detection configuration for a specific protocol.

Parameters • protocol: Name of the protocol to configure. Can be 'all' to apply the
configuration globally.
• interval: maximum time in seconds for the event to happen in order to
trigger the detection. Default: 30[s] for OT devices, 15[s] for the rest.
• threshold: number of times for the event to happen in order to trigger
the detection. Default: 20 for OT devices, 40 for the rest.

Where CLI

To apply It is applied automatically

For example, we can configure the detection of a multiple access denied alert for the SMB protocol with
an interval of 10 seconds and threshold of 35 attempts with the following command:

conf.user configure vi multiple_events protocol smb 10 35



SIGN:MULTIPLE-UNSUCCESSFUL-LOGINS
In this section we will configure the Multiple Unsuccessful Logins alert.
The detection is enabled by default and works according to the following parameters.

Set interval and threshold - 2

Product Guardian
Syntax conf.user configure vi multiple_events protocol
<protocol> <interval> <threshold>
Description Set the detection configuration for a specific protocol.

Parameters • protocol: Name of the protocol to configure. Can be 'all' to apply the
configuration globally.
• interval: maximum time in seconds for the event to happen in order to
trigger the detection. Default: 30[s] for OT devices, 15[s] for the rest.
• threshold: number of times for the event to happen in order to trigger
the detection. Default: 20 for OT devices, 40 for the rest.

Where CLI

To apply It is applied automatically

For example, we can configure the detection of a multiple unsuccessful login alert for the SMB protocol
with an interval of 10 seconds and threshold of 35 attempts with the following command:

conf.user configure vi multiple_events protocol smb 10 35



SIGN:OUTBOUND-CONNECTIONS
In this section we will configure the outbound connections limit.
Guardian can detect a sudden increase of outbound connections from a specific learned source node.
An alert is raised by default when 100 new outbound connections are observed over a 60-second
interval.
By default, the detection is only performed when the node is being protected. Optionally, the detection
can also be performed when the node is being learned.
Optionally, we can prevent the system from creating additional destination nodes in order to preserve
resources. Such a node creation limit is disabled by default.
Some of the configuration parameters listed below can be applied either globally or to individual nodes.
The configuration of an individual node has higher priority and overrides the global configuration.

Perform detection when source node is being learned

Product Guardian
Syntax conf.user configure vi outbound_connections_limit
learning [true|false]
Description Specify whether the detection has to be performed also when the source
node is being learned or only when it is being protected.
Select true for detection also when the source node is learned, or false
for detection only when the source node is being protected. By default
false.

Where CLI

To apply It is applied automatically

Enable/disable nodes creation limit

Product Guardian
Syntax global conf.user configure vi outbound_connections_limit
enabled [true|false]
Syntax conf.user configure vi node <ip>
individual node outbound_connections_limit enabled [true|false]
Description Enable (option true) or disable (option false) the destination nodes
creation limit.

Parameters • ip: The IP of the source node

Where CLI

To apply It is applied automatically

Set connections count

Product Guardian
Syntax global conf.user configure vi outbound_connections_limit
connections <count>
Syntax conf.user configure vi node <ip>
individual node outbound_connections_limit connections <count>
Description Set the outbound connections limit, in number of connections.

Parameters • ip: The IP of the source node


• count: The amount of outbound connections from a node to be
observed in order to trigger the detection (default: 100)

Where CLI

To apply It is applied automatically

Set observation interval

Product Guardian
Syntax global conf.user configure vi outbound_connections_limit
interval <value>
Syntax conf.user configure vi node <ip>
individual node outbound_connections_limit interval <value>
Description Set the outbound connections observation interval, in seconds.

Parameters • ip: The IP of the source node


• value: The time interval during which the new outbound connections are
observed.

Where CLI

To apply It is applied automatically

For example, we can configure the outbound connections limit to prevent a source node from creating
additional destination nodes when 70 outbound connections are observed during a 30-second interval
with the following configuration commands:

conf.user configure vi outbound_connections_limit enabled true


conf.user configure vi outbound_connections_limit connections 70
conf.user configure vi outbound_connections_limit interval 30

SIGN:TCP-SYN-FLOOD
In this section we will configure the TCP SYN flood detection.
A node is considered to be under a TCP SYN flood attack when:
• The number of incoming connection attempts during the observation interval is greater than the
detection counter
• And, during the observation interval, the ratio between established connections and total number of
connection attempts falls below the trigger threshold
A TCP SYN flood attack is considered terminated when:
• The number of incoming connection attempts during the observation interval returns below the
detection counter
• Or, during the observation interval, the ratio between established connections and total number of
connection attempts returns above the exit threshold
The detection of flooding is not guarded by the duplication detection. In other words, duplicated
packets can still trigger a flooding alert. This is because the detection of duplication is based on SYN
numbers, which do not change during a flooding event; deduplicating these packets would cause false
negatives, as it would inhibit the flooding detection on duplicate packets.

Set detection counter

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection counter
<value>
Description Set the connection attempts counter, in number of connections.

Parameters • value: The amount of connection attempts to be observed in order to


trigger the detection (default: 100)

Where CLI

To apply It is applied automatically

Set observation interval

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection interval
<value>
Description Set the observation interval, in seconds.

Parameters • value: The time interval during which the connection attempts are
observed, in seconds (default: 10).

Where CLI

To apply It is applied automatically

Set trigger threshold

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection
trigger_threshold <value>
Description Set the trigger threshold.

Parameters • value: The ratio between established connections and connections


attempts, which when it is reached triggers the flood detection (default:
0.1).

Where CLI

To apply It is applied automatically

Set exit threshold

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection
exit_threshold <value>
Description Set the exit threshold.

Parameters • value: The ratio between established connections and connections


attempts, which when it is reached terminates the flood detection
(default: 0.4).

Where CLI

To apply It is applied automatically

For example, with the commands below, the TCP SYN flood detection would trigger when 200
connection attempts are observed during a 15-second observation interval and the ratio between
established connections and connection attempts falls below 0.3. The detection would then terminate
when the ratio returns above 0.5.

conf.user configure vi tcp_syn_flood_detection counter 200


conf.user configure vi tcp_syn_flood_detection interval 15
conf.user configure vi tcp_syn_flood_detection trigger_threshold 0.3
conf.user configure vi tcp_syn_flood_detection exit_threshold 0.5

SIGN:UDP-FLOOD
In this section we will configure the UDP flood detection.
The detection is enabled by default and triggers when a victim receives 20,000 UDP packets per
second for at least 10 seconds.

Enable/disable detection

Product Guardian
Syntax conf.user configure vi udp_flood_detection enabled
[true|false]
Description Enable (option true) or disable (option false) the UDP flood detection.

Where CLI

To apply It is applied automatically

Set detection threshold

Product Guardian
Syntax conf.user configure vi udp_flood_detection
packets_per_second <threshold>
Description Set the UDP flood detection threshold, in packets per second.

Parameters • threshold: The amount of UDP packets per second to be transmitted


to a victim for at least 10 seconds in order to trigger the detection
(default: 20000)

Where CLI

To apply It is applied automatically

For example, we can configure the UDP flood detection to trigger when a victim receives 40,000 UDP
packets per second for at least 10 seconds with the following configuration command:

conf.user configure vi udp_flood_detection packets_per_second 40000



SIGN:NETWORK-SCAN

DDOS Defense
In this section we will configure the detection of a DDOS attack.
The detection is enabled by default; an alert is raised at most every 5 minutes, when more than 20
nodes have been created within one minute.

Set analysis interval

Product Guardian
Syntax conf.user configure vi ddos_defense interval <threshold>
Description Set the analysis interval for the detection.

Parameters • threshold: The analysis interval is measured in minutes. Default: one minute.

Where CLI

To apply In a shell console execute: service n2osids stop

Set max created nodes

Product Guardian
Syntax conf.user configure vi ddos_defense max_created_nodes
<max_nodes>
Description Number of created nodes that, if created in less time than the analysis
interval, will trigger the alert.

Parameters • max_nodes: Number of created nodes that trigger the detection. Default:
20.

Where CLI

To apply In a shell console execute: service n2osids stop

Set alert threshold

Product Guardian
Syntax conf.user configure vi ddos_defense alert_threshold
<threshold>
Description Interval to wait in order to raise an additional alert.

Parameters • threshold: Minutes to wait for another alert to be raised. Default: 5 minutes.

Where CLI

To apply In a shell console execute: service n2osids stop
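For example, with the commands below (the values are illustrative) the DDOS detection would raise an alert when more than 40 nodes are created within a 2-minute analysis interval, with at most one additional alert every 10 minutes. Apply the change by executing service n2osids stop in a shell console.

conf.user configure vi ddos_defense interval 2
conf.user configure vi ddos_defense max_created_nodes 40
conf.user configure vi ddos_defense alert_threshold 10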


TCP Port Scan


In this section we will configure the detection for the TCP Port scan.
The detection is enabled by default and an alert is emitted according to the configuration parameters
described below.

Set attempts threshold

Product Guardian
Syntax conf.user configure vi port_scan_tcp attempts_threshold
<threshold>
Description Set the number of scan attempts that will trigger the alert.

Parameters • threshold: Number of scan attempts that will trigger the alert. Default:
100.

Where CLI

To apply It is applied automatically

Set observation interval

Product Guardian
Syntax conf.user configure vi port_scan_tcp interval <interval>
Description Set the analysis interval for the detection algorithm.

Parameters • interval: Analysis interval in seconds for the detection algorithm. Default: 10 seconds.

Where CLI

To apply It is applied automatically

Set trigger threshold

Product Guardian
Syntax conf.user configure vi port_scan_tcp trigger_threshold
<threshold>
Description Set the trigger threshold for the detection algorithm. An alert is raised only if
the ratio between the number of established connections and total attempts
is smaller than the trigger threshold.

Parameters • threshold: Trigger threshold as described above for the detection algorithm. Default: 0.1.

Where CLI

To apply It is applied automatically

Set out of sequence threshold

Product Guardian
Syntax conf.user configure vi port_scan_tcp
out_of_sequence_threshold_number <threshold>
Description Set the number of out of sync fragments which trigger this feature of the
detection algorithm.

Parameters • threshold: Number of out of sync fragments. Default: 10.

Where CLI

To apply It is applied automatically

Set out of sequence interval

Product Guardian
Syntax conf.user configure vi port_scan_tcp
out_of_sequence_interval <interval>
Description Set the analysis interval of the out of sync recognition feature of the
detection algorithm.

Parameters • interval: Analysis interval in seconds. Default: 10 seconds.

Where CLI

To apply It is applied automatically

Set out of sequence max rate

Product Guardian
Syntax conf.user configure vi port_scan_tcp
out_of_sequence_threshold_max_rate <rate>
Description Set the period of time during which additional alerts due to out of sync
fragments are not raised.

Parameters • rate: Timespan in minutes to mute additional alerts due to out of sync
fragments. Default: 5 minutes.

Where CLI

To apply It is applied automatically

Set ignored port ranges

Product Guardian

Syntax conf.user configure vi port_scan_tcp ignore_ports
<port_ranges>[,<port_ranges>]
Description Set the victims' ports or port ranges which must not participate in the
detection algorithm.

Parameters • port_ranges: ports can be entered as a list of comma separated values and ranges as a pair of ports separated by a dash. Example: 1000,1200-1300,1500. Default: none.

Where CLI

To apply It is applied automatically

For example, we can configure the detection for the TCP Port scan with the following commands:

conf.user configure vi port_scan_tcp attempts_threshold 50
conf.user configure vi port_scan_tcp interval 20
conf.user configure vi port_scan_tcp trigger_threshold 0.2
conf.user configure vi port_scan_tcp out_of_sequence_threshold_number 15
conf.user configure vi port_scan_tcp out_of_sequence_interval 20
conf.user configure vi port_scan_tcp out_of_sequence_threshold_max_rate 10
conf.user configure vi port_scan_tcp ignore_ports 1000,1200-1300,1500

UDP Port Scan


In this section we will configure the detection for the UDP Port scan.
The detection is enabled by default and an alert is emitted according to the configuration parameters
described below.

Set fast threshold

Product Guardian
Syntax conf.user configure vi port_scan_udp fast_threshold
<threshold>
Description Set the number of attempts which will trigger the alert for the fast detection
algorithm.

Parameters • threshold: Attempts triggering the alert for the fast detection algorithm.
Default: 500.

Where CLI

To apply It is applied automatically

Set slow interval

Product Guardian
Syntax conf.user configure vi port_scan_udp slow_interval
<interval>
Description Set the analysis interval for the slow detection algorithm.

Parameters • interval: Analysis interval for the slow detection algorithm. Default: 60
seconds.

Where CLI

To apply It is applied automatically

Set fast interval

Product Guardian
Syntax conf.user configure vi port_scan_udp fast_interval
<interval>
Description Set the analysis interval for the fast detection algorithm.

Parameters • interval: Analysis interval for the fast detection algorithm. Default: 1
second.

Where CLI

To apply It is applied automatically

Set fast different ports threshold

Product Guardian
Syntax conf.user configure vi port_scan_udp
fast_different_ports_threshold <threshold>
Description Set the number of different ports that should be tested by the attacker for the
fast detection algorithm to trigger the alert.

Parameters • threshold: Minimum number of different ports to be tested by the attacker to trigger the alert for the fast detection algorithm. Default: 250.

Where CLI

To apply It is applied automatically

Set unreachable ratio

Product Guardian
Syntax conf.user configure vi port_scan_udp unreachable_ratio
<ratio>
Description The slow detection algorithm will issue an alert only if the ratio between the
number of unreachable requests and the total requests is greater than this
value.

Parameters • ratio: Critical ratio for the slow detection algorithm to trigger an
alert. An alert is raised if the ratio between the number of unreachable
requests and the total requests is greater than the critical ratio. Default:
0.1.

Where CLI

To apply It is applied automatically



For example, we can configure the detection for the UDP Port scan with the following commands:

conf.user configure vi port_scan_udp slow_threshold 200
conf.user configure vi port_scan_udp slow_interval 30
conf.user configure vi port_scan_udp fast_threshold 400
conf.user configure vi port_scan_udp fast_different_ports_threshold 150
conf.user configure vi port_scan_udp fast_interval 3
conf.user configure vi port_scan_udp unreachable_ratio 0.2

Ping Sweep
In this section we will configure the detection for the ICMP/Ping Sweep scan.
The detection is enabled by default and an alert is emitted when more than 100 requests are issued in
less than 5 seconds with a total number of recorded victims equal to 100.

Set request number

Product Guardian
Syntax conf.user configure vi ping_sweep max_requests
<threshold>
Description Set the number of requests that will trigger the alert.

Parameters • threshold: Number of request that will raise the alert. Default: 100.

Where CLI

To apply It is applied automatically

Set interval

Product Guardian
Syntax conf.user configure vi ping_sweep interval <interval>
Description Set the interval during which the maximum number of requests should be
issued in order to trigger the alert.

Parameters • interval: Interval in seconds for the maximum requests to be issued. Default: 5 seconds.

Where CLI

To apply It is applied automatically

For example, we can configure the detection for the ICMP/Ping Sweep scan with an analysis interval of
10 seconds for a threshold of 200 requests with 150 victims recorded with the following commands:

conf.user configure vi ping_sweep max_requests 200
conf.user configure vi ping_sweep interval 10

Treck Stack
In this section we will configure the detection for the Treck TCP/IP Fingerprint scan via ICMP 165.
The detection is enabled by default and an alert is emitted at most once every 20 minutes.

Set alert interval

Product Guardian
Syntax conf.user configure vi treck_stack once_every
<threshold>
Description Set the minimum interval between two raised alerts, in minutes.

Parameters • threshold: Minutes to wait for another alert to be raised. Default: 20 minutes.

Where CLI

To apply It is applied automatically

For example, we can configure the detection for the Treck TCP/IP Fingerprint Scan via ICMP 165 with
an interval between two emitted alerts of one hour (60 minutes) with the following command:

conf.user configure vi treck_stack once_every 60



Configuring Incidents

INCIDENT:PORT-SCAN

Port Scan Incident


In this section we will configure the parameters of a Port Scan Incident.
The detection is enabled by default and an incident is raised when more than 6 correlated alerts are
triggered, independently of their creation time.
For example, we can configure the parameters for the Port Scan Incident with the following command,
where we identify the minimum number of alerts for the incident to be triggered, and the maximum time
interval in milliseconds in which they need to occur:

conf.user configure alerts incidents portscan {"min_alerts": 25, "max_time_interval": 1500}

Configuring the port scan incident

Product Guardian
Syntax conf.user configure alerts incidents portscan <json_obj>
Description Configure the port scan incident by providing the configuration in a JSON
object.

Parameters • json_obj: JSON object containing the keys 'min_alerts' and 'max_time_interval', which are respectively the minimum number of alerts which trigger the detection and the maximum time interval in which they need to occur.

Where CLI

To apply In a shell console execute: service n2osalert stop



Configuring nodes

Set node label

Product Guardian
Syntax set ids configure vi node <ip> label <label>
Syntax erase ids configure vi node <ip> label
Description Set the label of a node; the label will appear in the Graph, in the Nodes, and in
Process > Variables

Parameters • ip: The IP address of the node


• label: The label that will be displayed in the user interface

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
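For example, assuming a node with the illustrative address 192.168.1.10, the following command would label it PLC-01 so that the name appears throughout the user interface:

set ids configure vi node 192.168.1.10 label PLC-01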

Set default live traffic label formatting

Product Guardian
Syntax conf.user configure vi default_node_live_label
<operation>[:<param>][,<operation>[:<param>]]
Description The default formatting operation(s) applied to labels coming from live traffic.
More operations can be applied sequentially. See also the "Set protocol-
specific live traffic label formatting" configuration for details

Parameters • operation: The operation to apply


• param: The specific operation parameter

Where CLI

To apply It is applied automatically

Set protocol-specific live traffic label formatting

Product Guardian
Syntax conf.user configure vi node_live_label <protocol>
<operation>[:<param>][,<operation>[:<param>]]
Description Protocol-specific formatting operation(s) applied to labels coming from live
traffic. More operations can be applied sequentially.

Parameters • protocol: The name of the protocol
• operation: The operation to apply. There are several operation categories:
  • Invalid character replace operations (the default param is ' '):
    • utf8: only printable utf8 characters
    • ascii: only printable ascii characters
    • alnum: only alphanumeric characters a-zA-Z0-9
    • alnum_underscore: only alnum + underscore
  • String operations:
    • prefix: it keeps only the prefix of a string identified by param. The default param is '.'
  • Validation operations:
    • strict: checks if the label has changed from its first value or, if present, from the last mark operation. If changed, the label will be set to empty
    • mark: resets the strict history to the current operation (the label will not be changed)
• param: The specific operation parameter as above.
For example:
• utf8:- applied to the input label "lab¿1 ¿test¿." gives "lab-1 -test-." (the utf8 operation replaces not allowed characters with the set parameter '-')
• alnum applied to "test1" gives "test1" (unchanged because all characters are valid)
• alnum applied to "lab,1" gives "lab 1" (the alnum operation replaces a not allowed character)
• alnum,strict applied to "lab,1" gives an empty label (the strict operation detects a change between the initial input 'lab,1' and the alnum output 'lab 1', so the label is cleared)
• alnum,mark,utf8,strict applied to "lab,1" gives "lab 1" (the mark operation sets the alnum output 'lab 1' as the reference for the following strict operation; the utf8 operation has no effect, and strict detects no changes so has no effect either)

Where CLI

To apply It is applied automatically
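For example, the commands below (the protocol name and the operation choices are illustrative) keep only printable utf8 characters in live traffic labels by default, while labels coming from modbus traffic are restricted to alphanumeric characters and cleared if the alnum operation changes them:

conf.user configure vi default_node_live_label utf8
conf.user configure vi node_live_label modbus alnum,strict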

Set node Device ID with priority

Product Guardian
Syntax ids configure vi node <ip> device_id_with_priority
<device_id>;<priority>
Description Adds the Device ID to the set of node Device IDs. The final Device ID, used
for node grouping under Assets, is the one with the highest priority

Parameters • ip: The IP address of the node
• device_id: The device id
• priority: The priority of the Device ID. If missing, it will be set to the lowest priority value

Where CLI

To apply It is applied automatically

Override node Device ID

Product Guardian
Syntax ids configure vi node <ip> device_id_override
<device_id>
Description Adds the Device ID to the set of node Device IDs, giving it the maximum
priority value. This Device ID will be used for node grouping under Assets

Parameters • ip: The IP address of the node


• device_id: The device id (with the maximum priority)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Enable or disable node

Product Guardian
Syntax ids configure vi node <ip> state [enabled|disabled]
Description This directive permits disabling a node. This setting has an effect in the graph:
a disabled node will not be displayed.

Parameters • ip: The IP address of the node

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Enable or disable same ip node separation

Product Guardian
Syntax conf.user configure check_multiple_macs_same_ip enable
[true|false]
Description This directive permits enabling the separation of L3 nodes with the same IP
but different MAC addresses. The nodes with the desired IP addresses will be
treated as L2 nodes and appear as distinct assets. If the nodes already exist
as L3 nodes when the configuration is applied, they will be deleted
and the new logic will start to execute with empty statistics.
The value true enables the feature, false disables it.

Where CLI

To apply In a shell console execute: service n2osids stop

Configure same ip node separation

Product Guardian
Syntax conf.user configure check_multiple_macs_same_ip ip
<ip_address>
Description Selects the IP of the nodes that should be separated, as per the strategy
described in the previous box.

Parameters • ip_address: The IP of the node to be configured

Where CLI

To apply In a shell console execute: service n2osids stop
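For example, the commands below (the address is illustrative) enable the feature and separate the nodes sharing the address 192.168.0.50; apply the change by executing service n2osids stop in a shell console:

conf.user configure check_multiple_macs_same_ip enable true
conf.user configure check_multiple_macs_same_ip ip 192.168.0.50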

Delete node

Product Guardian
Syntax ids configure vi node <ip> :delete
Description Delete a node from the environment

Parameters • ip: The IP of the node to delete

Where CLI

To apply It is applied automatically

Define a cluster

Product Guardian
Syntax conf.user configure vi cluster <ip> <name>
Description This command permits defining a High Availability cluster of observed
nodes. In particular, it permits accelerating the learning phase by joining
the learning data of two sibling nodes, and grouping nodes by cluster in the
graph.

Parameters • ip: The IP of the node


• name: The name of the cluster

Where CLI

To apply It is applied automatically
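For example, the commands below (addresses and cluster name are illustrative) group two redundant servers into a cluster named scada-ha:

conf.user configure vi cluster 10.0.0.11 scada-ha
conf.user configure vi cluster 10.0.0.12 scada-ha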



Configuring assets

Hide built-in asset types

Product Guardian
Syntax conf.user configure vi hide_built_in_asset_types true
Description Hides built-in asset types visible from the dropdown in the asset
configuration modal

Where CLI

To apply It is applied automatically



Configuring links

Set link last activity check

Product Guardian
Syntax set vi link <ip1> <ip2> <protocol> :check_last_activity
<seconds>
Syntax erase vi link <ip1> <ip2>
<protocol> :check_last_activity :delete
Description Set the last activity check on a link; an alert will be raised if the link remains
inactive for more than the specified seconds

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol
• seconds: The communication timeout

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
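For example, the command below (addresses and protocol are illustrative) raises an alert if the modbus link between 10.0.0.5 and 10.0.0.9 remains inactive for more than 300 seconds:

set vi link 10.0.0.5 10.0.0.9 modbus :check_last_activity 300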

Set link persistency check

Product Guardian
Syntax vi link <ip1> <ip2> <protocol> :is_persistent [true|
false]
Description Set the persistency check on a link; if a new handshake is detected, an alert
will be raised

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set link alert on SYN

Product Guardian
Syntax vi link <ip1> <ip2> <protocol> :alert_on_syn [true|
false]
Description Raise an alert when a TCP SYN packet is detected on this link

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set link to track availability

Product Guardian
Syntax set vi link <ip1> <ip2> <protocol> :track_availability
<seconds>
Syntax erase vi link <ip1> <ip2>
<protocol> :track_availability :delete
Description Notify the link events when the link communication is interrupted or
resumed.

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol
• seconds: Interval for checking whether the link is available or not

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Delete link

Product Guardian
Syntax ids configure vi link <ip1> <ip2> :delete
Description Delete a link

Parameters • ip1, ip2: The IPs identifying the link

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Delete protocol

Product Guardian
Syntax ids configure vi link <ip1> <ip2> <protocol> :delete
Description Delete a protocol from a link

Parameters • ip1, ip2: The IPs identifying the link


• protocol: The protocol of the link to delete

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Learn function code

Product Guardian
Learn ids configure vi link <ip1> <ip2> <protocol> fc
<func_code>

Learn or Unlearn ids configure vi link <ip1> <ip2> <protocol> fc
<func_code> is_learned [true|false]
Description Learn or unlearn a function code from a protocol

Parameters • ip1, ip2: The IPs identifying the link


• protocol: The protocol of the link
• func_code: The function code to be learned or unlearned

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Delete function code

Product Guardian
Syntax ids configure vi link <ip1> <ip2> <protocol> fc
<func_code> :delete
Description Delete a function code from a protocol

Parameters • ip1, ip2: The IPs identifying the link


• protocol: The protocol of the link
• func_code: The function code to be deleted

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Enable link_events generation

Product Guardian
Syntax conf.user configure vi link_events [enabled|disabled]
Description Enable or disable the generation of link_events records. This feature can
have an impact on performance; enable it carefully

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Disabling the persistence of links

Product Guardian
Syntax conf.user configure vi persistence skip_links true
Description With this configuration rule you can disable the persistence of links, thus
saving disk space in cases with a large number of links.

Where CLI

To apply It is applied automatically



Enable link ports collection for the specified set of protocols

Product Guardian
Syntax conf.user configure vi enable_link_ports
<protocol_name>[,<protocol_name>]
Description For enabled protocols, the set of source and destination ports found in the
underlying sessions is collected and shown as link attributes (from_ports
and to_ports). By default this is enabled only on unrecognized protocols (i.e.
links named other). If the command is used, the full list of protocols to be
enabled must be specified (including other too, if applicable).

Parameters • protocol_name: The name of the protocol to enable

Where CLI

To apply It is applied automatically

Set max number of collected link ports

Product Guardian
Syntax conf.user configure vi max_link_ports
<max_collected_ports>
Description When the ports collection is enabled for links, sets the maximum number of
collected ports for each link.

Parameters • max_collected_ports: The maximum number of collected ports for each link. Default 32. Max 128.

Where CLI

To apply It is applied automatically
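For example, the commands below (the protocol list is illustrative) enable the ports collection on modbus links, keep it active on unrecognized protocols, and limit the collection to 64 ports per link:

conf.user configure vi enable_link_ports other,modbus
conf.user configure vi max_link_ports 64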



Configuring variables

Enable or disable default variable history

Product Guardian
Syntax ids configure vi variable default history [enabled|
disabled]
Description Set whether the variable history is enabled; when not set, it is disabled. The
amount of history maintained can be configured in the "Variable history
retention" section in Configuring retention on page 428
Note: Enabling this functionality can negatively affect Guardian's
performance, depending on the amount of variables and the update rate.

Where CLI

To apply It is applied automatically

Enable or disable variable history

Product Guardian
Syntax ids configure vi variable <var_key> history [enabled|
disabled]
Description Define the amount of samples shown in the graphical history of a variable.
Set if the variable history is enabled or not, when not set it's disabled.
The amount of the history maintained can be configured in "Variable history
retention" section in Configuring retention on page 428
Note: Enabling this functionality can negatively affect Guardian's
performance, depending on the amount of variables and the update rate.

Parameters • var_key: The variable identifier

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable label

Product Guardian
Syntax ids configure vi variable <var_key> label <label>
Description Set the label for a variable, the label will appear in the Process sections

Parameters • var_key: The variable identifier


• label: The label displayed in the user interface

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable unit of measure

Product Guardian
Syntax ids configure vi variable <var_key> unit <unit>
Description Set a unit of measure on a variable.

Parameters • var_key: The variable identifier


• unit: The unit of measure displayed in the user interface

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable offset

Product Guardian
Syntax ids configure vi variable <var_key> offset <offset>
Description The offset of the variable that will be used to map the 0 value of the variable.

Parameters • var_key: The variable identifier


• offset: The offset value used to calculate the final value of the variable

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable scale

Product Guardian
Syntax ids configure vi variable <var_key> scale <scale>
Description The scale of the variable that is used to define the full range of the variable.

Parameters • var_key: The variable identifier


• scale: the scale value used to calculate the final value of the variable

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable last update check

Product Guardian
Syntax set vi variable <var_key> :check_last_update <seconds>
Syntax remove vi variable <var_key> :check_last_update :delete
Description Set the last update check on a variable; if the variable value is not updated
for more than the specified seconds, an alert is raised

Parameters • var_key: The variable identifier


• seconds: The timeout after which a stale variable alert will be raised

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable quality check

Product Guardian
Syntax set vi variable <var_key> :check_quality <seconds>
Syntax remove vi variable <var_key> :check_quality :delete
Description Set the quality check on a variable; if the value quality remains invalid for
more than the specified seconds, an alert is raised

Parameters • var_key: The variable identifier


• seconds: The maximum amount of consecutive seconds the variable
can have an invalid quality

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable to alert on quality

Product Guardian
Syntax set vi variable <var_key> :alert_on_quality <quality>
Syntax remove vi variable <var_key> :alert_on_quality :delete
Description Raise an alert when the variable has one of the specified qualities. Possible
values are: invalid, not topical, blocked, substituted, overflow, reserved,
questionable, out of range, bad reference, oscillatory, failure, inconsistent,
inaccurate, test, alarm. Multiple values can be separated by comma.

Parameters • var_key: The variable identifier


• quality: The alert quality

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set a variable critical state

Product Guardian
Syntax conf.user configure cs variable <id> <var_key> [<|>|=]
<value>

Description Define a new custom critical state on a single variable that will raise on
violation of defined range.
For instance, if the > operator is specified, the variable will have to be higher
than value to trigger the critical state.

Parameters • id: A unique ID for this critical state


• var_key: The variable identifier
• value: The variable value to check for

Where CLI

To apply It is applied automatically

Set a multiple critical state

Product Guardian
Syntax conf.user configure cs multi <id> variable <ci>
<var_key> [<|>|=] <value>[ ^ variable <ci> <var_key> [<|
>|=] <value>]
Description Creates a multi-valued critical state, that is, an expression of "variable critical
states", described above. The syntax is an AND (^) expression of
single-variable critical states.

Parameters • id: A unique ID for this critical state


• ci: Enumerate the variables c1, c2, c3, ..., etc
• var_key: The variable identifier
• value: The variable value to check for

Where CLI

To apply It is applied automatically
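For example, assuming two hypothetical variable keys var1 and var2 (replace them with real variable identifiers from the Process section), the commands below define a critical state high_level that is raised when var1 exceeds 90, and a multi-valued critical state combined_state that is raised only when var1 exceeds 90 and var2 drops below 10:

conf.user configure cs variable high_level var1 > 90
conf.user configure cs multi combined_state variable c1 var1 > 90 ^ variable c2 var2 < 10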

Control variables extraction at the protocol level

Product Guardian
Syntax conf.user configure probe protocol <name>
variables_extraction [disabled|enabled|advanced|global]
Description It allows the application of a variables extraction policy different from the
global policy on a protocol basis. Note that if the global policy is set to
Disabled, it prevails over any protocol-specific setting; otherwise, the
protocol-specific policy prevails.
Choices are whether variables extraction is disabled, enabled, enabled
with advanced heuristics (advanced) or if it should inherit the global policy
(global)

Parameters • name: The name of the target protocol

Where CLI

To apply It is applied automatically

Control variables extraction at the global level for all zones

Product Guardian

Syntax conf.user configure vi variables_extraction [disabled|
enabled|advanced|global]
Description Same as for the protocol level variables extraction, except it sets the policy
for the global level.
Choices are whether variables extraction is disabled, enabled, enabled
with advanced heuristics (advanced) or if it should inherit the global policy
(global)

Where CLI

To apply It is applied automatically

Control variables extraction at the global level for specific zones

Product Guardian
Syntax conf.user configure vi variables_extraction [disabled|
enabled|advanced|global] <zones>
Description Same as for the protocol level variables extraction, except it sets the policy
for the global level for the specified zones.
Choices are whether variables extraction is disabled, enabled, enabled
with advanced heuristics (advanced) or if it should inherit the global policy
(global)

Parameters • zones: Names of the zones for which the extraction should be enabled.
If unspecified, the extraction is enabled for all zones. Values
are separated by a comma; for example: [plant1,plant2] or
[zone1,zone2,zone3]. Brackets are required

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
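For example, the command below (zone names are illustrative) enables variables extraction with advanced heuristics only for the zones plant1 and plant2:

conf.user configure vi variables_extraction advanced [plant1,plant2]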

Configuring protocols

Configure iec104s encryption key

Product Guardian
Syntax conf.user configure probe protocol iec104s tls
private_key <ip> <location>
Description Add a private key associated to the device running iec104s. For more
information, see Configuring IEC-62351-3 on page 420

Parameters • ip: The IP of the device


• location: The absolute location of the key

Where CLI

To apply In a shell console execute: service n2osids stop

Set CA size for iec101 protocol decoder

Product Guardian
Syntax conf.user configure probe protocol iec101 ca_size <size>
Description iec101 CA size can vary across implementations, with this configuration rule
the user can customize the setting for its own environment

Parameters • size: The size in bytes of the CA

Where CLI

To apply It is applied automatically

Set LA size for iec101 protocol decoder

Product Guardian
Syntax conf.user configure probe protocol iec101 la_size <size>
Description iec101 LA size can vary across implementations, with this configuration rule
the user can customize the setting for its own environment

Parameters • size: The size in bytes of the LA

Where CLI

To apply It is applied automatically

Set IOA size for iec101 protocol decoder

Product Guardian
Syntax conf.user configure probe protocol iec101 ioa_size
<size>
Description iec101 IOA size can vary across implementations, with this configuration
rule the user can customize the setting for its own environment

Parameters • size: The size in bytes of the IOA



Where CLI

To apply It is applied automatically
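For example, an environment using 2-byte common addresses, 1-byte link addresses and 3-byte information object addresses (illustrative values, to be matched to the actual field configuration) could be configured as follows:

conf.user configure probe protocol iec101 ca_size 2
conf.user configure probe protocol iec101 la_size 1
conf.user configure probe protocol iec101 ioa_size 3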

Set a dictionary file

Product Guardian
Syntax conf.user configure probe protocol <protocol> dictionary
<dictionary_file_name>
Description Based on the dictionary file set with this command, friendly names are
associated to the extracted variables, for the specific protocol in scope.

Parameters • protocol: The protocol can be can-bus or mvb


• dictionary_file_name: The path for the dictionary file

Where CLI

To apply It is applied automatically

Set an arbitrary amount of bytes to skip before decoding iec101 protocol

Product Guardian
Syntax conf.user configure probe protocol iec101 bytes_to_skip
<amount>
Description Based on the hardware configuration iec101 can be prefixed with a fixed
amount of bytes, with this setting Guardian can be adapted to the peculiarity
of the environment.

Parameters • amount: The amount of bytes to skip

Where CLI

To apply It is applied automatically

Enable the Red Electrica Espanola semantic for iec102 protocol

Product Guardian
Syntax conf.user configure probe protocol iec102 ree [enabled|
disabled]
Description There is a standard from Red Eléctrica Española which changes the
semantics of the iec102 protocol; after enabling this setting (choosing option
enabled), the iec102 protocol decoder will be compliant with the REE
standard.

Where CLI

To apply It is applied automatically

Set the subnet in which the iec102 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol iec102 subnet
<subnet>

Description The detection of iec102 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific subnet

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically

Enable iec102 on the specified port

Product Guardian
Syntax conf.user configure probe protocol iec102 port <port>
Description The detection of iec102 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific port

Parameters • port: The TCP port

Where CLI

To apply It is applied automatically
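For example, the commands below (subnet and port are illustrative) restrict the iec102 detection to the subnet 10.1.2.0/24 and enable it on TCP port 20102:

conf.user configure probe protocol iec102 subnet 10.1.2.0/24
conf.user configure probe protocol iec102 port 20102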

Set the subnet in which the iec103 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol iec103 subnet
<subnet>
Description The detection of iec103 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific subnet

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically

Enable iec103 on the specified port

Product Guardian
Syntax conf.user configure probe protocol iec103 port <port>
Description The detection of iec103 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific port

Parameters • port: The TCP port

Where CLI

To apply It is applied automatically

Force iec101 semantics inside iec103 protocol

Product Guardian

Syntax conf.user configure probe protocol iec103
force_iec101_semantics true
Description Forces change of semantics for iec103 protocol to use ASDUs of iec101

Where CLI

To apply It is applied automatically

Allow to recognize as iec103 very fragmented sessions

Product Guardian
Syntax conf.user configure probe protocol iec103
accept_on_fragmented true
Description Allow to accept as iec103 those packets that are always incomplete,
thus allowing situations where the protocol is heavily fragmented to be
recognized.

Where CLI

To apply It is applied automatically

Enable the detection of plain text passwords in HTTP payloads

Product Guardian
Syntax conf.user configure probe protocol http
detect_uri_passwords [true|false]
Description Guardian is able to detect if plain text passwords and login credentials
are present in HTTP payloads, such as strings containing ftp://
user:password@example.com. The feature is disabled by default.
Choose true to enable the feature and false to disable it.

Where CLI

To apply It is applied automatically

Set the subnet in which the tg102 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol tg102 subnet <subnet>
Description The detection of tg102 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific subnet

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically

Set the port range in which the tg102 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol tg102 port_range
<src_port>-<dst_port>

Description The detection of tg102 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific port range

Parameters • src_port: The starting port of the range


• dst_port: The ending port of the range

Where CLI

To apply It is applied automatically

Set the subnet in which the tg800 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol tg800 subnet <subnet>
Description The detection of tg800 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific subnet

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically

Set the port range in which the tg800 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol tg800 port_range
<src_port>-<dst_port>
Description The detection of tg800 can lead to false positives; this rule gives the user the
possibility to enable the detection on a specific port range

Parameters • src_port: The starting port of the range


• dst_port: The ending port of the range

Where CLI

To apply It is applied automatically

Disable variable extraction for a Siemens S7 area and type

Product Guardian
Syntax conf.user configure probe protocol s7 exclude <area>
<type>
Description For performance reasons, or to reduce noise, it's possible to selectively
exclude variables extraction for some areas and types.

Parameters • area: The area, some examples are: DB, DI, M, Q


• type: The type of the variable, some examples are: INT, REAL, BYTE

Where CLI

To apply It is applied automatically



Enable the full TLS inspection mode

Product Guardian
Syntax conf.user configure probe tls-inspection enable [true|
false]
Description TLS inspection is normally performed only on https and iec104s traffic.
Enabling the full inspection mode (choosing the option true) provides the
following additional features:
• TLS traffic found on any TCP port is inspected
• an alert is raised when TLS-1.0 is used (when this mode is disabled, this
is an https only check)
• an alert is raised on expired certificates
• an alert is raised on weak cipher suites
• session ID, cipher suite and certificates are extracted into the relative link
events

Where CLI

To apply It is applied automatically

Enable or disable the persistence of the connections for Ethernet/IP Implicit

Product Guardian
Syntax conf.user configure probe protocol ethernetip-implicit
persist-connection [true|false]
Description The Ethernet/IP Implicit decoder of Guardian is able to detect handshakes
that are then used to decode variables. In some scenarios these
handshakes are not common but it's very important to persist them so that
Guardian can continue to decode variables after a reboot or an upgrade.
By enabling this option (choosing option true), Guardian will store on disk
the data needed to autonomously reproduce the handshake phase after a
reboot.

Where CLI

To apply It is applied automatically

Enable or disable fragmented packets for modbus protocol

Product Guardian
Syntax conf.user configure probe protocol modbus
enable_full_fragmentation [true|false]
Description Modbus protocol is usually not fragmented, so this option is by default
disabled (option false). If fragmented modbus packets can be present in the
network, then full fragmentation can be enabled (choosing option true) to
avoid generation of unexpected alerts.

Where CLI

To apply It is applied automatically

Import the ge-egd produced data XML file for variables extraction

Product Guardian

Syntax conf.user configure probe protocol ge-egd produced-data-xml
<path>
Description The ge-egd protocol can extract process variables only after the XML file
describing the produced data for the involved nodes is imported. Multiple
imports are allowed as long as the XML files do not provide overlapping
information for any producer node.

Parameters • path: The path of the produced data XML file to import

Where CLI

To apply It is applied automatically

Disable file extraction for SMB protocol

Product Guardian
Syntax conf.user configure probe protocol smb file_extraction
false
Description The SMB protocol decoder is able to extract files and analyze them for
malware in a sandbox. If not needed, the user can disable such feature and
improve the performance of the system especially in environments where
SMB file transfer is heavily used.

Where CLI

To apply It is applied automatically

Activate the extraction of GE asset information from modbus registers

Product Guardian
Syntax conf.user configure probe protocol modbus
ge_asset_info_from_registers true
Description Some General Electric devices send asset information (product name,
firmware version, serial number, label, and FPGA version) encoded in
register values with the Modbus protocol. By enabling this setting, Guardian
is instructed to extract this data and enrich the corresponding nodes with it.
This data is also used to produce CPEs for the corresponding devices.

Where CLI

To apply It is applied automatically



Configuring va

Vulnerability Assessment configuration

Configure how Threat Intelligence contents are handled

Product Guardian
Syntax conf.user configure va contents <json_value>
Description This command allows Threat Intelligence contents to be either completely
disabled, or selectively loaded. The JSON object can have the following
attributes:
• load_contents - this can be true/false to enable/disable the loading of
contents;
• loaded_content_types - this is a JSON array of contents to be
loaded.
Contents available are:
• cpe_items
• microsoft_hotfixes
• vulnass
As an example, the following command will completely disable contents
loading:
conf.user configure va contents { "load_contents":
false }
As a further example, the following command will allow only cpe_items to
be loaded:
conf.user configure va contents
{ "loaded_content_types": [ "cpe_items" ] }

Parameters • json_value: A JSON object to configure how contents are loaded

Where CLI

To apply In a shell console execute: service n2osva stop

Enable the legacy index

Product Guardian
Syntax conf.user configure va use_legacy_index <flag>
Description It is recommended to use the legacy index when memory constraints are
important. The switch to the legacy index is performed automatically if
the memory of the system is less than 5 GB; the user can anyhow force the
switch later on.

Parameters • flag: The legacy index is disabled by default

Where CLI

To apply It is applied automatically



Disable the hotfix resolution capabilities

Product Guardian
Syntax conf.user configure va use_hotfix_resolution <flag>
Description Please consider that disabling the hotfix resolution means that CVEs for
Microsoft Windows machines will not be automatically closed through Smart
Polling; as a consequence, Guardian might assign a large number of
obsolete CVEs to those nodes.

Parameters • flag: Hotfix resolution is enabled by default

Where CLI

To apply It is applied automatically

Enable hotfixes resolution

Product Guardian
Syntax conf.user configure va use_legacy_hotfixes_calculation
<flag>
Description Please consider that when this flag is set to true, hotfixes are loaded and
used to resolve CVEs; when it is set to false, hotfixes are loaded by the
external cpe2cve service and resolution is done by retrieving data from
that external service.

Parameters • flag: Hotfix calculation is enabled by default

Where CLI

To apply It is applied automatically

Enable hotfixes management

Product Guardian
Syntax conf.user configure va hotfixes_enabled <flag>
Description Please consider that when this flag is set to true, hotfixes are loaded and
used to set the CVEs status; when it is set to false, hotfixes are neither
loaded nor used by the CVE calculation.

Parameters • flag: Hotfix management enabled by default

Where CLI

To apply It is applied automatically

Disable the CPE computation for a specific node

Product Guardian
Syntax conf.user configure va cpe disable <node_id> [true|
false]
Description Please consider that, when this command is used, the vulnerabilities
assessment engine is completely disabled and no CVEs will be assigned to
the nodes.

Parameters • node_id: Node ID of the node targeting the rule

Where CLI

To apply It is applied automatically

Configure CVE matching

Product Guardian
Syntax conf.user configure va cve enable [true|false|
if_not_sync]
Description By default, the sensors only match CVEs if they are not connected to
an upstream (i.e. a CMC or Vantage). The CVE matching will happen
upstream. This behavior can be configured using this configuration line,
where 'true' forces the CVE matching even if the sensor is connected
upstream, 'false' disables it in any case, and 'if_not_sync' restores the
default behavior.

Where CLI

To apply In a shell console execute: service n2osva stop

Disable End Of Life CPEs calculation

Product Guardian
Syntax conf.user configure va use_eol_cpe_calculation false
Description By default, when the calculation of CVEs associated to CPEs is performed,
CPEs that refer to products that have reached End Of Life are not taken into
account. To disable this behavior, use this configuration.

Where CLI

To apply In a shell console execute: service n2osva stop



Customizing node identifier generation


All the entities that communicate in a network are called nodes and a Guardian assigns to each node
a unique identifier, or NodeID in short. Generally, the NodeID is just an ip address (or a mac address),
but in some special network topologies, extra information must be included in a NodeID to further
differentiate nodes.
Note: NodeIDs generated with different settings will cause inconsistencies and should not coexist.
These options should be manually set at sensor deploy time or on a Guardian with a clean
configuration.

Include VLAN number in NodeID


Nodes can have their NodeID "decorated" with the VLAN ID of their zone

Product Guardian
Syntax nodeid_factory zone
Description Nodes included in a zone, which has a non-zero VLAN id, will get a NodeID
of the form ip@vlan.

Include Remote Collector provenance in NodeID


Packets forwarded by Remote Collectors carry a special "provenance" attribute that the Guardian
uses to track precisely where the traffic comes from. The configuration directive nodeid_factory
include_capture by default will use a standard NodeID for nodes seen by local capture devices,
and append the suffix _from:... to nodes appearing in remotely captured traffic. The suffix will in
fact display the packet provenance, either the ip address of the Remote Collector or optionally its
site. include_capture will be used in addition to zone only if the configuration contains both
nodeid_factory zone and nodeid_factory include_capture in this order.

Product Guardian
Syntax nodeid_factory include_capture [local-traffic-tag]
[format-string]
Description Enable decoration of NodeIDs with packet provenance information.
Parameters • the optional local-traffic-tag is the provenance name for locally
captured traffic: leave empty or use no_localhost to disable NodeID
decoration on local traffic.
• the format-string is the template for decorating remotely captured
NodeIDs. A pair of curly braces {} will be expanded to the actual
provenance. The default format is "_from:{}"

Note The default provenance of a remotely captured packet is the ip of the
Remote Collector. Alternatively, a Guardian can use the site of the
Remote Collector (and fall back to the ip, when the site is undefined), by
adding the directive remote_capture_forward_packet_src true to
the Guardian configuration.
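For example, the directives below (the format string shown is the documented default and is illustrative) decorate NodeIDs with the zone VLAN id and append the Remote Collector provenance to remotely captured nodes, while leaving locally captured nodes undecorated:

nodeid_factory zone
nodeid_factory include_capture no_localhost "_from:{}"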

Configuring decryption
The following sections describe the configuration of Guardian's decryption capabilities for links. For
more decryption details beyond the scope of this manual, contact Nozomi Networks.

IEC 60870-5-7 / 62351-3/5 encrypted links


IEC TC57 (POWER SYSTEMS management and associated information exchange) develops the
standards 60870 and 62351. IEC 60870 part 5 (by WG3) describes systems used for telecontrol. IEC
62351 (by WG15) handles the security of TC 57 series.
IEC TC57 WG15 recommends the combination of IEC 62351-3 and 5 to secure IEC 60870-5-104 links:
• IEC 62351-3 is a TLS profile to secure power systems related communication.
• IEC 62351-5 is an application security protocol applicable to IEC 60870-5-101, 104, and derivatives.
Its implementation in terms of ASDUs (i.e., real encapsulation) is outlined in IEC 60870-5-7.
In order to decrypt IEC 62351-3 (TLS) traffic, you must meet these conditions:
• The private key for each TLS server (e.g. RTU, PLC) must be available; it is used to derive session
keys.
• All the equipment where decryption is needed must operate using the
TLS_RSA_WITH_AES_128_CBC_SHA (0x00002f) cipher suite. Often, this step is accomplished by
forcing either the client or the server to confine itself to that specific cipher suite.

Configuring IEC-62351-3
The following steps assume we're decoding the communication of a TLS server with the address
192.168.1.26.
1. Upload the TLS server’s private key to /data/cfg. The file name must match the server's address.
In our case, the file must be named 192.168.1.26.key.
Your key should be similar to the following:

2. In Guardian's Features Control Panel, enable link events; this provides visibility to the TLS decoded
handshakes; for example:

3. Specify the key file's location by defining it in the CLI. To continue our example, we would use the
following string:

conf.user configure probe protocol iec104s tls private_key
192.168.1.26 /data/cfg/192.168.1.26.key

4. Repeat these steps for each applicable TLS server key.


5. Run the following command in a shell console:

service n2osids stop



Configuring trace

Trace size and timeout


A trace is a sequence of packets saved to disk in the PCAP file format. The number of packets in
a trace is fixed: when a trace of N packets is triggered, Guardian starts by writing to disk the N/2
packets that were sniffed before the trace was triggered, then it tries to save another N/2 packets
and finalizes the write operation; at this point the trace can be downloaded. To avoid a trace being
pending for too long there is also a timeout: when the time expires, the trace is saved even if the
desired number of packets has not been reached.
Trace files are stored in the directory /data/traces, which employs disk based storage. To
improve performance, though, in machines with larger memory configurations this directory is backed
by RAM based storage.

Figure 235: A schematic illustration of the trace saving process

Set max trace packets

Product Guardian
Syntax conf.user configure trace trace_size <size>
Description The maximum number of packets that will be stored in the trace file.

Parameters • size: Default value 5000

Where CLI

To apply It is applied automatically

Set trace request timeout

Product Guardian
Syntax conf.user configure trace trace_request_timeout
<seconds>
Description The time in seconds after which the trace will be finalized, even if the
trace_size parameter is not fulfilled

Parameters • seconds: Default value 60

Where CLI

To apply It is applied automatically
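For example, the commands below (values are illustrative) raise the trace size to 10000 packets and extend the finalization timeout to 120 seconds:

conf.user configure trace trace_size 10000
conf.user configure trace trace_request_timeout 120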



Set max pcaps to retain

Product Guardian
Syntax conf.user configure trace max_pcaps_to_retain <value>
Description The maximum number of PCAP files to keep on disk; when this number is
exceeded, the oldest traces will be deleted. Both automatic alert traces and
user-requested traces are included. This is a runtime machine setting used
for self protection; it prevails over the retention settings described in the
Configuring retention section

Parameters • value: Default value 100000

Where CLI

To apply It is applied automatically

Set minimum free disk percentage

Product Guardian
Syntax conf.user configure trace min_disk_free <percent>
Description The minimum percentage of free disk space below which the oldest traces will
be deleted. If the traces directory is memory backed, this configuration cannot
be overridden and the default value will always be used.

Parameters • percent: Default value 10 if traces storage is disk backed, 5 if memory backed. Enter without % sign

Where CLI

To apply It is applied automatically

Set maximum occupied space

Product Guardian
Syntax conf.user configure retention trace_request
occupied_space <max_occupied_bytes>
Description The maximum traces occupation on disk in bytes. If the traces directory is
memory backed, this configuration cannot be overridden and the default
value will always be used.

Parameters • max_occupied_bytes: Default value is half of disk size if traces storage is disk backed, 95% of available space if memory backed

Where CLI

To apply It is applied automatically



Configuring continuous trace

Set max continuous trace occupation in bytes

Product Guardian
Syntax conf.user configure continuous_trace max_bytes_per_trace
<size>
Description The maximum size in bytes for a continuous trace file.

Parameters • size: Default value 100000000

Where CLI

To apply It is applied automatically

Set max pcaps to retain

Product Guardian
Syntax conf.user configure continuous_trace max_pcaps_to_retain
<value>
Description The maximum number of PCAP files to keep on disk; when this number is
exceeded, the oldest traces will be deleted. This is a runtime machine setting
used for self protection; it prevails over the retention settings described in
the Configuring retention section

Parameters • value: Default value 100000

Where CLI

To apply It is applied automatically

Set minimum free disk percentage

Product Guardian
Syntax conf.user configure continuous_trace min_disk_free
<percent>
Description The minimum percentage of free disk space below which the oldest continuous
traces will be deleted

Parameters • percent: Default value 10, enter without % sign

Where CLI

To apply It is applied automatically
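For example, the commands below (values are illustrative) limit each continuous trace file to 200 MB, retain at most 50000 PCAP files and keep at least 15% of the disk free:

conf.user configure continuous_trace max_bytes_per_trace 200000000
conf.user configure continuous_trace max_pcaps_to_retain 50000
conf.user configure continuous_trace min_disk_free 15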

Set maximum occupied space

Product Guardian
Syntax conf.user configure retention continuous_trace
<occupied_space>
Description The maximum continuous traces occupation on disk in bytes

Parameters • occupied_space: Default value is half of disk size

Where CLI

To apply It is applied automatically



Configuring Time Machine


In this section we will configure the Nozomi Networks Solution Time Machine functionality.

Set snapshot interval

Products CMC, Guardian


Syntax conf.user configure tm snap interval <interval_seconds>
Description Set the desired interval between snapshots, in seconds.

Parameters • interval_seconds: The amount of seconds between snapshots (default: 3600, minimum: 3600)

Where CLI

To apply service n2osjobs stop

Enable or disable automatic snapshot for each alert

Product Guardian
Syntax conf.user configure tm snap on_alert [true|false]
Description It can enable (option true) or disable (option false) the possibility to take
a snapshot on alert. By default snapshots are taken only for VI alerts; it is
possible to explicitly set the alerts that will trigger automatic snapshots via
on_alert_trigger.

Where CLI

To apply service n2osjobs stop

Configure how alerts trigger automatic snapshots

Product Guardian
Syntax conf.user configure tm snap on_alert_trigger
<json_value>
Description Configures the alert triggers for automatic snapshots. The JSON object must
have the following attribute:
• type_ids - A JSON array of the alert type IDs that will trigger an
automatic snapshot. These type IDs may be literals or wildcarded ones
(the asterisk can be used to match any substring).
For example, the following command will configure the system to
automatically take a snapshot whenever a VI:NEW-NODE or VI:NEW-LINK
alert occurs:
conf.user configure tm snap on_alert_trigger
{"type_ids": ["VI:NEW-NODE", "VI:NEW-LINK"]}
As a second example, the command below will configure the system to take
a snapshot on all VI or SIGN alerts:
conf.user configure tm snap on_alert_trigger
{"type_ids": ["VI:*", "SIGN:*"]}

Parameters • json_value: A JSON object describing the alert automatic snapshot triggers (default: {"type_ids": ["VI:*"]})

Where CLI

To apply service n2osjobs stop

Set maximum number of network elements allowed in a diff

Product Guardian
Syntax conf.user configure tm diff max_results_network_elements
<num_elements>
Description When comparing time machine snapshots that are too different, it is possible
to overtax the system resources (memory, CPU). By setting a limit on the
number of network elements that are allowed to be reported in a diff, the
system is protected from such effects. When this threshold is crossed, the diff
job is aborted and an appropriate error message is shown to the user.

Parameters • num_elements: Maximum number of network elements that may be reported by a diff (default: 10000)

Where CLI

To apply It is applied automatically
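For example, to allow a diff to report up to 20000 network elements (an illustrative value), the following command could be used:

conf.user configure tm diff max_results_network_elements 20000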



Configuring retention
Retention of historical data is controlled for each persisted entity by a configuration entry. Modify it to
extend or reduce the default retention.
By default, the CMC retains 500,000 alerts. Note that retaining large numbers of alerts can impair
performance. We recommend limiting the number of alerts generated rather than retaining more data.
If you want to retain more alerts, we recommend an iterative approach of incrementally increasing
this value and evaluating the system's performance. In some cases, you may want to send alerts to a
different system using our data integration features instead of retaining the alerts in the sensor.

Alerts retention

Products CMC, Guardian


Syntax conf.user configure retention alert rows
<rows_to_retain>
Description Set the number of alerts to retain.

NOTE: When an alert is deleted, the related trace file is deleted too.

Parameters • rows_to_retain: The number of rows to keep (default: 500000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
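For example, to retain 250000 alerts (an illustrative value, half of the default), the following command could be used:

conf.user configure retention alert rows 250000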

Alerts advanced retention

Products CMC, Guardian


Syntax conf.user configure retention
alert.out_of_security_profile rows <rows_to_retain>
Description Set the number of alerts out of security profile to retain. By default, this
feature is disabled.
NOTE:
• This retention has a higher priority than retention alert rows
<rows_to_retain> and will be executed before it.
• When an alert is deleted, the related trace file is deleted too.

Parameters • rows_to_retain: The number of rows to keep (disabled by default)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Trace retention size

Product Guardian
Syntax conf.user configure retention trace_request
occupied_space <max_occupied_bytes>

Description The maximum traces occupation on disk in bytes. If the traces directory is
memory backed, this configuration cannot be overridden and the default
value will always be used.

Parameters • max_occupied_bytes: Default value is half of disk size if traces storage is disk backed, 95% of available space if memory backed

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Trace retention rows

Product Guardian
Syntax conf.user configure retention trace_request rows
<rows_to_retain>
Description Set the number of traces to retain.

Parameters • rows_to_retain: The number of rows to keep (default: 10000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Trace advanced retention

Products CMC, Guardian


Syntax conf.user configure retention
trace_request.<generation_cause> rows <rows_to_retain>
Description Set the number of traces retained according to their generation cause. By
default, these options are disabled.
NOTE: This retention has a higher priority than retention trace_request rows
<rows_to_retain> and will be executed before it. Moreover, these
advanced retention options depend on each other, thus they must be
configured all together or not at all.

Parameters • generation_cause: Can be any of:
• by_alerts_high: traces generated by high risk alerts
• by_alerts_medium: traces generated by medium risk alerts
• by_alerts_low: traces generated by low risk alerts
• by_user_request: traces generated by a request from the user
• rows_to_retain: The number of rows to keep

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

For example, we can configure the trace retention with the following command:

conf.user configure retention trace_request rows 10000



and also set up the advanced retention with:

conf.user configure retention trace_request rows 10000
conf.user configure retention trace_request.by_alerts_high rows 5000
conf.user configure retention trace_request.by_alerts_medium rows 1000
conf.user configure retention trace_request.by_alerts_low rows 1000
conf.user configure retention trace_request.by_user_request rows 3000

Continuous trace retention size

Product Guardian
Syntax conf.user configure retention continuous_trace
occupied_space <max_occupied_bytes>
Description Set max occupation in bytes for continuous traces

Parameters • max_occupied_bytes: the number of bytes to keep (default: half of disk size)

Where CLI

To apply In a shell console execute: service n2ostrace stop

Note You can also change this configuration from the Web UI.

Continuous trace retention rows

Product Guardian
Syntax conf.user configure retention continuous_trace rows
<rows_to_retain>
Description Set the number of continuous traces to retain

Parameters • rows_to_retain: the number of rows to keep (default: 10000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Link events retention

Product Guardian
Syntax conf.user configure retention link_event rows
<rows_to_retain>
Description Set the number of link events to retain

Parameters • rows_to_retain: The number of rows to keep (default: 2500000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
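For example, to retain one million link events (an illustrative value), the following command could be used:

conf.user configure retention link_event rows 1000000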

Captured urls retention

Product Guardian
Syntax conf.user configure retention captured_urls rows
<rows_to_retain>
Description Set the number of captured URLs (HTTP queries, DNS queries, etc.) to retain

Parameters • rows_to_retain: The number of rows to keep (default: 10000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Variable history retention

Product Guardian
Syntax conf.user configure retention variable_history rows
<rows_to_retain>
Description Set the number of historical variable values to retain

Parameters • rows_to_retain: The number of rows to keep (default: 1000000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Node CVE retention

Product Guardian
Syntax conf.user configure retention node_cve rows
<rows_to_retain>
Description Set the maximum number of node_cve entries to retain

Parameters • rows_to_retain: The number of rows to keep (default: 100000)

Where CLI

To apply In a shell console execute: service n2osva stop

Note You can also change this configuration from the Web UI.

Uploaded traces retention

Product Guardian
Syntax conf.user configure retention input_pcap rows
<files_to_retain>
Description Set the number of PCAP files to retain

Parameters • files_to_retain: The number of files to keep (default: 10)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

File quarantine retention

Product Guardian
Syntax conf.user configure retention quarantine number_of_files
<files_to_retain>
Description Set the number of quarantined files to retain. When a new file is added to a
sensor, Nozomi deletes the oldest quarantined file if the number of files
exceeds this limit and the sensor needs to free disk space.

Parameters • files_to_retain: The number of files to keep (default: 50)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Configuring Bandwidth Throttling


It is possible to limit the bandwidth that a sensor's management port has at its disposal (that is, for access
and updates) by specifying the maximum amount of allowed traffic.

Limit traffic shaping bandwidth

Product Guardian
Syntax conf.user configure system traffic_shaping bandwidth
<max_bandwidth>
Description Set the maximum outbound bandwidth that the sensor's management
interface can use. Inbound data is still unlimited.

Parameters • max_bandwidth: the bandwidth limit. The following units are supported:
b, kB, Mb, Gb. When no unit is specified, b is assumed by default (i.e. bits
per second). When setting a limit in decimal notation, make sure you include
the leading zero and the unit (e.g., write 0.015Mb, not .015Mb).
(default: no limitation).

Where CLI

To apply Update rules with n2os-firewall-update. On a fresh installation a reboot is necessary.

For example, we can set a limit of two megabits per second with the following configuration command:

conf.user configure system traffic_shaping bandwidth 2Mb

Note that this command affects only the sensor on which it is executed; its effects are not propagated
to other sensors.
It is possible to exclude from the limitation of the bandwidth specific IPs.

Exclude IP from traffic shaping

Product Guardian
Syntax conf.user configure system traffic_shaping exclude <ip>
Description Set the IP to exclude from the limitation.

Parameters • ip: the IP to exclude. It can be a single IP or a class of IPs (e.g.
192.168.12.34 or 192.168.0.0/16). The command can be repeated for as many
IPs as needed.

Where CLI

To apply Update rules with n2os-firewall-update. On a fresh installation a reboot is necessary.

For example, we can exclude an IP with the following configuration command:

conf.user configure system traffic_shaping exclude 192.168.12.34

Note that this command affects only the sensor on which it is executed; its effects are not propagated
to other sensors.

Configuring synchronization
In this section we will configure the synchronization between sensors at different levels.

Set the global synchronization interval (notification message)

Product CMC
Syntax conf.user configure cmc sync interval <interval_seconds>
Description Set the desired global synchronization interval for the in-scope sensor.
Configuration is defined on the parent sensor; synchronization starts at child
sensors and flows upstream.
Each and every sync takes place following a notification message sent by
the child sensor, stating that the child sensor is ready to synchronize data to
its parent. The notification messages act as global synchronization settings,
working together with the following settings as well.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), the setting must be applied at each parent level (e.g., at the root
CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between status notifications (default: 60)

Where CLI

To apply It is applied automatically
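For example, to send status notifications every two minutes (an illustrative value), the following command could be run on the parent sensor:

conf.user configure cmc sync interval 120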

Set the DB synchronization interval

Products CMC, Guardian


Syntax conf.user configure cmc sync_db_interval
<interval_seconds>
Description Set the desired interval between DB synchronizations for the in-scope
sensor. Configuration is done on the parent sensor; synchronization starts at
child sensors and flows upstream. The setting applies to each DB element
subject to synchronization (e.g., Alerts, Assets, Audit logs, and Health
logs). As the interval expires, the DB entries are synchronized at the next
notification message.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between DB synchronizations (default: 60). This parameter only makes sense when set higher than the global synchronization interval.

Where CLI

To apply It is applied automatically
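For example, to synchronize DB entries every five minutes (an illustrative value higher than the global synchronization interval), the following command could be used:

conf.user configure cmc sync_db_interval 300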

Set the filesystem synchronization interval

Product CMC
Syntax conf.user configure cmc sync_fs_interval
<interval_seconds>

Description Set the desired interval between filesystem synchronizations for the sensor
in scope, from its child sensors. The setting applies to each filesystem
element subject to synchronization (e.g., nodes, links, and variables). As
the interval expires, the filesystem entries are synchronized at the next
notification message. In case the CMC is All-In-One, this interval will be
used as the default value for the cmc merge interval.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between filesystem synchronizations (default: 10800 [3 hours] if the CMC is multi-context, 60 if the CMC is All-In-One). This parameter only makes sense when set higher than the global synchronization interval.

Where CLI

To apply It is applied automatically

Set the binary files synchronization interval

Product CMC
Syntax conf.user configure cmc sync_binary_files_interval
<interval_seconds>
Description Set the desired interval between binary files synchronizations for the sensor
in scope, from its child sensors. The setting applies to each binary file
element subject to synchronization (e.g., PDF reports). As the interval
expires, the binary file entries are synchronized at the next notification
message.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between binary files synchronizations (default: 60). This parameter only makes sense when set higher than the global synchronization interval.

Where CLI

To apply It is applied automatically

Set the rows to be sent at every DB synchronization for each DB element

Products CMC, Guardian


Syntax conf.user configure cmc sync record_per_loop
<number_of_record_per_loop>
Description The system allows the user to customize the synchronization, in particular
the number of records to be sent at each phase. A synchronization
phase is composed of 50 steps for each DB element, each one sending
number_of_record_per_loop rows, which means that, by default, the system
sends 2500 rows every time.

Parameters • number_of_record_per_loop: the number of DB rows sent per single request (default: 50)

Where CLI

To apply It is applied automatically
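For example, to send 100 rows per request (an illustrative value, resulting in 5000 rows per synchronization phase), the following command could be used:

conf.user configure cmc sync record_per_loop 100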

Synchronize only visible alerts

Products CMC, Guardian


Syntax conf.user configure cmc sync send_only_visible_alert
[true|false]
Description Set whether to synchronize all alerts from the child sensors to the in-scope
parent sensor (false), or to synchronize only visible alerts (as defined in the
Security Profile) (true). Default: false.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Where CLI

To apply It is applied automatically

Set the alert rules execution policy

Product CMC
Syntax conf.user configure alerts execution_policy alert_rules
[upstream_only|upstream_prevails|local_prevails]
Description Set the desired execution policy for the alert rules.

Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set the configurations merge interval

Product CMC
Syntax conf.user configure cmc merge interval
<interval_seconds>
Description Periodically, a CMC All-In-One merges all the filesystem elements received from
the connected Guardians. This operation strictly depends on the filesystem
synchronization. It is possible to define a custom interval; however, it is
suggested to specify a value similar to the one set for the filesystem
synchronization.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between two merging


actions (default: it follows the value of the filesystem synchronization
interval).

Where CLI

To apply It is applied automatically

Enable the PostgreSQL advisory locks for assets synchronization

Product CMC
Syntax conf.user configure cmc save_assets_with_advisory_lock
[true|false]
Description Enable this feature to avoid potential database deadlocks on assets. This
option shall be applied on mid-level CMCs if the bulk asset synchronization is
not enabled.
Note: This option applies only on the CMC it is configured on. It is enabled
by default.

Where CLI

To apply It is applied automatically



Configuring slow updates


In this section we show how to configure a sensor to receive firmware updates from an upstream
sensor at a controlled speed. This option is suitable for scenarios where limited bandwidth is
available and a normal firmware update procedure would result in a timeout. For example, one may
want to configure a remote collector constrained by a 50Kbps bandwidth to receive the updates at
24Kbps. This configuration prevents the saturation of the communication channel and thus allows
data traffic to be sent while receiving an update.

Enable or disable slow update

Products CMC, Guardian, Remote Collector


Syntax software_update_slow_mode [true|false]
Description This is a global switch that enables (true) or disables (false) the feature.
When the feature is disabled all the other switches are ignored.

Where CLI

To apply It is applied automatically

Set the transfer chunk size

Products CMC, Guardian, Remote Collector


Syntax software_update_slow_mode_chunk_size <size_in_bytes>
Description The update bundle is split into multiple fixed-sized chunks. Chunks are
individually transmitted, verified and reassembled. In case of a failed
delivery, only invalid chunks are retransmitted. Essentially, big chunks are
more suitable for high-speed networks, while smaller chunks are to be
preferred for more effective bandwidth limitation.

Parameters • size_in_bytes: the chunk size in bytes. Values are normalized to stay
in the range [128, 10485760]. Default value is 4096.

Where CLI

To apply It is applied automatically

Set the transfer speed

Products CMC, Guardian, Remote Collector


Syntax software_update_slow_mode_max_speed <speed_in_bps>
Description Sets the maximum allowed speed for update transfer.

Parameters • speed_in_bps: The maximum allowed speed in bytes per second. Values lower than 1024 are normalized to 1024. The default value is 4096. Notice that small chunks add some slight overhead, so generally the transfer speed will remain consistently below the declared limit.

Where CLI

To apply It is applied automatically
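For example, the following commands (with illustrative values matching the 24Kbps scenario described above) enable slow updates with a small chunk size and a transfer rate of about 3 KB per second:

software_update_slow_mode true
software_update_slow_mode_chunk_size 1024
software_update_slow_mode_max_speed 3072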



Configuring session hijacking protection


The web management interface protects itself from session hijacking attacks by binding web sessions
to IP addresses and browser configurations. When it detects differences in these parameters, it
automatically destroys the session and records the error in the audit log. This feature is enabled by
default and can be disabled using this configuration:

Disable session hijacking protection

Products CMC, Guardian


Syntax conf.user configure ui session protection [true|false]
Description Enable (option true, default behavior) or disable (option false) session
hijacking protection.

Where CLI

To apply It is applied automatically
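For example, the following command disables the protection (shown for illustration only; the feature is enabled by default):

conf.user configure ui session protection false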

When closing such sessions, the web management interface records the following error text in the audit log

Session hijacking detected, closing session

and the details of the affected session.



Configuring Passwords
This topic describes N2OS password parameters and their default values.
Modifying the default values requires that you change the configuration of the sensor, as discussed
in Chapter 3 (Users).

Set maximum attempts

Product Guardian
Syntax conf.user configure password_policy maximum_attempts
<attempts>
Description Set the maximum number of password attempts allowed before the user
account is locked.

Parameters • attempts: The maximum number of attempts allowed (default: 3)

Where CLI

To apply It is applied automatically

Set lock time

Product Guardian
Syntax conf.user configure password_policy lock_time <minutes>
Description Set the number of minutes for which a user account remains locked after the
maximum number of failed login attempts has been reached.

Parameters • minutes: The number of minutes for which a user account is locked out
after having failed to login for the maximum attempts (default: 5)

Where CLI

To apply It is applied automatically
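For example, to keep an account locked for 15 minutes after too many failed attempts (an illustrative value), the following command could be used:

conf.user configure password_policy lock_time 15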

Set history

Product Guardian
Syntax conf.user configure password_policy history <number>
Description Set the number of historical passwords that are required to be unique
(default: 3).

Parameters • number: The number of unique passwords to be used

Where CLI

To apply It is applied automatically

Set password digits

Product Guardian
Syntax conf.user configure password_policy digit <number>

Description Sets the minimum number of digits that are required to be contained in a
password (default: 1).

Parameters • number: The number of digits required

Where CLI

To apply It is applied automatically

Set password lower

Product Guardian
Syntax conf.user configure password_policy lower <number>
Description Sets the minimum number of lowercase characters that are required to be
contained in a password (default: 1).

Parameters • number: The number of lowercase characters required.

Where CLI

To apply It is applied automatically

Set password upper

Product Guardian
Syntax conf.user configure password_policy upper <number>
Description Sets the minimum number of uppercase characters that are required to be
contained in a password (default: 1).

Parameters • number: The number of uppercase characters required

Where CLI

To apply It is applied automatically

Set password symbol

Product Guardian
Syntax conf.user configure password_policy symbol <number>
Description Sets the minimum number of symbol characters that are required to be
contained in a password (default: 0).

Parameters • number: The number of symbol characters required

Where CLI

To apply It is applied automatically

Set password min length

Product Guardian
Syntax conf.user configure password_policy min_password_length
<number>

Description Sets the minimum length required for a password (default: 12).

Parameters • number: The minimum length

Where CLI

To apply It is applied automatically
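For example, to require passwords of at least 14 characters (an illustrative value), the following command could be used:

conf.user configure password_policy min_password_length 14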

Set password max length

Product Guardian
Syntax conf.user configure password_policy max_password_length
<number>
Description Sets the maximum length allowed for a password (default: 128).

Parameters • number: The maximum length

Where CLI

To apply It is applied automatically

Enable expiration for inactive users

Product Guardian
Syntax conf.user configure password_policy
inactive_user_expire_enable [true|false]
Description Enable or disable the expiration for inactive users (default: false).

Where CLI

To apply It is applied automatically

Set inactive user lifetime

Product Guardian
Syntax conf.user configure password_policy
inactive_user_lifetime <number>
Description Sets the number of inactive days after which a user is disabled
(default: 60).

Parameters • number: The number of days

Where CLI

To apply It is applied automatically

Enable expiration for admin users

Product Guardian
Syntax conf.user configure password_policy admin_can_expire
[true|false]
Description Enable or disable the expiration for inactive admin users (default: false).

Where CLI

To apply It is applied automatically

Enable password expiration

Product Guardian
Syntax conf.user configure password_policy
password_expire_enable [true|false]
Description Enable or disable the expiration for passwords (default: false).

Where CLI

To apply It is applied automatically

Set password lifetime

Product Guardian
Syntax conf.user configure password_policy password_lifetime
<number>
Description Sets the required number of days after which a password change is
enforced (default: 90).

Parameters • number: The number of days

Where CLI

To apply It is applied automatically



Configuring sandbox

Sandbox archive processing


When a sandbox file contains an archive, the archive is uncompressed and each extracted file is
processed to check for the presence of malware using the Yara rules and STIX indicators. The process
is repeated recursively for each extracted file to eventually check for malware in nested archives. The
'archive' configuration commands listed below control how this unpacking and checking
process is performed. Since the unpacking and checking can consume significant resources, the tuning
of these parameters can be important when a large number of potentially large files is present in the
sandbox.

Set the number of workers

Product Guardian
Syntax conf.user configure sandbox dispatcher number_of_workers
<value>
Description The service will start one dispatcher application and as many worker
applications as requested. By default, Sandbox starts in standalone mode
instead.

Parameters • value: Maximum value of 8

Where CLI

To apply It is applied automatically
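For example, to start four worker applications (an illustrative value within the maximum of 8), the following command could be used:

conf.user configure sandbox dispatcher number_of_workers 4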

Set the size of the sandbox tmpfs partition

Product Guardian
Syntax conf.user configure sandbox tmpfs sandbox <value>
Description The tmpfs in-memory partition should be big enough to host two times the
maximum number of retained files.

Parameters • value: Size in MB of the partition. Defaults to 400MB.

Where CLI

To apply It is applied upon the reboot of the sensor

Set the size of the sandbox tmpfs temporary partition

Product Guardian
Syntax conf.user configure sandbox tmpfs tmp_sandbox <value>
Description For high throughputs, the tmpfs in-memory partition should be big enough
to host all the files that can fit in the sandbox tmpfs partition described in the
previous command.

Parameters • value: Size in MB of the partition. Defaults to 100MB.

Where CLI

To apply It is applied upon the reboot of the sensor



Set the size of the sandbox tmpfs pipes partition

Product Guardian
Syntax conf.user configure sandbox tmpfs pipes <value>
Description 10MB should be allocated for every worker that has been configured.

Parameters • value: Size in MB of the partition. Defaults to 10MB.

Where CLI

To apply It is applied upon the reboot of the sensor

Configure how Threat Intelligence contents are handled

Product Guardian
Syntax conf.user configure sandbox contents <json_value>
Description This command allows Threat Intelligence contents to be either completely
disabled or selectively loaded. The JSON object can have the following
attributes:
• load_contents - this can be true/false to enable/disable the loading of
contents;
• loaded_content_types - this is a JSON array of contents to be
loaded.
Contents available are:
• stix_indicators
• yara_rules
As an example, the following command will completely disable content
loading:
conf.user configure sandbox contents { "load_contents":
false }
As a further example, the following command will allow only Yara rules to be
loaded:
conf.user configure sandbox contents
{ "loaded_content_types": [ "yara_rules" ] }

Parameters • json_value: A JSON object to configure how contents are loaded by Guardian

Where CLI

To apply It is applied automatically

Setting sandbox strategies for vulnerability assessment

Product Guardian
Syntax conf.user configure sandbox strategies <json_value>
Description The schema for the configuration options is:

{"enabled_strategies": ["yara_rules",
"stix_indicators"]}

Parameters • json_value: Define which strategies will be used to analyse files



Where CLI

To apply It is applied automatically

Set the minimum percentage of free disk

Product Guardian
Syntax conf.user configure sandbox min_disk_free <value>
Description Minimum free disk percentage that should be observed in the tmpfs
/var/sandbox folder, where n2os_ids will write the captured files.

Parameters • value: Default value 10

Where CLI

To apply It is applied automatically

Set the maximum number of files to retain

Product Guardian
Syntax conf.user configure sandbox max_files_to_retain <value>
Description Maximum number of files to retain in the tmpfs /var/sandbox folder, where
n2os_ids will write the captured files. The actual number may be up to two
times higher under heavy loading in order to improve the performance of the
n2os_sandbox process.

Parameters • value: Default value 250

Where CLI

To apply It is applied automatically

Set the size of the asynchronous processing queues

Product Guardian
Syntax conf.user configure sandbox queues queue_length <value>
Description Length of the asynchronous queues that process and analyse the captured
files. This number is applied to the standalone, dispatcher and worker
applications. This figure should be increased to the number of files which
are expected to be handled by the application every 250ms. After applying
this setting, the size of the tmpfs folder partitions also needs to
be carefully tuned as described in the corresponding commands.

Parameters • value: Default value 200

Where CLI

To apply It is applied automatically

Set the interval for stale file analysis

Product Guardian

Syntax conf.user configure sandbox dispatcher files_timeout <value>
Description Stale files which have not been analysed and cleaned by the worker
applications under severe loading are garbage collected by default every
hour. This is an exceptional situation which should not even occur under
heavy loading, but this setting is provided as an additional protection
mechanism.

Parameters • value: Default value 3600 seconds

Where CLI

To apply It is applied automatically

Set the maximum number of extracted files

Product Guardian
Syntax conf.user configure archive max_number_of_files <value>
Description Defines the maximum number of files that can be extracted and processed
from an archive file. If the maximum number of extracted files is reached
for a file, then additional files nested inside it are neither expanded nor
processed.

Parameters • value: Default value 100

Where CLI

To apply It is applied automatically
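For example, to limit extraction to 50 files per archive (an illustrative value), the following command could be used:

conf.user configure archive max_number_of_files 50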

Set the maximum level of nested extracted files

Product Guardian
Syntax conf.user configure archive max_levels <value>
Description Defines the maximum level of nested archives that are extracted and
processed from an archive file. Files nested in archives at a level deeper
than the specified one are neither extracted nor processed.

Parameters • value: Default value 3

Where CLI

To apply It is applied automatically

Set the maximum size of a single extracted file

Product Guardian
Syntax conf.user configure archive max_single_size
<size_in_bytes>
Description Defines the maximum size in bytes of a file that can be extracted and
processed from a compressed archive file. If a file is larger than the limit it is
neither extracted nor processed.

Parameters • size_in_bytes: Default value 200000000 (200M)



Where CLI

To apply It is applied automatically

Set the maximum overall size of extracted files

Product Guardian
Syntax conf.user configure archive max_overall_size
<size_in_bytes>
Description Defines the overall maximum size of the files that can be extracted from a
single compressed archive file. If the overall size of the files extracted from a
file exceeds this limit, the remaining files in the archive are neither extracted
nor processed.

Parameters • size_in_bytes: Default value 400000000 (400M)

Where CLI

To apply It is applied automatically

Set the maximum time to be spent for each file extraction

Product Guardian
Syntax conf.user configure archive max_wait_msec
<time_in_millisecs>
Description Defines the maximum time to be spent during a file extraction from a
compressed archive file. If the spent time exceeds the limit, the extraction is
aborted.

Parameters • time_in_millisecs: Default value 10000 millisecs

Where CLI

To apply It is applied automatically

Enable or disable the adaptive algorithm for file unzipping

Product Guardian
Syntax conf.user configure archive auto_switch_off <flag>
Description Unzipping is a very expensive process which is automatically disabled when
Sandbox is under heavy loading. Instead of discarding files, Sandbox will
disable unzipping for some files and process only the unzipped file with
STIX and Yara indicators.

Parameters • flag: Default to true. false to disable the feature

Where CLI

To apply It is applied automatically

Configuring handling of sandboxed zipped files

Product Guardian

Syntax conf.user configure sandbox unzipping <json_value>


Description The JSON object can have the following attribute:
• modes - array of unzipping modes which should be enabled. By default all of them are
enabled and are executed in the described order. Possible values are:
fast, for fast unzipping; macro, for macro extraction and analysis;
upx, for upx decompression; full, for extensive and advanced archive
decompression. An empty array can be used to completely disable the
unzipping functionalities of Sandbox.
For example:
conf.user configure sandbox unzipping {"modes":
["macro", "upx", "full"]}

Parameters • json_value: A JSON object to configure how zipped files are handled by Guardian

Where CLI

To apply It is applied automatically

Configuring high throughput protection

Product Guardian
Syntax conf.user configure sandbox extraction <json_value>
Description Note that only advertised file extensions are considered: if an attacker hides
a malicious executable behind a JPG file extension, there is no way for
Sandbox to understand that the file is an executable without performing an
in-depth analysis on the file itself. For this reason, we highly discourage the
use of the file extension attributes in the JSON below. The protocol attributes
are instead encouraged, when even the auto switch off adaptive algorithm
cannot provide sufficient protection against high throughputs. The JSON
object can have the following attributes:
• enabled_protocols - only files extracted from these protocols will be analysed.
• disabled_protocols - files extracted from these protocols will be excluded from the analysis.
• enabled_file_extensions - only files extracted with these advertised extensions will be analysed.
• disabled_file_extensions - files extracted with these advertised extensions will be excluded from the analysis.
For example:
conf.user configure sandbox extraction
{"enabled_protocols": ["http"]}

Parameters • json_value: A JSON object to configure which files are not analysed by Sandbox

Where CLI

To apply It is applied automatically


Additional Commands
For completeness, this section contains commands that do not clearly fit into other subsections.

Configure SSH key update interval

Product Guardian
Syntax conf.user configure ssh_key_update interval <seconds>
Description See Adding SSH keys to admin users on page 36

Parameters • seconds: Number of seconds between propagations

Where CLI

To apply It is applied automatically
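For example, to propagate SSH keys every ten minutes (an illustrative value), the following command could be used:

conf.user configure ssh_key_update interval 600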

Configure paranoid mode for User login authentication

Product Guardian
Syntax conf.user configure authentication paranoid_mode [true|
false]
Description Paranoid mode in authentication is enabled by default. It is used to control
the disclosure of information about the existence of a user during the login
authentication process and to normalize the login response time. When this
setting is disabled, after several failed login attempts, the user is warned
with a message about the remaining attempts before the account gets
locked. As a consequence, user information can be leaked. However,
when this setting is enabled, there is no warning message and thus no
potential for user information leakage.

Where CLI

To apply In a shell console execute: service webserver stop

Configure identity provider

Products CMC, Guardian


Syntax conf.user configure cmc identity_provider_url <ip>
Description Expose identity provider endpoint.

Parameters • ip: The address of the identity provider

Where CLI

To apply It is applied automatically


Chapter 16
FIPS configuration

Topics:
• Compliant FIPS cryptography features
• Important FIPS notes
• Enabling FIPS mode
• Disabling FIPS mode
• Checking FIPS mode
• Auditing FIPS operations
• FIPS enabled protocols

This chapter provides information on configuring N2OS to use the FIPS-140-2 approved cryptography module.
Federal Information Processing Standards (FIPS) are publicly announced standards developed by the National Institute of Standards and Technology for use in computer systems by non-military American government agencies and government contractors.
The FIPS 140 series specifies requirements for cryptography modules within a security system protecting sensitive but unclassified data.
Note: To enable FIPS mode, you must install a FIPS-enabled license. To obtain a license, refer to your Nozomi Networks representative for more information.

Compliant FIPS cryptography features


We describe the compliant and non-compliant FIPS cryptography features in this section.
Ensure that the system is configured in FIPS mode and uses only FIPS-compliant features to achieve
full compliance.
After enabling FIPS mode, the following features will use compliant cryptography:
• HTTPS Web interface
• SSH remote access
• RC and CMC data flows
• Local users password encryption
• Configuration secrets stored in the local configuration file

Non-compliant FIPS cryptography features


The N2OS solution does not prevent you from using features that are not FIPS compliant. Ensure
that the system is configured in FIPS mode and uses only FIPS-compliant features to achieve full
compliance.
For example, these features cannot be FIPS compliant (not an exhaustive list):
• SMB remote backup transfer
• Unencrypted Syslog forwarding
• SNMP with users configured with MD5 or DES protocols
• Any crypto usage outside the security boundary of the FIPS library

Important FIPS notes


• Product version compatibility: To enable FIPS mode, you must be running version N2OS 22.2.1
or later.
• Product inter-operability: FIPS products can only be connected to other FIPS products. The use
of mixed environments is not allowed.
• FIPS license: To use the FIPS addon, you'll need an additional license. Contact your sales
representative for additional information.
• Enabling FIPS mode:
• If you're running an N2OS version between 22.2.1 and 23.1.0, both Guardians and CMCs
require a valid FIPS license.
• Beginning with version 23.1.0 or later, FIPS mode can be enabled on Guardians without a
license, but packet sniffing will be disabled until a valid license is activated.
• The order of enabling FIPS on either device does not affect functionality.
• FIPS licenses are required for CMCs and Remote Collectors (RCs) and managed by upstream
sensors.

Enabling FIPS mode


Use the n2os-fips-enable command to enable FIPS mode.
Important:
• Invalid local Web UI user passwords: When switching to FIPS mode, local user Web UI
passwords become invalid. Use the n2os-passwd command to reset the passwords to take
advantage of FIPS encryption.
• N2OS-passwd action delay on execution: The n2os-passwd <USER> command takes several
seconds to several minutes to prompt the user for a new password. On the R50 platform, the
prompt may take up to 3 minutes. Changing the password for every user is a critical step when
enabling FIPS.

• CMCs: For CMCs beginning with version 23.1.0 or later, the following step 2 and step 3 are
unnecessary.
• Guardians: For Guardians beginning with version 23.1.0 or later, the following step 2 and step 3
can also be performed after enabling FIPS.
Perform these steps to enable FIPS mode:
1. Log in to the console, and enter privileged mode with the command:

enable-me

2. Type the following command to configure the FIPS license:

echo 'conf.user configure license fips xxxxxxxx' | cli

3. Restart the IDS with the command:

service n2osids stop

The IDS will stop and start automatically in a few seconds.


4. Enable FIPS mode with the command:

n2os-fips-enable

The system automatically reboots.


5. In the container edition, stop the current container and start a new one with the same settings.
6. Follow the Important Notes at the beginning of this topic to change the password for every user by
typing:

n2os-passwd <USER>

7. Log in to the sensor and verify that the FIPS license status has changed to OK. From the Web UI,
go to: Administration > System > Updates & Licenses.

8. Repeat this procedure for additional CMC(s) or Guardian(s).

Disabling FIPS mode


The n2os-fips-disable command can be used to disable FIPS mode.
Important: When switching to FIPS mode, local user Web UI passwords become invalid. Use the
n2os-passwd command to reset them.
1. Log in to the console via the serial console, and enter privileged mode with the command:

enable-me

2. Disable FIPS mode:


a) This operation requires a system reboot or a container restart to be applied.
b) Enter the following command:

n2os-fips-disable

Checking FIPS mode


When running in FIPS mode, the sensor adds a FIPS indicator after the version string.
The Web UI displays FIPS after the version string on the top status bar.
To check FIPS status:
• View the top status bar to see if the Web UI displays FIPS after the version string.

• Use the n2os-version command to display the version string, which includes FIPS when FIPS mode is enabled.

• Use the n2os-fips-status command to check FIPS status.

• Use the n2os-fips-status [-h | -q] command for extended usage with additional parameters.

Auditing FIPS operations


The FIPS audit trail tracks specific FIPS events.
FIPS status changes are logged into the audit trail using the following events:
• FIPS mode enabled
• FIPS mode disabled
When booting, the sensors report FIPS mode operation using the message System running in
FIPS mode.

FIPS enabled protocols


FIPS supports the SSH and HTTPS protocols identified in this section.

Supported SSH protocols in FIPS mode


FIPS supports these SSH protocols:

Function                Algorithms

Key exchange            diffie-hellman-group14-sha256
                        diffie-hellman-group16-sha512
                        diffie-hellman-group18-sha512

Ciphers                 aes128-gcm@openssh.com
                        aes256-gcm@openssh.com
                        aes128-ctr
                        aes192-ctr
                        aes256-ctr

MACs                    hmac-sha2-256-etm@openssh.com
                        hmac-sha2-512-etm@openssh.com

Host Key Algorithms     ssh-rsa

Supported HTTPS protocols in FIPS mode


FIPS supports these HTTPS protocols:

TLS version    Cipher Suite Name (IANA/RFC)

TLS 1.2        TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
               TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
               TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
               TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

TLS 1.3        TLS_AES_256_GCM_SHA384
               TLS_AES_128_GCM_SHA256
Chapter 17
Compatibility reference

Topics:
• SSH compatibility
• HTTPS compatibility

This chapter provides compatibility information about Nozomi Networks products.

SSH compatibility

Supported SSH protocols (since 19.0.4)

Function                Algorithms

Key exchange            curve25519-sha256@libssh.org
                        diffie-hellman-group-exchange-sha256
                        diffie-hellman-group14-sha256
                        diffie-hellman-group16-sha512
                        diffie-hellman-group18-sha512

Ciphers                 chacha20-poly1305@openssh.com
                        aes128-gcm@openssh.com
                        aes256-gcm@openssh.com
                        aes128-ctr
                        aes192-ctr
                        aes256-ctr

MACs                    hmac-sha2-256
                        hmac-sha2-512
                        umac-128-etm@openssh.com
                        hmac-sha2-256-etm@openssh.com
                        hmac-sha2-512-etm@openssh.com
                        hmac-sha2-512@openssh.com

Host Key Algorithms     ssh-rsa
                        ssh-ed25519
                        ecdsa-sha2-nistp384
                        ecdsa-sha2-nistp521

HTTPS compatibility

Supported HTTPS protocols (since 21.9.0)

TLS version    Cipher Suite Name (IANA/RFC)

TLS 1.2        TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
               TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
               TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
               TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
               TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

TLS 1.3        TLS_AES_256_GCM_SHA384
               TLS_CHACHA20_POLY1305_SHA256
               TLS_AES_128_GCM_SHA256

Supported RC/Guardian data channel protocols (since 21.9.0)

TLS version    Cipher Suite Name (IANA/RFC)

TLS 1.2        ECDHE-RSA-AES128-GCM-SHA256
               ECDHE-RSA-AES256-GCM-SHA384
               DHE-RSA-AES128-GCM-SHA256
               DHE-RSA-AES256-GCM-SHA384
Appendix A
Reference table of icons

Topics:
• Icon reference table

This topic details information about the icons found in the Nozomi Networks solution, displaying universal icons, then those specific to a particular function.

Icon reference table


Icon Description
Universal
Navigate to

Show requested traces

Configure

Delete

Navigate to page

Refresh

Edit dashboard

Export
and
Legend

Forward

Save as pdf

Go to end

Edit

Update credentials

Add to

Confirm/save

Dashboard
Dashboard - increase and decrease

Dashboard - filter
Dashboard - recommended filters that affect speed

Dashboard - add row

Dashboard - History

Dashboard - Edit

Dashboard - Discard

Dashboard - Clone

Dashboard - Export

Dashboard - Save

Alerts
Alert - toggle between standard and expert mode

Alert - ack/unack

Alert - edit note

Alert - clone

Alert - close

Alert - download trace

Alert - time machine diff not available

Sensors
Sensor - not allowed

Sensor - allowed

Sensor - clear

Sensor - delete
Sensor - focus

Sensor - force update

Sensor - lock

Sensor - place

Sensor - remote connect

Navigation Bar
Navigation bar - collapse bar

Navigation bar - monitor

Network
Network - configure

Network - show alerts

Network - manage learning

Network - request a trace

Network - show requested traces

Links - captured URLs


Links - events
Nodes - add to Smart Polling plan

Process
Process - add to favorites

Process - configure variables

Process - variable details

Chart

Schedule backup
Schedule backup file list action - delete
Schedule backup file list action - download

Schedule backup file list action - restore

Traces
Trace request

Continuous trace - destroy

Continuous trace - download

Continuous trace - list parts

Continuous trace - start

Continuous trace - stop



Glossary
adaptive learning
Adaptive learning is an anomaly detection method where deviations are evaluated at a global level
rather than at a single node level.
alert
An alert represents an event of interest in the observed system. There are various kinds of alerts. For
example, they can derive from anomaly-based learning, assertions, or protocol validation.
alert rules
Alert rules allow governing actions with regard to alerts. Rules are typically created to suppress alerts
for which the user knows the alert behavior, and understands that no further action is needed. Alerts
can be muted permanently (i.e., they never enter the database) or temporarily (until a date specified by
the user). Other actions include: changing security profile visibility, changing risk, and changing trace
filter.
alerts dictionary
The alerts dictionary is a complete list of alert types.
allowed sensor
An allowed sensor is a downstream connected sensor that synchronizes data to and can receive
propagated data from upstream. Upstream means a CMC or Vantage for a Guardian or a Guardian for
a Remote Collector.
sensor ID
A sensor ID is an alphanumeric string, a universally unique identifier (UUID), that uniquely identifies
the sensor inside the Nozomi Network infrastructure. The sensor ID is used to complete the upstream
connection configuration, as well as the connection to the CMC HA, if enabled. Note: Sensor ID differs
from machine ID. When restoring a backup on different hardware, the sensor ID remains the same,
while the machine ID changes.
assertion
A valid assertion is a normal query with a special command appended to the end. Assertions can be
saved to have them continuously executed in the system.
asset
An asset in the environment represents a physical device within a privately-monitored domain network.
Assets range from a single node to multiple nodes. Public nodes that are not part of the owner's private
network domain are not considered assets.
Asset Intelligence™
Asset Intelligence is a continuously expanding database of modeling asset behavior used by N2OS to
enrich asset information, and improve overall visibility, asset management, and security, independent
of monitored network data.
backup archive
A backup archive is a copy of historical data that can be used to restore original data if needed, and
may also be kept for long-term retention reasons, such as compliance.
bundle
A bundle (update) is an archive containing all files needed to update the Nozomi Networks Operating
System (N2OS) version. The bundle is propagated through the entire sensor hierarchy and is used by
CMC or Guardian to update the controlled sensors.
CMC All-in-one
CMC All-in-one indicates that data gathered from sensors connected to the CMC are collected and
merged.

CMC multi-context
CMC multi-context indicates that data gathered from sensors connected to the CMC are collected
and kept separately. The CMC Multi-context setting is the default, recommended setting for most
environments. Multi-context mode allows administrators to collect information from non-cohesive
environments.
Command Line Interface (CLI)
The Nozomi Networks solution uses the CLI as a tool to change configuration parameters, or when
performing troubleshooting activities.
Common Vulnerability Scoring System (CVSS)
The CVSS is a standard value between 0 and 10 that measures the severity of vulnerabilities, and is
commonly known as the CVE score.
Central Management Console™ (CMC)
CMC is a centralized monitoring variant of the standalone Nozomi sensor. It supports complex
deployments that cannot be addressed with a single sensor. The central design principle behind the
CMC is the unified experience that allows access to information in the same manner as the Nozomi
sensor.
content pack
A content pack is a collection of saved Web UI configurations that can be imported into Guardian or
CMC. They can be used to share commonly configured functions like queries and reports.
dashboard
The Nozomi Networks solution offers multiple dashboards to show network status, graphically and
in table format. The solution has several default built-in dashboard templates, but dashboards are
configurable to show current status, history, a snapshot in time, and other configurations that are
available online and viewed in reports.
data integration
The Nozomi Networks solution allows users to configure endpoints to receive data when integrated
with third party systems, such as custom JSON, custom CSV, DNS reverse lookup, as well as with
third-party vendor-specific tools/products.
environment
The environment is the real-time representation of the network monitored by Guardian, which provides
a synthetic view of all assets, network nodes, and communication between them.
features control panel
The features control panel shows the current status of the system's features configuration. From the
features control panel, users can change specific values, such as retention periods.
firewall integration
Guardian integrates with firewall software programs and hardware devices that analyze incoming
and outgoing network traffic and, based on predetermined rules, create a barrier to block viruses and
attackers. If any incoming information is flagged by filters, it’s blocked. Guardian is comprehensively
integrated with a number of third party firewalls. Configuration of firewall integration requires
administrative permission.
Guardian™
Guardian is the Nozomi Networks Operating System (N2OS) sensor that detects cyber threats through
passive network analysis. Guardian detects OT, IoT/IIoT, ICS, IT, edge, and cloud assets on a network,
using asset discovery, network visualization, vulnerability assessment, risk monitoring and threat
detection. Guardian can share this data with Vantage and the CMC.
hotfix (Smart Polling)
Smart polling can discover the patch level of polled nodes. The Hotfix tab shows which hotfixes have
been installed or may be missing from the node.

incident
An incident is a summarized view of alerts. When multiple alerts describe different aspects of the same
situation, the Nozomi Networks Operating System's (N2OS's) correlation engine groups the alerts
together to provide a more comprehensive view of the environment.
learned/unlearned network objects
The Nozomi Networks solution discovers, identifies, and learns the behavior of objects (nodes and
links) on a network. To learn network objects, users can choose an anomaly-based detection method
that is either adaptive learning or strict learning. Through integration with the firewall, unlearned nodes
and links are automatically blocked through block policies. Block policies are not created for nodes and
links in the learned state.
link
A link in the environment represents communication between two nodes using a specific protocol. It is
a directional one-to-one association with a single protocol (i.e., source, destination, protocol).
link events
Link events are activities that can occur on a link, such as being available or not.
machine ID
Machine ID is an alphanumeric string, a universally unique identifier (UUID), that uniquely identifies the
hardware sensor or the virtual machine. It is mainly used for licensing.
multi-homed asset
A multi-homed asset is an asset with multiple IP addresses and TYPE == computer.
muting
Muting alerts prevents them from entering the database. Alerts can be muted permanently so that they
never enter the database, or they can be muted temporarily until a specified date.
network graph
The network graph page gives a visual overview of the network. In the graph every vertex can
represent a single network node or an ensemble of nodes, while every edge represents one or multiple
links between nodes or nodes ensembles.
node
A node in the environment represents a logical endpoint in the network communication.
node point
Node points are data points extracted from monitored nodes over time via Smart Polling.
nodeid
A node ID is the unique name by which the system identifies a node in a network.
Nozomi Networks Operating System (N2OS) solution
The N2OS solution is a suite of products that provide a synthetic view of all assets and network nodes
with communication between them. The Nozomi Networks solution includes Guardian, Vantage, Threat
Intelligence, Asset Intelligence, Smart Polling, Central Management Console (CMC), and Remote
Collector (RC).
outbound connections
Outbound connections are those that go out to a specific device from a device/host. Guardian detects
a sudden increase of outbound connections from a specific learned source node. An alert is raised by
default when we detect a larger number of outbound connections than normal. By default, the detection
is only performed when the node is protected. Optionally, the detection can also be performed when
node learning is in process.
passive detection

The Nozomi Networks solution receives an out-of-band copy of data exchanged between devices while
continuously monitoring the network. Passive detection allows for a comprehensive state of risk without
impacting the production equipment.
plan (Smart Polling)
A Smart Polling plan is a scheduled job that collects additional data about a set of nodes. Plans allow
polling to be targeted to specific nodes, using specific protocols at a chosen interval.
playbooks
Playbooks are instructions associated with alerts that guide users to take proper action when an alert is
raised.
process
Process is a feature that presents Guardian process variables extracted by deep packet inspection.
protection mode
Protection mode provides alerts when behavior is different from the learned baseline. Stable network
nodes and segments are automatically protected. When learning changes to protecting, the system
triggers alerts for suspicious events deviating from the baseline.
query
The Nozomi Networks Query Language (N2QL) syntax is a concatenation of single commands
separated by the pipe (|) symbol in which the output of one command is the input for the next
command. This makes it possible to perform complex data processing by composing several simple
operations.
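As a minimal sketch of this syntax (the data source, command names, and field name below are
illustrative assumptions, not a reference for the language), a query can filter a table and then
aggregate the result:
   alerts | where risk > 7 | count
Here the output of the first command (the list of alerts) becomes the input of the where filter, and the
filtered rows become the input of the final count.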
Remote Collector (RC)
The Nozomi Networks Remote Collectors are low-resource sensors that capture data from distributed
locations and send it to Guardian(s) for further analysis. A Remote Collector is typically installed in
isolated areas (e.g., windmills, solar power fields), where it monitors multiple small sites. Traffic is
encrypted. The Remote Collector firmware receives automatic updates from the connected Guardian.
rollback
Rollback is the procedure to bring the previous version of the sensor back after an update. Rollback is
not always possible. Changes that inhibit this feature are highlighted in the release notes.
sandbox
Sandbox is an N2OS feature that scans files seen in the environment for potential threats.
sensor
A sensor is any component of the control, security, or any other system that shares raw or processed
data with Nozomi Networks solutions. Sensors are sources of information that contribute to the asset
discovery, management, and threat detection capabilities that Nozomi Networks provides. Sensors also
aggregate network and asset information from various sources to optimize network traffic, and increase
consistency of information across system components.
session
A session is a semi-permanent interactive information interchange between two communicating nodes.
A session is set up or established at a certain point in time, and then torn down at some later point.
An established communication session may involve more than one message in each direction.
Smart Polling™
Smart Polling™ is an add-on feature to the Nozomi Networks Guardian that allows it to contact nodes
for the purpose of gathering new information or enriching existing information through the use of plans.
Plans are user-defined and include instructions that describe the specific nodes to poll, and when and
how to poll them.
snapshot
A snapshot is a backup of the environment taken by the time machine at a point in time. It can be used
to compare different snapshots or a snapshot against the live environment.
stale
Stale is the status given to a sensor (Guardian/CMC/Remote Collector) when the time since the sensor
last communicated back to the CMC exceeds a configured threshold. This causes the health status of the
sensor to be set to unreachable.
strict learning
Guardian's strict learning feature uses a detailed anomaly-based approach, so deviations from the
baseline are detected and alerted. This approach is called strict because it requires that the learned
system behave as it has behaved during the learning phase, and assumes knowledge of the monitored
system to be maintained over time.
support archive
A support archive is a compressed set of data files containing all of the information useful for support to
troubleshoot an issue. This archive contains information about hardware status, network status, system
resource consumption, the database, and application information. Data inside the support archive can be
anonymized.
sync token
A sync token is a highly secure alphanumeric string used to register a sensor to its controller, a
Guardian or a CMC, which permits it to start the encrypted communication.
Threat Intelligence™
Nozomi Networks Threat Intelligence™ feature monitors ongoing OT and IoT threat and vulnerability
intelligence to improve malware anomaly detection. This includes managing packet rules, Yara rules,
STIX indicators and vulnerabilities. Threat Intelligence™ allows new content to be added, edited, and
deleted, and existing content to be enabled or disabled.
time machine
Time machine is a feature that permits users to manage, over time, the status of the observed network
that is learned by the Nozomi Networks Guardian (the snapshot).
trace
A trace is a sequence of network packets that have been processed and can be downloaded in a
Packet Capture (PCAP) file for analysis. Traces can be automatically generated by alerts or can be
requested.
traffic
Network traffic is made up of data packets that flow through the network. Within the Nozomi Networks
solution, the process of capturing the data packets is accomplished only by Guardians, Remote
Collectors, and Arc.
Vantage™
Vantage™ is the Nozomi Networks SaaS product that secures OT, IoT, and IT networks. The
platform allows scalable asset protection anywhere and consolidates security management in a single
application.
variable
The Nozomi Networks solution monitors the virtual representation of an industrial process using
numerical values. The process's numerical values are known as variables. Variables are identified by
the host, remote terminal unit (RTU) ID and name. Variables are listed in table format.
vulnerability
The Nozomi Networks solution finds weaknesses in system applications, operating systems, and
hardware components, then provides an assessment to identify, quantify, and rank them.
zones
Security zones are segmented sections of a network to limit access to the internal network. The
Nozomi Networks solution supports three zone types: (1) predefined (or standard) zones that are
preconfigured and cannot be modified; (2) user-defined zones that can be edited, removed, and
exported; and (3) auto-configured zones that are heuristically discovered, including some automatically
pre-filled fields.
