N2OS User Manual 19.0.4.1


Legal notices
Publication Date
December 2019

Copyright
Copyright © 2013-2019, Nozomi Networks. All rights reserved.
Nozomi Networks believes the information it furnishes to be
accurate and reliable. However, Nozomi Networks assumes no
responsibility for the use of this information, nor any infringement of
patents or other rights of third parties which may result from its use.
No license is granted by implication or otherwise under any patent,
copyright, or other intellectual property right of Nozomi Networks
except as specifically described by applicable user licenses. Nozomi
Networks reserves the right to change specifications at any time
without notice.
| Table of Contents | v

Table of Contents

Legal notices.......................................................................................... iii

Chapter 1: Preliminaries.........................................................................9
Prepare a Safe and Secure Environment...................................................................................10

Chapter 2: Installation.......................................................................... 11
Installing a Physical Appliance....................................................................................................12
Installing on Virtual Hardware..................................................................................................... 12
Installing the Container............................................................................................................... 13
Setup Phase 1.............................................................................................................................15
Setup Phase 2.............................................................................................................................17
Additional settings....................................................................................................................... 19

Chapter 3: Users Management............................................................ 21


Managing Users.......................................................................................................................... 22
Managing Groups........................................................................................................................ 24
Password policies........................................................................................................................26
Active Directory Users.................................................................................................................28
SAML Integration.........................................................................................................................30

Chapter 4: Basics..................................................................................31
Environment................................................................................................................................. 32
Asset............................................................................................................................................ 32
Node............................................................................................................................................ 32
Session........................................................................................................................................ 33
Link.............................................................................................................................................. 33
Variable........................................................................................................................................ 34
Vulnerability................................................................................................................................. 34
Query........................................................................................................................................... 34
Protocol........................................................................................................................................ 35
Incident & Alert............................................................................................................................35
Trace............................................................................................................................................ 36
Charts.......................................................................................................................................... 37
Tables.......................................................................................................................................... 38
Navigation through objects..........................................................................................................38

Chapter 5: User Interface Reference...................................................41


Supported Web Browsers........................................................................................................... 42
Navigation header....................................................................................................................... 42
Dashboard................................................................................................................................... 44
Alerts............................................................................................................................................ 48
Asset View...................................................................................................................................49
Network View...............................................................................................................................51
Process View...............................................................................................................................63
Queries........................................................................................................................................ 67
Reports........................................................................................................................................ 70
Time Machine.............................................................................................................................. 73
Vulnerabilities...............................................................................................................................76

Settings........................................................................................................................................ 77
System......................................................................................................................................... 95
Continuous Traces.................................................................................................................... 105

Chapter 6: Security Profile.................................................................107


Security Control Panel.............................................................................................................. 108
Learned Behavior...................................................................................................................... 108
Alerts.......................................................................................................................................... 109
Manage Network Learning........................................................................................................ 110
Custom Checks: Assertions...................................................................................................... 115
Custom Checks: Specific Checks............................................................................................. 117
Alerts Customization..................................................................................................................118
Security Profile.......................................................................................................................... 119
Alerts Dictionary........................................................................................................................ 121
Incidents Dictionary................................................................................................................... 127
Packet rules...............................................................................................................................128
Hybrid Threat Detection............................................................................................................ 131

Chapter 7: Vulnerability Assessment................................................133


Basics........................................................................................................................................ 134
Passive detection...................................................................................................................... 135
Configuration..............................................................................................................................136

Chapter 8: Smart Polling.................................................................... 137


Strategies................................................................................................................................... 138
Configurations............................................................................................................................ 138
Extracted information.................................................................................................................140

Chapter 9: Queries.............................................................................. 143


Overview.................................................................................................................................... 144
Reference.................................................................................................................................. 145
Examples................................................................................................................................... 153

Chapter 10: Maintenance....................................................................157


System Overview.......................................................................................................................158
Data Backup and Restore.........................................................................................................159
Reboot and shutdown............................................................................................................... 160
Software Update and Rollback................................................................................................. 161
Data Factory Reset................................................................................................................... 163
Support...................................................................................................................................... 163

Chapter 11: Central Management Console.......................................165


Overview.................................................................................................................................... 166
Deployment................................................................................................................................ 167
Settings...................................................................................................................................... 168
Connecting Appliances.............................................................................................................. 168
Troubleshooting......................................................................................................................... 169
Propagation of users and user groups..................................................................................... 170
CMC connected appliance - Date and Time............................................................................ 170
Appliances List.......................................................................................................................... 171
Appliances Map......................................................................................................................... 173
HA (High Availability)................................................................................................................ 175
Alerts.......................................................................................................................................... 177
Functionalities Overview............................................................................................................178

Updating.....................................................................................................................................179
Single-Sign-On through the CMC............................................................................................. 179

Chapter 12: Remote Collector............................................................181


Overview.................................................................................................................................... 182
Deployment................................................................................................................................ 183
Using a Guardian with connected Remote Collectors.............................................................. 187
Troubleshooting......................................................................................................................... 188
Updating.....................................................................................................................................189

Chapter 13: Configuration.................................................................. 191


Editing Configuration files..........................................................................................................192
Basic configuration rules........................................................................................................... 193
Configuring nodes..................................................................................................................... 198
Configuring links........................................................................................................................ 200
Configuring variables................................................................................................................. 202
Configuring protocols.................................................................................................................205
Configuring trace....................................................................................................................... 208
Configuring Time Machine........................................................................................................ 210
Configuring retention................................................................................................................. 211
Configuring Bandwidth Throttling.............................................................................................. 213

Chapter 14: Compatibility reference................................................. 215


SSH compatibility...................................................................................................................... 216
Chapter 1: Preliminaries

Topics:
• Prepare a Safe and Secure Environment

In this chapter you will receive preliminary information to get a Guardian or a CMC properly and securely installed.
Prepare a Safe and Secure Environment
Before starting the installation process, some preliminary checks need to be performed to ensure
optimal and secure operation of the system.
If you are installing a physical appliance, place it in a location that has been physically secured and to
which only authorized personnel have access. Observe the following precautions to help prevent
property damage, personnel injury or death.
• Do not use damaged equipment, including exposed, frayed or damaged power cables.
• Do not operate the appliance with any covers removed.
• Choose a suitable location for the appliance: it should be situated in a clean, dust-free area that is
well ventilated. Avoid areas where heat, electrical noise and electromagnetic fields are generated,
and areas where it can get wet. Protect the appliance from liquid intrusion; if the appliance gets
wet, disconnect power to it.
• Use a regulating uninterruptible power supply (UPS) to protect the appliance from power surges
and voltage spikes, and to keep your system operating in case of a power failure.
• A reliable ground must be maintained at all times. To ensure this, the rack itself should be grounded
and the appliance chassis should be connected for grounding to the rack via the provided appliance
grounding cable.
• It should be mounted into a rack or otherwise placed so that the amount of airflow required for safe
operation is not compromised.
• If mounted into a rack it should be placed so that a hazardous condition does not arise due to
uneven mechanical loading.
If you are installing a virtual appliance, contact your virtual infrastructure manager to ensure that all
the possible precautions are put in place to guarantee that the system's console is accessible to
authorized personnel only.
The appliance's management port should get an IP address assigned in a dedicated management
VLAN, so that access to it can be controlled at different levels and restricted only to a selected set of
hosts and people.
Before connecting any SPAN/mirror port to the appliance, ensure that the switch, router, firewall or
other networking device has been configured to send traffic in one direction only, toward the
appliance. The appliance's monitoring ports are configured to read traffic only and never inject
packets; however, to guard against human error (e.g. a SPAN port cable plugged into the management
port), it is useful to verify that no packet can be injected from those ports.
Chapter 2: Installation

Topics:
• Installing a Physical Appliance
• Installing on Virtual Hardware
• Installing the Container
• Setup Phase 1
• Setup Phase 2
• Additional settings

In this chapter you will receive the fundamental information necessary to get both Nozomi Networks
Solution physical and virtual appliances up and running.

Further information on additional configuration is given in the Configuration chapter. Maintenance
tasks are described in the Maintenance chapter.
| Installation | 12

Installing a Physical Appliance


If you have purchased a physical appliance from Nozomi Networks, it is already configured with the
latest stable release of Nozomi Networks Solution N2OS.
The first phase of the configuration requires attaching to the serial console of the appliance, using a
null-modem serial cable. N1000, N750 and P500 appliances use an RJ45 console plug, NSG-L and NSG-
M Series have a USB serial plug, while the R50 and R150 need a DB9 serial plug.
Once the cable is connected, open a terminal emulator such as HyperTerminal or PuTTY on
Windows, or cu or minicom on macOS and other *nix platforms.
Connect at a speed of 9600 baud with no parity bit. The appliance will show a login prompt.
Now proceed to the section Setup Phase 1 on page 15.

Installing on Virtual Hardware


Installation on virtual hardware has been tested on a variety of OVA-compatible environments.
However, the current release of N2OS officially supports these hypervisors:
1. VMware ESXi 5.5 or newer
2. Hyper-V 2012 or newer
3. XEN 4.4 or newer
4. KVM 1.2 or newer
The minimum resource requirements for a Guardian Virtual Machine (VM) are:
• 4 vCPUs running at 2 GHz
• 4 GB of RAM
• 10 GB of disk space minimum, on SSD or hybrid storage (100+ GB of disk recommended)
• 2 or more NICs (the maximum number depends on the hypervisor); one is used for management
and the others for traffic monitoring
Ensure that all these resources are provided in healthy conditions. Overall hypervisor load must be
kept under control, and no ballooning should regularly occur on the Guardian VM; otherwise
unexpected behavior may be experienced, such as dropped packets or overall poor system
performance.

Virtual Machine Sizing


The following table lists the minimum requirements for the Guardian VM size, based on the amount
of nodes and throughput. This is purely an indication, and differences in distribution of protocols and
hypervisor hardware might affect the optimal settings.

Nodes   Throughput (Mbps)   vCPU   RAM (GB)   Disk (GB)
1000    50                  4      4          50+
2500    150                 4      8          100+
5000    300                 8      16         250+
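As a quick provisioning aid, the sizing tiers above can be encoded in a small helper script. The following is an illustrative sketch only; the function name and output strings are our own, not part of N2OS:

```shell
#!/bin/sh
# Illustrative helper (not part of N2OS): suggest a Guardian VM size
# from the expected node count, following the sizing table above.
suggest_vm_size() {
  nodes="$1"
  if [ "$nodes" -le 1000 ]; then
    echo "4 vCPU, 4 GB RAM, 50+ GB disk"
  elif [ "$nodes" -le 2500 ]; then
    echo "4 vCPU, 8 GB RAM, 100+ GB disk"
  elif [ "$nodes" -le 5000 ]; then
    echo "8 vCPU, 16 GB RAM, 250+ GB disk"
  else
    echo "above tested sizing; consult support for guidance"
  fi
}

suggest_vm_size 2000   # prints: 4 vCPU, 8 GB RAM, 100+ GB disk
```

Remember that this encodes minimums only; differences in protocol mix and hypervisor hardware might call for more generous settings.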

Installing the Virtual Machine


In this section we will cover the installation of the Virtual Machine into the hypervisor. The result will
be a running VM; further configuration to enable external access is provided in subsequent sections.
To proceed you should be familiar with importing an OVA Virtual Machine into your hypervisor
environment. Should this not be the case, please refer to the manual or support service of your
hypervisor.

1. Import the Virtual Machine into the hypervisor and configure resources according to the minimum
requirements specified in the previous section.
2. After importing the VM, go to the hypervisor settings of the VM disk and set the desired size. Some
hypervisors, for instance VMware ESXi 6.0 and newer, allow the disk size to be changed at this stage.
With hypervisors that do not allow this operation, you must STOP HERE and proceed with the
instructions in Adding a secondary disk to Virtual Machine on page 13.
3. Boot the VM. It will now boot into a valid N2OS environment.
4. Log in as admin.
You will be logged in immediately, as no password is set by default.
5. Go to privileged mode with the command:

enable-me

You will now be able to perform changes into the system.

Adding a secondary disk to Virtual Machine


In this section we will cover how to add a bigger virtual data disk to the N2OS VM, in case the
main disk could not be grown during the first import. To proceed you should be familiar with
managing virtual disks in your hypervisor environment; otherwise, please refer to the manual or
support service of your hypervisor.
1. Add a disk to the VM and restart it
2. In the VM console, use the following command to obtain the name of the disk devices:

sysctl kern.disks

3. Assuming ada1 is the device name of the newly added secondary disk (note that ada0 is the OS
disk), execute this command to move the data partition to it:

data_move ada1

Adding a monitoring interface to the Virtual Machine


By default the VM has one management network interface and one monitoring interface. Depending on
deployment needs, it may be useful to add more monitoring interfaces to the appliance. To add one or
more interfaces, follow these steps:
1. If the VM is powered on, shut it down
2. Add one or more network interfaces from the hypervisor configuration
3. Power on the VM
The newly added interface(s) will be automatically recognized and used by the Guardian.

Installing the Container


The Container makes it possible to install the Nozomi Networks Solution on embedded platforms such
as switches, routers and firewalls that ship with a Container Engine onboard.
It is also a good fit for tightly integrated scenarios where several products have to interact on the
same hardware platform to provide a unified experience.
For all other use cases, a Physical Appliance or a Virtual Appliance is the recommended
option.

Install on Docker
After these steps you will have an image ready and a running container based on it.
A prerequisite for the steps below is to have Docker installed; this procedure has been tested with
versions 18.06 and 18.09.
The image can be built from the directory containing the artifacts with the command:
docker build -t n2os .
Once the image has been built, it can be run using for instance this command:
docker run --hostname=nozomi-sga --name=nozomi-sga \
  --volume=<path_to_data_folder>:/data --network=host -d n2os

where <path_to_data_folder> is the path to a volume where the appliance's data will be stored
and preserved for future runs.
The image has been built to automatically monitor all network interfaces visible to the container, and
the --network=host setting gives the container access to all network interfaces of the host computer.
The container can be stopped at any time with:
docker stop nozomi-sga
and started again with:
docker start nozomi-sga

Additional Details
The Container has the same features provided by the Physical and Virtual Appliances. A key difference
is that "system" settings must be provisioned through Docker commands, and are therefore not
editable from inside the container itself. A notable example is the hostname: it has to be set when
launching a new instance of the image.
It is mandatory to use volumes for the /data partition to make sure that data survives updates of
the image.
To update a container, build the new version of the n2os image, stop and destroy the currently
running containers, and start a new one with the updated image. Data will be automatically migrated
to the new version.
The --network=host Docker parameter lets the container monitor the physical NICs of the
host machine. However, by default the container will monitor all available interfaces. To restrict
monitoring to a subset, create a cfg/n2osids_if file in the /data volume containing the list of
interfaces to monitor, separated by commas (e.g. eth1,eth2).
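The interface-restriction file described above can be created from the Docker host before starting the container. This is a minimal sketch; DATA_DIR is a hypothetical placeholder for whatever host directory you mount as /data:

```shell
# Sketch: restrict the container to monitoring eth1 and eth2 only.
# DATA_DIR stands in for the host directory that is passed to
# "docker run --volume=<path_to_data_folder>:/data" (placeholder default).
DATA_DIR="${DATA_DIR:-./n2os-data}"
mkdir -p "$DATA_DIR/cfg"
# The file holds a comma-separated list of interfaces to monitor.
printf 'eth1,eth2' > "$DATA_DIR/cfg/n2osids_if"
```

After creating the file, restart the container so the new interface list is picked up.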

Setup Phase 1
We will now perform the very basic configuration needed to start using the Nozomi Networks Solution.
After these steps the system will have the management interface set up and will be reachable as a
text console via SSH and as a web console via HTTPS.
We assume that the Nozomi Networks Solution has already been installed and is ready to be
configured for the first time. Use the serial console in this phase for Physical Appliances, or the
hypervisor's text console for Virtual Appliances.
1. The console will display a prompt with the text "N2OS - login:". Type admin and then press [Enter].
On a Virtual Appliance you will be logged in immediately, as no password is set by default. On
Physical Appliances, the default password is nozominetworks.
2. Elevate the privileges with the command: enable-me
3. Now launch the initial configuration wizard with the command: setup

4. You will be prompted to choose the admin password first. Select a strong password as this will allow
the admin user to access the appliance through SSH.

5. Next, you will need to set up the management interface IP address. Select the "2 Network
Interfaces" menu entry in the dialog.

6. Depending on the appliance model, the management interface can be named em0 or mgmt.
Select it and press [Enter].

7. Edit the values for IP address (ipaddr) and Netmask (netmask), or enable DHCP to configure
everything automatically. Then move up to "X. Save/Exit" and press [Enter].

8. Now select "Default Router/Gateway" from the menu, and enter the IP address of the default
gateway. Press [Tab] and then [Enter] to save and exit.

9. Now select "DNS nameservers" from the menu, and configure the IP addresses of DNS servers.

10. Move up to "X Exit" and press [Enter].


11. The basic networking setup is done; the remaining steps will be performed in the web console
running on the management interface.

Setup Phase 2
This second phase of the setup is performed in the web console. Before starting, be sure to use one
of the supported web browsers.
The web console can be accessed by pointing your browser at https://<appliance_ip>, where
<appliance_ip> is the IP address assigned to the management interface. Please note that the
product ships with self-signed SSL certificates to get started, so you will need to add an exception in
your browser; later in this chapter we provide steps to import valid certificates. You should now see
the login screen:

Default username and password are admin / nozominetworks. For security reasons you will be
prompted to change these credentials at first login.
Once logged in, the remaining steps of the setup can be completed. Go to Administration >
General and change the host name.

Now fix the date and time settings. Go to Administration > Date and time, change the time
zone, set the date and, optionally, enable the NTP client.

The appliance is almost ready to be put into production: the next step is to install a valid license.

License
In the Administration > License page, copy the machine ID and use it together with the
Activation Code that you received from Nozomi Networks to obtain a license key. Once obtained,
paste the key inside the text box under "License configuration". After confirmation, the appliance
begins to monitor the configured network interfaces.

Figure 1: The License page



Additional settings
In this chapter some additional, non-mandatory settings of the system will be explained.

Install SSL certificates


In this section we will import a real SSL certificate into the appliance, needed to securely encrypt all
traffic between client computers and the N2OS appliance over HTTPS.
The N2OS web server that exposes the HTTPS interface is nginx. Prepare a certificate and a key file,
both compatible with nginx, and name them https_nozomi.crt and https_nozomi.key.
1. Upload the certificate and key file to the /data/tmp folder of the appliance with an SSH client.
For example, with https_nozomi.crt and https_nozomi.key in the current folder, open a
terminal, cd into it and then upload:

scp https_nozomi.* admin@<appliance_ip>:/data/tmp

2. Log into the text console, either directly or through SSH, then elevate the privileges:

enable-me

3. Execute the command n2os-addtlscert

n2os-addtlscert https_nozomi.crt https_nozomi.key

4. Now restart nginx by issuing the command

service nginx restart

5. Verify that the certificate is correctly loaded by pointing your browser to https://
<appliance_ip>/ and checking that the certificate is now recognized as valid.
6. Save the new setup by issuing this command in the console:

n2os-save

The imported SSL certificates are now working correctly and will also be applied on the next reboot.
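Before uploading, it can be worth checking that the certificate and key actually belong together, since a mismatched pair will prevent nginx from serving HTTPS. The following sketch assumes the standard openssl command-line tool is available on your workstation (it is not an N2OS command); the first command merely generates a throwaway self-signed pair to keep the example self-contained, so skip it when you have real files:

```shell
# Throwaway self-signed pair, only to make the example self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout https_nozomi.key -out https_nozomi.crt -days 1 2>/dev/null

# Extract the public key from each file and compare: equal output
# means the certificate and the private key belong together.
crt_pub=$(openssl x509 -noout -pubkey -in https_nozomi.crt)
key_pub=$(openssl pkey -pubout -in https_nozomi.key)
if [ "$crt_pub" = "$key_pub" ]; then
  echo "certificate and key match"
else
  echo "MISMATCH: do not upload these files" >&2
fi
```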

Install CA certificates
In this section we will add a CA certificate to an appliance. This procedure is needed to trust the
certificate exposed by nginx over HTTPS, in particular to secure the communication between a CMC
and its connected appliances.
Prepare the certificate and copy it under /data/tmp. The certificate formats accepted
by the command are DER and PEM; the PKCS#12 format is not accepted.
1. Upload the CA certificate file to the /data/tmp folder of the appliance with an SSH client. For
example, given a cert.crt file, open a terminal, cd into its directory and then upload:

scp cert.crt admin@<appliance_ip>:/data/tmp

2. Log into the text console, either directly or through SSH, then elevate the privileges:

enable-me

3. Execute the script n2os-addcacert

n2os-addcacert cert.crt

The imported CA certificate is now trusted by the appliance and can be used to secure the HTTPS
communication from a connected appliance to a CMC, as described in Connecting Appliances on
page 168.
Enabling SNMP
Monitoring the health state of the Nozomi Networks Solution appliance is important. This can be
performed in a standard manner by enabling the SNMP daemon.
The current SNMP daemon supports only version 2c.
Log into the text console, either directly or through SSH, and perform the following steps.
1. Use vi or nano to edit /etc/snmpd.conf
2. Edit the location, contact and community variables.
3. Now edit the /etc/rc.conf file to add the line

bsnmpd_enable="YES"

4. Start the service with the command

service bsnmpd start

5. Save all settings by issuing the command

n2os-save
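For reference, the edited variables in /etc/snmpd.conf might look like the following sketch. The variable names (location, contact, read) follow the bsnmpd convention, where read holds the v2c community string; the values are placeholders to adapt to your site.

```
location := "Server room, rack 3"
contact := "noc@example.com"
read := "public"   # the SNMP v2c community string
```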
Chapter 3: Users Management

Topics:
• Managing Users
• Managing Groups
• Password policies
• Active Directory Users
• SAML Integration

In this chapter all aspects related to authentication and authorization of users are covered. You will
be guided on how to set up local users, groups and external groups imported from Active Directory.
In the Nozomi Networks Solution a user's permissions are governed by their group. Each group can
have its own subset of allowed nodes and a list of allowed sections. Furthermore, a group can be
made a "super-administrator" by flagging it as "Is Admin".

Managing Users
In this section we will overview the management operations related to users.

List of users
1. Go to the Administration > Users page. You will get the list of all users. From the users
page it's possible to create and delete users and change the password and/or username of existing
users.

Adding a local user


1. Go to the Administration > Users page. Click on the "+" button. You will get to this screen:

2. Here you have to specify a username and a strong password, and choose the user's group (group
configuration will be covered in the next section). Clicking on the "x" button (or pressing ESC on the
keyboard) will close this window.

Edit a local user


1. Go to the Administration > Users page. Browse through the list of users and select the one
you want to edit by clicking on edit. You will get to a form like this:

2. Here you can adjust the username and update the password; you will have to enter two matching
passwords for the change to be applied. Clicking on the "x" button (or pressing ESC on the keyboard)
will close this window.

Managing Groups
In this section we will overview the management operations related to user groups, changing the
sections of the platform the user can access.

List of groups
1. Go to the Administration > Users page and move to the Groups tab.

Adding a local group


1. Go to the Administration > Users page. Move to the Groups tab. Click on the "+" button. You
will get to this screen:

2. Here you have to specify a name and an optional "node filters" list (a comma-separated list of
subnet masks used to limit the group to a subset of nodes). Finally, you will have to select one or
more section(s) that the group will be allowed to view and interact with. Optionally, the "Is admin?"
flag will enable the group to view and modify all sections of the system.
Each group has several properties:

Name A string to identify the group


Node filters A list of subnet addresses in CIDR format separated by comma to limit the
nodes a user can view in the Nodes, Links, Variables list, Graph, Queries
and Assertions
Allowed sections The sections that the user is able to view and to interact with
Is admin? An admin user can view all the sections, modify some settings and edit
users and groups

Edit a group
1. Go to the Administration > Users page. Move to the Groups tab. Browse through the list of
groups and select the one you want to edit by clicking on edit. You will get to a form like this:

Password policies
In this section we will provide an overview on how to manage local password policies.

Shell password policies


Passwords for local console and ssh accounts must meet the following complexity requirements. A
valid password must contain characters from these classes:
• upper case letters
• lower case letters
• digits
• other characters
They must be at least 8 characters long when they match 3 of these 4 classes, or 7 characters long
when they match all 4 classes.
Characters that form a common pattern are discarded.
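The class and length rules above can be sketched as a small validation function. This is an illustrative model of the policy, not the actual code used by N2OS; the common-pattern check is not modeled here.

```python
import string

def character_classes(password):
    """Count how many of the four character classes the password uses."""
    checks = [
        any(c in string.ascii_uppercase for c in password),  # upper case letters
        any(c in string.ascii_lowercase for c in password),  # lower case letters
        any(c in string.digits for c in password),           # digits
        any(not c.isalnum() for c in password),              # other characters
    ]
    return sum(checks)

def meets_shell_policy(password):
    """At least 8 characters with 3 classes, or 7 characters with all 4."""
    classes = character_classes(password)
    if classes == 4:
        return len(password) >= 7
    if classes == 3:
        return len(password) >= 8
    return False

print(meets_shell_policy("Abc123!"))   # -> True  (all 4 classes, 7 characters)
print(meets_shell_policy("abcdefgh"))  # -> False (only 1 class)
```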

Web GUI password policies


Passwords for web GUI local accounts must meet complexity requirements. By default, they must be
at least eight characters long and include a combination of upper-case letters, lower-case letters and
numbers.
The password history policy determines the number of unique new passwords that must be associated
with a user account before an old password can be reused. The password lockout policy prevents
brute-force attacks by disabling a user login for a fixed time after a number of unsuccessful attempts.
Local passwords and local user accounts can be forced to expire after a period of time. Admin
accounts can be protected from expiring. See the table below for the settings.
The default policies can be changed in the /data/cfg/n2os.conf.user file to best suit
organizational requirements.
Password policies can be checked using the info tooltip while adding or editing a user.

Parameter                                    Default value  Description
password_policy maximum_attempts             3              Number of unsuccessful login attempts before the user is locked
password_policy lock_time                    5              Number of minutes that a user account is locked out after unsuccessful login attempts
password_policy history                      3              Number of unique passwords to be used before an old one can be reused
password_policy digit                        1              Number of digits that a password must contain
password_policy lower                        1              Number of lower case characters that a password must contain
password_policy upper                        1              Number of upper case characters that a password must contain
password_policy symbol                       0              Number of symbols that a password must contain
password_policy min                          8              Minimum password length
password_policy max                          128            Maximum password length
password_policy inactive_user_expire_enable  false          Enable the inactive user policy
password_policy inactive_user_lifetime       60             Days of inactivity required before a user is disabled
password_policy admin_can_expire             false          Whether admin accounts are allowed to expire
password_policy password_expire_enable       false          Enable the password expiration feature
password_policy password_lifetime            90             Days after which a password change is forced
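As an illustrative sketch, assuming each entry in /data/cfg/n2os.conf.user is written as the parameter name followed by its value, stricter settings might look like the following. Check your release documentation for the exact syntax before editing the file.

```
password_policy min 12
password_policy history 5
password_policy password_expire_enable true
```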

Active Directory Users


Besides local users, existing users of an Active Directory domain can also be allowed to log in, and
their permissions can be defined based on their group.
In order to proceed with the configuration, you will need to have handy:
1. the domain name (aka pre-Windows 2000 name) (in this manual we will refer to it using
<domainname>)
2. the domain Distinguished Name (in this manual we will refer to it using <domainDN>)
3. one or more Domain Controller IP addresses (in this manual we will refer to an IP using
<domaincontrollerip>)

Configuring Active Directory Integration using the UI


In this section we will configure the Active Directory Integration from the UI.

1. Go to the Administration > Users page. Select the Active Directory tab.
2. Enter Username and Password.
You need to prepend the Domain Name to the Username, separated by a backslash character, as
shown in the example.
3. Specify a Domain Controller IP/Hostname.
You can check if the Active Directory service is running on port 389 (LDAP) or on port 636 (LDAPS)
by using the Check Connection button and the LDAPS selector.
Should you need to add another Domain Controller IP you can click on the Add host button.
4. Specify the Domain details in Domain name and Distinguished name.
5. Optionally configure the Connection timeout
6. Save the configuration by clicking on the Save button, which will also validate the data.
If there are errors, they will be shown beside the Status field.
The Delete configuration button allows you to delete the Active Directory configuration by
removing all its variables. This action is not recoverable.

Import Active Directory Groups


This section explains how to import an existing group from an Active Directory infrastructure. This step
is fundamental to allow Active Directory users to log into the system.
1. Go to the Administration > Users page. Select the Groups tab. In the Groups page, click on
the Import from Active Directory button.

2. From the import screen, start by specifying a domain administrative credential. Then click on the
Retrieve groups button to retrieve the list of groups.

In the Username field type the Active Directory user logon name in the <domainname>
\<domainusername> format
3. Now filter and select the desired groups to import. If you also want to import related groups (e.g.
parent groups), be sure to tick the checkbox near the Import button.

4. When finished, click the Import button. You will be redirected to the list of groups.

5. Now you can edit the group permissions. Active Directory users belonging to this group will be
automatically assigned to it and will inherit all permissions of the configured group.
6. After configuring Active Directory groups permissions, users can log into the system with the
<domainname>\<domainusername> user and their current domain password in the login screen.
SAML Integration
To enable a Single Sign On experience, SAML 2.0 Identity Providers are supported in the platform.
A SAML application for this system needs to be configured in the Identity Provider before proceeding.
The goal is to configure a new application where the Assertion Consumer Service URL must be
the URL of this system with the /saml/auth path (example: https://10.0.1.10/saml/
auth) and the Issuer must be the URL of this system with the /saml/metadata path (example:
https://10.0.1.10/saml/metadata). In the Identity Provider, download and save the metadata
XML file that will be used to configure the system.
To configure SAML login, go to the Administration > Users page. Select the SAML tab.

Once completely configured, the login page will integrate a new Single Sign On button:

In order for SAML to work properly, groups matching the SAML roles need to already exist in the system.
Groups are looked up by name: if the SAML role attribute specifies an "Operator" role, the "Operator"
group will be looked up when authorizing an authenticating user.
Chapter 4: Basics

Topics:
• Environment
• Asset
• Node
• Session
• Link
• Variable
• Vulnerability
• Query
• Protocol
• Incident & Alert
• Trace
• Charts
• Tables
• Navigation through objects

In this chapter you will be introduced to some basic concepts of the Nozomi Networks Solution, and
some recurring graphical interface controls will be explained. You must master these concepts in
order to understand how to properly use and configure the N2OS system.

Environment
The Nozomi Networks Solution Environment is the real time representation of the network
monitored by the Guardian, providing a synthetic view of all the assets, all the network nodes and the
communications between them.

Asset View
The Asset View section displays all your assets, intended as single discrete endpoints. In this
section it is easy to visualize, find and drill down on asset information such as hardware and software
versions.
For more details see Asset View on page 49.

Network View
The Network View section contains all the generic network information that is not related to the
SCADA side of the protocols, such as the list of nodes, the connections between nodes and the
topology.
For more details see Network View on page 51.

Process View
The Process View section contains all the SCADA-specific information, such as the list of SCADA
slaves, the slave variables with their history of values and other related information, an analysis of
the variable values and some variable-related statistics.
For more details see Process View on page 63.

Asset
An asset in the Environment represents an actor in the network communication and, depending on the
nodes and components involved, it can be something ranging from a simple personal computer to an
OT device.
All the assets are listed in the Environment > Asset View > List section and can also be
viewed in a more graphical way in the Environment > Asset View > Diagram section which
aggregates the assets in different levels.

Figure 2: An example list of assets

Node
A node in the Environment represents an actor in the network communication and, depending on the
protocols involved, it can be something ranging from a simple personal computer to an RTU or a PLC.
All the nodes in the Environment are listed in the Environment > Network View > Nodes section
or can be viewed in a more graphical way in the Environment > Network View > Graph section.

When a node is involved in a communication using SCADA protocols it can be a master or a slave.
SCADA slaves can be analyzed in detail in the Environment > Process View section.

Figure 3: An example list of network nodes

Session
A session is a semi-permanent interactive information interchange between two or more
communicating nodes.
A session is set up or established at a certain point in time, and then turned down at some later point.
An established communication session may involve more than one message in each direction.
The Nozomi Networks Solution shows the status of a session depending on the transport protocol, for
example a TCP session can be in the SYN or SYN-ACK status before being OPEN.
When a session is closed it will be retained for a certain amount of time and can still be queried to
perform subsequent analysis.
All the sessions are listed in the Environment > Network View > Sessions.

Figure 4: An example list of network sessions

Link
A link in the Environment represents the communication between two nodes using a specific protocol.

All the links are listed in the Environment > Network View > Link section and can be viewed in
a more graphical way in the Environment > Network View > Graph section.

Figure 5: An example list of network links

Variable
The Guardian creates a variable for each command used, each measure monitored and, more generally,
for each piece of information that is accessed or modified by the SCADA/ICS system. Different
characteristics can be attached to a variable depending on the protocol used to access or modify it. For
instance, highly specialized protocols such as IEC-60870-5-104 will generate and update variables with
a specific type and quality for each sampled value, which can also determine whether the sample is valid.
A variable has many properties, described in Process Variables on page 63 in detail. In particular,
the RTU ID and name properties will have specific values depending on the protocol, as explained in
the following section.
A recurring concept is the var_key, used as a universal identifier of a variable inside the system. The
var_key is an identifier that puts together the node IP address, the RTU ID and the
name in the form <node_ip>/<RTU_id>/<name>. For instance, a variable with name ioa-2-99,
located at RTU ID 24567 and accessed with the IP address 10.0.1.2 will have a var_key equal to
10.0.1.2/24567/ioa-2-99.
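The composition rule can be sketched in a few lines; this is an illustrative snippet, not N2OS code.

```python
def var_key(node_ip, rtu_id, name):
    """Compose the universal identifier <node_ip>/<RTU_id>/<name>."""
    return f"{node_ip}/{rtu_id}/{name}"

# The example from the text above:
print(var_key("10.0.1.2", 24567, "ioa-2-99"))  # -> 10.0.1.2/24567/ioa-2-99
```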

Vulnerability
A vulnerability is a weakness which allows an attacker to reduce a system's information assurance.
By constantly analyzing industrial network assets against a state-of-the-art repository of ICS
vulnerabilities, the Nozomi Networks Solution permits operators to stay on top of device vulnerabilities,
updates and patch requirements.

Figure 6: The vulnerabilities

Query
The N2QL (Nozomi Networks Query Language) syntax is inspired by the most common Linux and Unix
terminal scripting languages: the query is a concatenation of single commands separated by the |
symbol in which the output of a command is the input of the next command. In this way it is possible to
create complex data processing by composing several simple operations.

The following example is a query that lists all nodes ordered by received.bytes (in descending order):

nodes | sort received.bytes desc

For a reference of the graphical user interface and how to create or edit queries, see the Query -
User interface reference.
For a full reference of commands, data sources, and examples of the query language, see the Query -
complete reference.
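As a further illustration of command chaining, a query can filter rows before sorting them. The where command and field name used here are assumed for illustration and may differ in your N2OS release:

```
nodes | where received.bytes > 1000000 | sort received.bytes desc
```

The output of nodes feeds where, which keeps only the matching rows, and the result is then sorted.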

Protocol
In the Environment a link can communicate with one or more protocols. A protocol can be recognized
by the system simply by the transport layer and the port or by a deep inspection of its application layer
packets.

SCADA protocols mapping


All SCADA protocols are recognized by deep packet inspection and for each of them there is a
mapping that brings protocol-specific concepts to the more generic and flexible Environment Variable
model.
As an example of such mappings, consider the following table:

Protocol                          RTU ID                               Name
Modbus                            Unit identifier                      (r|dr|c|di)<register address>
IEC 104                           Common address                       <ioa>-<high byte>-<low byte>
Siemens S7 (Timer/Counter area)   Fixed to 1                           (C|T)<address>
Siemens S7 (DB or DI area)        Fixed to 1                           (DB|DI)<db number>.<type>_<byte position>.<bit position>
Siemens S7 (other areas)          Fixed to 1                           (P|I|Q|M|L).<type>_<byte position>.<bit position>
Beckhoff ADS                      <AMSNetId Target><AMSPort Target>    <Index Group>/<Index Offset>
and more...

Incident & Alert


An alert represents an event of interest in the observed system. Alerts can be of different kinds, for
instance they can derive from anomaly-based learning, assertions or protocol validation. In section
Alerts Dictionary on page 121 a complete list of alerts is given as a reference.
NOTE: when an alert is raised a trace request is issued.
An incident is a summarized view of alerts. When multiple alerts describe different aspects of the
same situation, N2OS's powerful correlation engine is able to group them and to provide a simple and
clear view of what is happening in the monitored system.
In section Incidents Dictionary on page 127 a complete list of incidents is given as a reference.

Figure 7: The Alerts section

Trace
A trace is a sequence of network packets that have been processed so far; it can be downloaded as
a pcap file for subsequent analysis.

The Nozomi Networks Solution shows the button with which you can download the available
traces. A trace can be generated by an alert or by issuing a trace request manually by clicking on ; you
can find this icon in all the sections that are related to the trace feature. However, in order to issue a
trace, non-admin users need the Trace permission.
For a detailed explanation of the traces configuration go to Configuring trace on page 208.
A continuous trace is a collection of network packets that are kept for future download. Such
collections can be requested through the GUI. The Nozomi Networks Solution will keep recording a
continuous trace from the moment it is requested until the request is paused.

For a detailed explanation of the continuous traces go to Continuous Traces on page 105.
Some examples:

Figure 8: Some alerts with trace, click on the three dots then on the cloud icon to download the pcap file

Figure 9: From the Links section click on the bolt icon to issue a manual trace request

Figure 10: It is possible to send a trace request also from the graph view

Charts
Charts are often used in the Nozomi Networks Solution to show different kinds of information, from
network traffic to the history of values of a variable. Here is a brief description of the two main chart
controls.

Area charts

A The title of the chart


B The buttons to switch on and off the live update of the chart
C The time window control, click to open the historic view
D The unit of measure of the chart
E The legend, in this case the entries in the legend represent a categorization
of the traffic. It is possible to click each entry to show or hide the associated
data series in the chart

History charts

A Buttons for detaching the chart, exporting the data to an Excel or CSV file

B The time window control


C The unit of measure
D The navigator: it is possible to interact with it using the mouse. Drag it to
change the visibility of the time window, enlarge or shrink it to change the
width of the time window

Tables
Tables are used in many sections of the Nozomi Networks Solution, for example to list nodes or
links. Tables offer different functionalities to the user; here is a brief introduction.

Figure 11: A table with a filter and a sorting applied

A Filtering control: while typing in it the rows in the table will be updated
according to the filter
B Sorting control: clicking on it will sort the table, clicking on the same heading
twice will change the sorting direction. Press the CTRL key while clicking to
activate multiple column sorting
C The reset buttons are separated in two sections and can independently
remove the filters and the sorting from the table
D Clicking this button will update the data in the table, click on Live to
periodically update the table content
E Use this menu to hide or show the columns. In order to save space, certain
tables have hidden columns by default

Navigation through objects

The navigation icon allows you to go directly to related objects.


Two examples:

Figure 12: Navigation options for a node

Figure 13: Navigation options for a link


Chapter 5: User Interface Reference

Topics:
• Supported Web Browsers
• Navigation header
• Dashboard
• Alerts
• Asset View
• Network View
• Process View
• Queries
• Reports
• Time Machine
• Vulnerabilities
• Settings
• System
• Continuous Traces

In this chapter we will describe every aspect of the graphical user interface. Each view of the GUI is
accompanied by a screenshot with a reference explaining the meaning and the behavior of each
interface control.

Supported Web Browsers


To have the best experience with the Nozomi Networks Solution web console be sure to use one of the
following web browsers:
• Google Chrome version 48 and later
• Chromium version 48 and later
• Safari version 9.0 and later (for macOS)
• Firefox version 49 and later
• Microsoft Internet Explorer version 11
• Microsoft Edge version 12 and later

Navigation header
The navigation bar is always present at the top of the Nozomi Networks Solution user interface. It
enables the user to navigate through the pages and it also displays some useful information about the
status of the system.

A The sections of the Nozomi Networks Solution, by clicking on them you will
change the page
B The user menu, by clicking on it you can logout or access the Other actions
page
C The sub navigation bar with:
• the collapse button, click on it to reduce the height of the navigation bar
• the monitoring mode button, click on it to disable the auto logout
• the time machine status, it is either LIVE, if the displayed data are
realtime, or a timestamp when a time machine snapshot is loaded
• the hostname
• the N2OS version
• the NTP offset
• some disk statistics, that is the used space and the available space
• the information about the license
• the language switcher, click to switch language on the fly

D The button that shows the administration menu.

Figure 14: The administration menu accessible from the navigation header.

Dashboard
The Nozomi Networks Solution offers multiple dashboards that are fully configurable. If you want to
configure them, go to Dashboard Configuration on page 45.
On top of all dashboards there are some useful controls:
• on the left, a time selector component allows you to choose the time window for the dashboard
data. Notice that all widgets are influenced by the time selector,
• on the right, a dropdown menu and a button with the wrench icon allow you, respectively, to choose
the dashboard that you want to see and to go directly to the dashboard configuration page.

Explanation of the sections of the first default built-in dashboard

Environment information A high-level view of what the Nozomi Networks Solution saw
in your network; click on a section (except Protocols) for further
details
Traffic by category A live chart of the traffic volume, divided between OT and IT
Assets Overview Assets divided in levels as per IEC 62443
Alerts flow over time Alerts risk charted over time
Situational awareness Gives you a list of evidence, ordered by severity
Latest alerts Latest alerts as they are raised
Failed assertions A list of your failed assertions

NOTE: it is possible to see more details for a section by clicking on the button (where available).

Dashboard Configuration
Go to Administration > Dashboards and choose the widgets that you want in your dashboard
along with their position and dimensions.
Note: Only allowed users can customize the dashboard.
Note: The first time that you customize your dashboard, you will not find any dashboard defined. In the
Dashboard section you will find just the built-in templates.

Main actions
Here you can find the main actions that you can execute on dashboards.

Import The Import button allows you to choose a dashboard configuration previously
saved in your computer.
New Dashboard... After clicking on the New Dashboard... button you can choose a built-in
template to start from.

Do not specify a template if you want to start from scratch.


Choose a With this dropdown menu, when defined, you can choose the dashboard that
Dashboard you want to modify.

Dashboard actions
Here you can find the main actions that you can execute on the dashboard configuration.

+ Add row With + Add row you can add a new row to the dashboard.
History Using this feature you can restore a previously saved version of the dashboard
that you are editing.
Delete Remove the dashboard from your dashboard list.

Edit By clicking on the Edit button you can rename the dashboard configuration and
customize the dashboard visibility.

Discard When you make some changes to the configuration and you want to discard it,
press Discard.
Clone After choosing a dashboard configuration, click on the Clone button to create a
new dashboard as a copy of the chosen one.
Export This button allows you to save the dashboard configuration to your local
computer.
Save After a change in the configuration, the Save button starts to blink and when
you click on it the new configuration is saved. As mentioned above, if you are an
admin user you will save the new default configuration for all the other users.

Row actions
In this section are explained all the actions that you can perform on a row in the dashboard
configuration page.

+ Add widget With + Add widget you can add a new widget to the row. By default it is added
after the widgets already present in the row.
Move row up/down By clicking on these buttons, you are able to move the row up or down in the
dashboard.
Delete row If you want to completely remove the row from the dashboard, you have to click
on the delete button.

Widget actions
When you want to change the aspect that a widget has in the dashboard, you can follow the
instructions below.

Increase/decrease width With these buttons you can increase or decrease the width of the
widget.
Increase/decrease height With these buttons you can increase or decrease the height of the
widget.
Adjust height in row By clicking on this button, the height of all the other widgets in the same
row is set to the current widget's height.
Move widget before/after With these buttons you can move the widget in the row, one step left or
one step right.
Move widget up/down By clicking on these buttons, you can move the widget in the previous
or in the next row.
Delete widget If you want to completely remove the widget from the row, you have to
click on the delete button.

Alerts
Alerts are listed in the Alerts table. The Alerts page comes in two modes: standard and expert. It is
possible to switch between the two by means of the buttons at the top of the page, as shown in
the figure below.

Figure 15: Standard/Expert mode selection

Non-admin users can access this section only if at least one of the groups they belong to has the Alerts
permission enabled. However, only admin users can perform actions on alerts (i.e. acknowledgment,
removal).

Figure 16: Alerts table in standard mode

Figure 17: Alerts table in expert mode

An explanation of the Alerts table (expert mode)

A The time span control enables the user to view alerts in a defined time
range.
B By selecting a grouping field the table will show all the alerts aggregated by
the selected field, for an example see the sample picture
C Clicking on the alert id will show a popup with more details.
D Clicking on the gear icon will open the learning page

Figure 18: The Alerts table grouped by protocol and sorted by risk

Figure 19: The Alerts details popup

Asset View

Figure 20: The Assets table

This page lists all the Assets in a table. By clicking on an Asset link it is possible to view a
popup with some additional details about the asset.

Figure 21: The Asset details popup



Network View

Network Nodes

Figure 22: The Nodes table

This page shows all the nodes in the Environment.


In addition to the node information there is an Actions column which enables the user to gain more
information about a node; here is an explanation:

Figure 23: Opens the configuration popup of the node

Figure 24: Opens a popup with only the alerts associated with the current node

Figure 25: Opens a popup with the requested traces

Figure 26: Opens a popup with the form to request a trace

Figure 27: By clicking this icon you can manage the learning of the node

Figure 28: Opens a popup that allows you to navigate to different sections

This page lists all the Nodes in a table. By clicking on an IP link it is possible to view a popup
with some additional details about the node.

Figure 29: The Node details popup

Network Links

Figure 30: The Links table

This page shows all the links in the Environment.


In addition to the link information there is an Actions column which enables the user to gain more
information about a link; here is an explanation:

Figure 31: Opens the configuration popup of the link

Figure 32: Opens a popup with only the alerts associated with the current link

Figure 33: Opens a popup with the history of TCP events (Available only for TCP links)

Figure 34: Opens a popup with the urls captured from the analyzed traffic (Available only for some protocols)

Figure 35: By clicking this icon you can manage the learning
of the link (its color depends on the learning status of the link)

Link Events

Figure 36: The link events popup

A The link availability calculated on the UP and DOWN events


B The time span control enables the user to view only the events in the
specified time range
C The graphical history of the events, a point with value 1 represents an UP
event, a value -1 represents a DOWN event
D The history of events shown in a table

Figure 37: A schematic representation of two link downtimes: d0 and d1

How Link Availability is calculated


A history of events is stored for each link. Two events are of particular interest for computing
availability: UP and DOWN. The former occurs when an activity is detected on an inactive link, whereas
the latter occurs when an active link stops its activity. Every event has a timestamp for tracking the
precise moment at which it happened.
Guardian computes the total downtime of a link by taking the history of events in a finite time window
and summing up all the time spans starting with a DOWN event and ending with an UP event.
By default a link is considered active, therefore the availability of the link will be 100% minus the
percentage of total downtime.
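The calculation described above can be sketched as follows. This is an illustrative model, not Guardian's actual implementation: events are simplified to (timestamp, kind) pairs and the window bounds are passed explicitly.

```python
# Illustrative sketch of the link availability calculation described above.
# Events are (timestamp, kind) pairs with timestamps inside the window.

def availability(events, window_start, window_end):
    """Return availability (%) over the window, summing DOWN->UP spans."""
    downtime = 0.0
    down_since = None  # by default a link is considered active (UP)
    for ts, kind in sorted(events):
        if kind == "DOWN" and down_since is None:
            down_since = ts
        elif kind == "UP" and down_since is not None:
            downtime += ts - down_since
            down_since = None
    if down_since is not None:  # still down at the end of the window
        downtime += window_end - down_since
    return 100.0 * (1 - downtime / (window_end - window_start))

# Two downtimes (d0 = 10 s, d1 = 20 s) in a 100 s window -> 70% availability
events = [(10, "DOWN"), (20, "UP"), (50, "DOWN"), (70, "UP")]
print(availability(events, 0, 100))  # 70.0
```

With no events at all the link is considered always active, so the function returns 100%, matching the default described above.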

Track Availability
The "Track Availability" feature allows for an accurate computation of the availability. It enables the
monitoring of the activity on the link at regular intervals, generating extra UP and DOWN events
depending on the detected activity on both sides of the link during the last interval.
To specify the interval for a link, go to the Links table (or any other section where the Link Actions are displayed) and click on the button, in order to open the following form.

It is advisable to choose a value greater than the expected link polling time, in order to avoid overly frequent checks that are likely to produce spurious DOWN events.

Network Sessions

Figure 38: The Sessions table

This page lists all the sessions in a table. By clicking on the From or To node IDs, additional
details about the involved nodes are displayed. The buttons in the Actions column enable the user
to request or view the traces and to navigate through the UI. The other columns contain fine-grained
information about each session, such as the source and destination ports and the number of transferred
packets or bytes.

Network Graph
The network graph page gives a visual overview of the network. In the graph, every vertex represents
a network node, while every edge represents one or more links between nodes. Edges and
vertices are annotated to give information about the identification of the node, the protocols used in
the communications between two nodes, and more. The contents of the graph can be filtered using
different criteria in order to obtain a clearer representation, or to highlight specific aspects.
The position of the nodes in the graph is determined either by a specific layout or by a dynamic automatic
adjustment algorithm that looks for minimal overlap and best readability of the items.
In order to better visualize the desired nodes/links, the user can move and zoom the graph using the
mouse.

Move: To move the graph, click somewhere that is not on a node and start dragging
Zoom (mode 1): With the mouse inside the window, turn the mouse wheel up and down to zoom in and out (scrolling). The zoom is centered on the mouse position
Zoom (mode 2): Drag vertically while keeping the 'z' key pressed. The zoom is centered on the position where the mouse dragging started

Figure 39: The Environment Network Graph showing info for the selected node

A The information pane contains the details about the selected item, that is
either a node or a link
B The button to toggle the information pane
C Drag this vertical line with the mouse to resize the information pane
D A node
E A link
F The button to reset all the customizations and reload the data
G The button to update the data; it keeps the current customizations
H The button to filter by activity time
I The button to toggle the dynamic adjustment motion of the items
J The magic wand button will open a wizard to help the user to filter the graph
and view only the desired information. It contains some solutions to reduce
the size of a big graph.
K The button that configures the appearance of the nodes.
L The button that configures the appearance of the links.
M The button that allows selecting a graph layout.
N The button that exports a PDF report containing the graph. Notice that the
graph is exported as it is currently shown on the page.
O The ? button is explained below.

Figure 40: Clicking on the ? button shows the legend for links and
nodes. The content of the legend reflects the selected perspectives

"Magic wand" options


The wizard helps the user with several hints to improve the performance of the graph. Settings
annotated with an orange exclamation mark are considered suboptimal, while a green thumb marks
options whose settings are considered helpful.

Show broadcast: Broadcast addresses are not actual network nodes, in that no asset is bound to a broadcast address. They are used to represent communications performed by a node towards an entire subnet. Removing broadcast nodes reduces the complexity of the graph.
Only with confirmed data: Unconfirmed links can be hidden easily to reduce the complexity of an entangled graph.
Only confirmed nodes: Unconfirmed nodes can be hidden to reduce the size of a large graph.
Exclude tangled nodes: Nodes whose connections make the graph too complex can be removed to improve its readability.
Protocols: Nodes and edges can be filtered so as to show only those items participating in communications involving one of the selected protocols. By clicking on "SCADA", all SCADA protocols are selected.

Nodes options

Perspective: Changes the color of the nodes according to a predefined criterion
Roles: Allows filtering the graph by node roles
Exclude IDs: Removes the specified IDs from the graph view; multiple IDs can be specified, separated by commas
ID filter: The graph can be filtered by one or more IDs, separated by commas
ID filter exact match: If checked, the ID filter shows only the nodes matching exactly the specified ID(s), rather than using a "starts with" criterion
Display: Chooses the label formatting of the nodes
Group by: Nodes with the chosen property (e.g. zone, subnet) are assigned to the same group; how the group is displayed then depends on the option chosen in the Layout options. With the Standard layout each group is shown collapsed as a single node, while with the Grouped layout all the nodes belonging to the same group are placed inside a circle
Show broadcast: If checked, includes in the graph all the nodes with a broadcast IP
Only confirmed nodes: If checked, shows only the nodes that exchanged some data in both directions while communicating

Links options

Perspective: Changes the color of the links according to a predefined criterion
Protocols: Allows filtering the graph by link protocols
Enable links highlighting: If checked, links become bolder in reaction to mouse movements, making a link easier to select (may affect performance)
Show protocols: If checked, every link shows its protocols
Only with confirmed data: If checked, shows only the links that exchanged some data in both directions

Layout options
The layout defines the way in which the nodes and links are shown in the graph.

Standard: The default layout; the visualization depends on the Group_by property:
• Group_by not defined: all the nodes and links are shown
• Group_by defined: all the nodes belonging to the same group are collapsed into a single node

Grouped: The nodes are grouped according to the criteria defined in Group_by, and the graph is visualized as follows:
• Group_by not defined: all the nodes and links are shown
• Group_by defined: all the nodes belonging to the same group are shown and placed inside a circle that represents the group; links between nodes belonging to the same group are shown, while links between nodes of different groups are replaced by links between the groups, represented as lines connecting the circles

Purdue model: Places the nodes in separate groups according to their level. This makes it possible to distinguish the different levels and to isolate potential problems due to communications that cross two or more level boundaries.

Figure 41: The Environment Graph with the zones pane opened with
the Group_by=Zones, Layout = Grouped and zone perspective.

Figure 42: The Environment Graph with the zones pane opened and the
zones perspective active to highlight the zone of origin of each node.

The zones pane offers the ability to filter the graph by clicking on a zone or on a link between two
zones. The zones graph also has a legend and shares some of the nodes and links options. Clicking
on a node or link in the zone pane will show some additional information about the zone or the links
between the zones. See the basic configuration rules to customize Zones.

Figure 43: The Environment Graph with the transferred bytes node
perspective highlighting the high traffic usage of the master nodes

Traffic
The Traffic tab in the Environment > Network View page shows some useful charts about
throughput, protocols and opened TCP connections.

Figure 44: The traffic charts

An explanation of the sections

A The throughput chart showing traffic divided in macro categories


B The throughput chart showing traffic for each protocol
C A pie chart showing the proportions of packets sent by protocol
D A pie chart showing the proportions of traffic generated by protocol
E The number of opened TCP connections

Process View
The process view tab can be accessed only by users that have the Process view permission.

Process Variables

Figure 45: The process view table, showing a large number of variables

The variables list shows many details about each variable; here is an explanation of each field:

Actions: Clicking on Variable details opens the variable details page; clicking on Add to favourites adds the variable to the favourite variables list
Host: The IP address of the slave to which the variable belongs
Host label: The label of the variable's host
RTU ID: An identifier of the variable container; for an explanation of the format see Protocol on page 35
Name: The name assigned to the variable; for an explanation of how this is calculated see Protocol on page 35
Label: A configurable description; for instructions see Configuring variables on page 202

Type: The type of the value; can be analog or digital
Value: The current valid value of the variable
Last value: The last observed value, with an indicator showing whether it is valid (green) or not (red). Clicking on the icon opens the variable history chart
Last valid quality: The last time the variable had a valid value quality
Last quality: The last value quality
Min value: The minimum value the variable has ever had
Max value: The maximum value the variable has ever had
Unit: The unit of measure; for instructions on its configuration see Configuring variables on page 202
Protocol: The protocol used to write or read the variable
# Changes: The number of times the variable value changed
# Requests: The number of read operations
Last client: The IP address of the last client querying the variable
Last FC: The function code of the last operation performed
Last FC Info: The function code information of the last operation performed
First activity: The first time an operation was performed
Last activity: The last time an operation was performed
Last change: The last time an operation performed on the variable changed its value
Flow control status: The status of the flow control; it can be:
• CYCLIC if the variable is detected to be updated or read at regular intervals
• NOT CYCLIC otherwise
• DISABLED if flow control has been disabled from the learning control panel
• LEARNING if the algorithm is still analyzing the flow
When the status is CYCLIC, a chart indicates the timing and the average value in milliseconds
Flow anomaly in progress: True if the system has detected that an anomaly is in progress, false otherwise. When an anomaly is in progress a Resolve button appears; by clicking on it the user can tell the system that the anomaly has ended. If the anomaly continues, another alert is raised
Active checks: Shows the active checks enabled on the variable
History enabled: A boolean flag showing whether value history is enabled for the variable
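To illustrate the idea behind the CYCLIC / NOT CYCLIC distinction: a cyclic variable is one whose read or update timestamps arrive at regular intervals. The naive detector below checks how far the inter-arrival intervals deviate from their average; the threshold and the logic are illustrative assumptions, not Guardian's actual flow control algorithm.

```python
def classify_flow(timestamps, tolerance=0.2):
    """Naive cyclicity check: CYCLIC if every inter-arrival interval stays
    within `tolerance` (as a fraction) of the average interval."""
    if len(timestamps) < 3:
        return "LEARNING"  # not enough samples observed yet
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = sum(intervals) / len(intervals)
    if all(abs(i - avg) <= tolerance * avg for i in intervals):
        return "CYCLIC"
    return "NOT CYCLIC"

# Timestamps in milliseconds
print(classify_flow([0, 1000, 2010, 2990, 4000]))  # CYCLIC (~1 s polling)
print(classify_flow([0, 1000, 1200, 4000, 4100]))  # NOT CYCLIC
```

A variable polled every second with small jitter is classified CYCLIC, while irregular, on-demand reads are not.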

Variable details
To see the details of a variable, you can click on the magnifying glass icon beside the variable.
In the Process Variable details you can see all the info of the variable and its value history in a chart
and in a table (if it is configured as monitored, see Configuring variables on page 202).

With the buttons above the chart, you can open the chart in another window or export the data in Excel
or CSV format.
By default, the chart shows the variable value history only for a specific period of time. Clicking on the
Live update checkbox makes the chart update in real-time.

Figure 46: The detailed view of a variable

Favorite variables
To add a variable to the favorite variables list, you can click on the star icon beside the variable.
Here you can see a chosen group of variables, those variables can also have their values plotted on
the same chart to make a comparison easier.

Figure 47: The process view table with favourites variables on top

Queries
All the data sources of the Nozomi Networks Solution can be queried using N2QL (Nozomi Networks
Query Language) from the query page (Analysis > Queries). In that page, you can also see all the
queries that are already saved in the running installation.
You can choose between Standard (currently offered as a beta feature) and Expert: the first allows for
an easier experience, useful if you want to quickly have a look at your data; the second allows for more
complex queries but requires more expertise.

Figure 48: Choose between Standard and Expert

Go to Queries on page 143 to get a complete reference of query commands and data sources.
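As a taste of the Expert mode, N2QL queries are written in a pipe-based style: a data source followed by a chain of commands. The query below is illustrative only; the exact data sources, commands and field names are those documented in the Queries reference.

```
alerts | sort time desc | head 10
```

Read left to right, this would list the ten most recent alerts.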

Query builder
The Query builder enables the user to easily create and execute queries on the observed system. To
do so just click through the different options.

Figure 49: The Query builder

While you build your query the available options change to reflect your choices, guiding you through
the process.

Figure 50: The Query builder during a query

Query Editor
The Query Editor enables the user to execute queries on the observed system. To execute a query just
type the query text in the field and press the enter key on the keyboard.

Figure 51: The Query Editor. Some sample queries are displayed
at the beginning, clicking on them will trigger the execution

After the execution, the result will be displayed as in the figure below. If the user has sufficient
privileges (i.e. belongs to a group with admin privileges), clicking on the floppy icon on the right
saves the query and displays it in the Saved Queries section; otherwise the button is disabled.
To save a query, you must specify a description and a group. By clicking on the Excel or CSV button
the query result will be downloaded in the corresponding format.

Figure 52: The Query Editor during a query



Saved Queries
When a query is saved, it will be displayed in the Saved Queries section. Here, by using the group
selector, it is possible to change the current group and to restrict the view to the queries of the chosen
group.
Query groups, a simple but powerful method to organize the queries, can be created, renamed and
deleted only by admin users. When a group is deleted, all the queries it contains are eliminated.
By clicking on the pen icon, it is possible to change the description and/or the group of a query. By
clicking on the trash icon, the saved query will be deleted. As with saving, admin privileges are
required to perform these operations.

Figure 53: The Saved Queries



Reports
The report page (Analysis > Reports) lets you generate both Custom Reports and Built-in
Reports:
• Built-in Reports: reports with predefined content that can be generated by the user when
required and downloaded as PDF. The following built-in reports are available:
• Asset Inventory
• CIS Controls - overview

• Custom Reports: reports based on custom queries that can be downloaded as PDF right
away or, alternatively, scheduled for cyclic creation. Custom Reports can be defined using
the Report Editor and, once defined, can be downloaded from Generated Reports.

Built-in Reports
When the Built-in reports section is selected, a list of the available built-in reports is shown on
the left. The user can select the desired report type, and a template preview will appear on the right.
The report can then be generated by clicking the Generate PDF button.

Figure 54: Built-in Reports

Additionally, a user with the Allow editor permission can schedule periodic reports. By clicking
on the Schedule report button, the window shown in the figure below will appear: from here, it is
possible to specify the desired properties.

Figure 55: Built-in Reports

Report Editor
The Report Editor enables the user to create or edit report templates that can be downloaded as PDF.
You can also export or import a report template to/from a JSON file.
You can edit the report visibility to filter user access to the template and to the reports generated from
it.
Non-admin users can see generated reports only if they have the Report permission enabled, and can
edit templates or generated reports only if they have write permission. You can edit users' report
permissions in the user groups section.

Figure 56: The Report Editor

During creation (or by clicking the Edit button) you can choose which user groups can see the current report.
Note: the visibility of generated reports follows the same restrictions, but it is not retroactive; the
visibility restriction is applied when the PDF report is generated

Figure 57: Edit report modal dialog

The Schedule report button permits you to customize the frequency with which the PDF version of
the current report will be generated.
By clicking the Execute now button you can request a PDF of the current report on demand.

Figure 58: Scheduling settings for report creation

Generated Reports
When a report is scheduled (or generated on-demand), the PDF version of the report can be found in
the Generated Reports section after its creation.

Figure 59: The Generated Reports

In this section you can browse the created reports, download them and, if needed, delete them.
It is also possible to configure the report retention by clicking on the Configure button. Here you can
set the number of days a scheduled report remains available and the maximum number of reports
that can be stored. The default values are 90 days and 500 stored reports.

Note: in case of lack of disk space, reports will be automatically deleted.

Time Machine
With the Time Machine the user can load a previously saved state (called a snapshot) and go back in
time, analysing the data in the Nozomi Networks Solution as it was at a past moment. It is possible to
load a single snapshot and use the platform as usual, or to load two snapshots and compare them in a
user interface that highlights what has changed.

Time Machine Snapshots List

Figure 60: The Time Machine Snapshots List

The snapshots periodically taken by the Nozomi Networks Solution are displayed in this table.
Snapshots can be used to go back in time to analyze the Environment status at a certain point in the
past. Moreover, they can be compared by means of a diff.

To load a snapshot

Figure 61: Load snapshot button

Click on the Load snapshot button to load a snapshot and analyze it as if you were in the past. The
user interface turns gray to highlight that you are viewing a static snapshot.
Click one of the forward buttons to return to the present and view the Environment in real time.

Figure 62: Forward button

Figure 63: Forward button in header



To request a diff
To request a diff from the snapshots list you must select two snapshots by clicking on the plus button
shown in the figure.

Figure 64: Plus button: click on it to include the snapshot in the diff

You can exclude the frequently changing fields from the diff result by selecting the corresponding
checkboxes. Fields such as those representing a time will then no longer influence the result.

Figure 65: Check it to exclude the frequently changing fields

After the snapshots are selected, just click on the diff button; the request will be processed and the
differences between the two snapshots will be shown.

Figure 66: The button to execute the diff between two snapshots

To configure retention, snapshot interval and event-based snapshot see Configuring Time Machine on
page 210.

Time Machine Diff from Alert

Figure 67: Fast diff button

Sometimes it is more convenient to request a snapshot diff starting from an alert; this automatic
feature uses the previous and the next snapshot relative to the alert time.
To make such a request, just open the alert details popup by clicking on an alert ID in the alerts table
and click on the time machine diff button; you will be redirected to the diff result page.

Time Machine Diff Result

Figure 68: Diff result, click on Show changes to see the differences

A Use these buttons to navigate between the Environment items


B Use these buttons to navigate between the subsections; in the example the
nodes with changes are displayed

In the diff result page there are four sections: Nodes, Links, Variables and Graph. In the Nodes,
Links and Variables sections there are three subsections: Added, Removed, Changed. By navigating
these sections and subsections you can observe how the Environment changed between the two
snapshots. You can see, for example, if a node has been added or if a variable value has changed. In
the next image there is a popup with the detailed changes for a single node.

Figure 69: Diff details for node
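Conceptually, the Added/Removed/Changed subsections partition the items of the two snapshots by their identifier, as in this simplified sketch (the real diff works on the full Guardian data model, not on plain dictionaries):

```python
def diff(old, new):
    """Partition two {id: attributes} snapshots into added/removed/changed."""
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return added, removed, changed

# Hypothetical node snapshots keyed by IP address
old = {"10.0.0.1": {"role": "master"}, "10.0.0.2": {"role": "slave"}}
new = {"10.0.0.1": {"role": "master"}, "10.0.0.2": {"role": "master"},
       "10.0.0.3": {"role": "slave"}}
added, removed, changed = diff(old, new)
print(sorted(added))    # ['10.0.0.3']
print(sorted(changed))  # ['10.0.0.2']
```

In this example one node appears only in the second snapshot (Added) and one node changed role between the two (Changed), mirroring what the diff page shows.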

In addition to the tabular representation there is also a graph view of changes. Thanks to the graph
view and the use of colors, you can quickly spot which nodes or links have been added, removed or
have some changes. An item that has been added is green, one that has been removed is red and,
finally, one that has changes is blue. Details are shown on the right side of the graph by clicking on a
node or a link with changes.

Figure 70: Diff result as a graph



Vulnerabilities

Figure 71: The Vulnerabilities table

This page lists all the vulnerabilities in a table. The user can filter it to show only the most likely
vulnerabilities; the likelihood threshold can be configured as shown in the picture below.

Figure 72: The most likely filter configuration form

By clicking on a CVE link it is possible to view a popup with some additional details about the
vulnerability.

Figure 73: The vulnerability details popup



Settings

Command Line Interface (CLI)


The Command Line Interface (CLI) allows you to change some configuration parameters and to
perform troubleshooting activities.
See the Configuration section for a complete list of configuration rules.

Figure 74: The Command Line Interface executing the license_info command

Useful commands

help Show the list of available commands


history Show the commands previously entered
clear Clear the console

Keyboard shortcuts

Ctrl+R Reverse search through commands history


Esc Cancel search
Up arrow Previous entry in history
Down arrow Next entry in history
Tab Invoke completion handler
Ctrl+A Move cursor to the beginning of line
Ctrl+E Move cursor to the end of line

OT ThreatFeed
The OT ThreatFeed section allows you to enrich the Nozomi Networks Solution with additional
information to improve the detection of malware and anomalies.

Figure 75: The OT ThreatFeed section

The OT ThreatFeed subsections let the user manage Packet rules, Yara rules, STIX
indicators and Vulnerabilities.
Packet rules are executed on every packet. They raise an alert of type SIGN:PACKET-RULE if a
match is found. For an explanation of the packet rules format see Packet rules on page 128.
Yara rules are executed on every file transferred over the network by protocols like HTTP or SMB.
An alert of type SIGN:MALWARE-DETECTED is raised when a match is found. Yara rules conform
to the specifications found at YaraRules.
STIX indicators contain information about malicious IP addresses, malware signatures or
malicious DNS domains. This information is used to enrich existing alerts, or to raise new ones.
Vulnerabilities are assigned to each node and depend on the installed software we identify in the
traffic. The Nozomi Networks Solution leverages CVE, a dictionary that provides definitions for publicly
disclosed cybersecurity vulnerabilities and exposures.
OT ThreatFeed already shipped with the Nozomi Networks Solution can be enabled or disabled but not
modified or deleted. New OT ThreatFeed can always be added, edited and deleted by the user.

OT ThreatFeed Update
OT ThreatFeed can also be updated automatically by the Nozomi Networks Solution. If you click on
the "Update" tab (see screenshot above) you will be presented with one or two different sections,
depending on whether your Guardian is connected to a CMC.

When not connected to a CMC

Figure 76: Update service connection configuration

An additional license, named "Update service license", is required in order to enable the service. The
Update service license may be added or modified from the corresponding section in the license page.
Then you may enable the feature by clicking on the checkbox to receive updated OT
ThreatFeed automatically. As the note says, make sure that https://nozomi-contents.s3.amazonaws.com
is reachable from your Guardian / CMC, otherwise the Nozomi Networks Solution
will not be able to fetch any OT ThreatFeed; once you are done, verify that the connection can be
established by clicking on the "Check connection" button.

When connected to a CMC

Figure 77: Update service connection configuration when connected to a CMC

In this scenario your OT ThreatFeed will be managed by the CMC to which you are connected.
The Nozomi Networks Solution will synchronize them. If this is your case make sure you have OT
ThreatFeed enabled on your CMC.

Connect through a proxy server

Figure 78: Update service connection configuration through a proxy server

In this scenario your OT ThreatFeed will be downloaded through the configured proxy server which
requires authentication. If your OT ThreatFeed updates are managed by the CMC the proxy server will
not be used.

Manual Update
If you cannot connect your appliance or CMC to the internet, you can add the latest OT ThreatFeed
updates through the manual update. Request the manual update package from support and use the
drag-and-drop update window shown in the image. After the update, all the new contents will be
propagated to the attached appliances or CMC. If you later want to switch to the OT ThreatFeed
online update, just flag the "Enable OT ThreatFeed online update provided by Nozomi Networks"
checkbox and click on "Save"; new contents will then come from the cloud and be automatically
merged and propagated to the attached appliances and CMC. The manual update is only enabled
when the OT ThreatFeed online update is disabled.

Understanding if you have the latest OT ThreatFeed

Figure 79: Update service connection configuration showing update and check times

In this screenshot there is some additional information:


• OT ThreatFeed last update: the date and time at which Nozomi Networks created a new
OT ThreatFeed update.
• Last contents check: the last date and time at which the Nozomi Networks Solution checked
for new OT ThreatFeed.
The presence of these dates means that your instance has correctly updated its OT ThreatFeed.
To let you know about the latest changes in OT ThreatFeed, this information is also shown in the
navigation header; clicking there will take you to the "Update" tab.

Figure 80: Navigation header showing last update of OT ThreatFeed



Firewall integrations
From this section it is possible to configure several integrations with firewalls offered by Guardian:
• Fortinet FortiGate v5 on page 82
• Fortinet FortiGate V6 on page 83
• Check Point Gateway on page 84
• Palo Alto Networks Next Generation Firewall on page 84
• Palo Alto Networks V9 on page 85
• Cisco ASA on page 86
• Cisco FTD on page 86
• Cisco ISE on page 87

In all these sections the provided user must have administrative privileges.
When the integration is working, some policies will be produced and inserted into the firewall; these
policies will be displayed in the policies section.

Fortinet FortiGate v5

Figure 81: The FortiGate v5 configuration section

In addition it is possible to tune the behaviour of the integration with these options:

Enable nodes blocking If checked, the new nodes appearing in the environment will be blocked by
the firewall
Enable links blocking If checked, the new links appearing in the environment will be blocked by the
firewall
Enable session kill If checked, the alerts will trigger a session kill by the FortiGate for the
involved link. It is possible to choose which alert types will be considered

Enable logging If checked, the policies inserted will have the logging feature enabled

Figure 82: The Guardian policies inserted in the FortiGate

This integration uses an SSH connection to communicate with the firewall. Starting from FortiGate v6
it is no longer possible to use this kind of integration; from v6 onwards it is necessary to use the
REST API integration (FortiGate v6).

Fortinet FortiGate V6
The access token needs permission to insert, read and delete entities such as address, addrgrp, route,
session and policy.
Vdom is optional; it can contain a single value or multiple values separated by ',', e.g. vdom1,vdom2.

Figure 83: The FortiGate V6 configuration section

In addition it is possible to tune the behaviour of the integration with these options:

Enable nodes blocking If checked, the new nodes appearing in the environment will be blocked by
the firewall
Enable links blocking If checked, the new links appearing in the environment will be blocked by the
firewall
Enable session kill If checked, the alerts will trigger a session kill by the FortiGate v6 for the
involved link. It is possible to choose which alert types will be considered
Enable logging If checked, the policies inserted will have the logging feature enabled

This integration uses the REST API to communicate with the FortiGate; it is available only from
version 6 onwards.
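As a rough illustration of what a REST firewall integration does under the hood, blocking a node amounts to creating an address object on the firewall and referencing it from a deny policy. The endpoint path, payload fields, vdom query parameter and Bearer token handling below are assumptions based on the public FortiOS REST API, not Guardian's actual implementation.

```python
import json

def fortigate_address_payload(name, ip):
    """Build the JSON body for creating a /32 address object
    (assumed endpoint: POST /api/v2/cmdb/firewall/address)."""
    return json.dumps({"name": name, "subnet": f"{ip} 255.255.255.255"})

# Sending it would look roughly like this (requires the `requests` package;
# endpoint, token handling and vdom parameter are assumptions):
#   requests.post(f"https://{fw}/api/v2/cmdb/firewall/address?vdom=vdom1",
#                 headers={"Authorization": f"Bearer {token}"},
#                 data=fortigate_address_payload("guardian_blk_10_0_0_5",
#                                                "10.0.0.5"),
#                 verify=False)

print(fortigate_address_payload("blk", "10.0.0.5"))
```

The names used here (e.g. guardian_blk_10_0_0_5) are hypothetical; the actual objects and policies created by the integration appear in the firewall's policies section as shown in the figures.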

Check Point Gateway

Figure 84: The Check Point Gateway configuration section

In addition it is possible to tune the behaviour of the integration with these options:

Enable nodes blocking If checked, the new nodes appearing in the environment will be blocked by
the firewall
Enable links blocking If checked, the new links appearing in the environment will be blocked by the
firewall

Figure 85: The Guardian policies inserted in the Check Point Gateway

Palo Alto Networks Next Generation Firewall

Figure 86: The Palo Alto configuration section

Add '!' before the endpoint declaration to skip the SSL check.


In addition it is possible to tune the behaviour of the integration with these options:

Enable nodes blocking If checked, the new nodes appearing in the environment will be blocked by
the firewall
Enable links blocking If checked, the new links appearing in the environment will be blocked by the
firewall

Figure 87: The Guardian policies inserted in the Palo Alto Networks Next Generation Firewall

Palo Alto Networks V9


Starting from version 9.0, PAN-OS provides a REST API. The Guardian integration relying on this new
API supports the same features as the previous Palo Alto integration and also the following ones:
• Commit by user: commits the current changes required by the user represented by the credentials
used for the api. Global commits are no longer performed
• Dynamic Access Groups for Node Blocking: the Dynamic Access Group references a tag which is
then assigned to new IP address objects that are created on the firewall. This will then automatically
apply the global Guardian blacklist rule to each new address without having to modify the firewall
ruleset
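The Dynamic Address Group mechanism described above can be pictured with a sketch of the REST request that creates a tagged address object. The path and payload shape follow the public PAN-OS 9.0 REST API, but the host, API key, and tag name are illustrative assumptions, not the values Guardian actually uses.

```python
import json
from urllib.parse import urlencode

def build_tagged_address(host: str, api_key: str, name: str, ip: str,
                         tag: str = "NOZOMI_BLOCK"):
    """Prepare URL, headers and body (request not sent) for creating a
    PAN-OS address object whose tag makes it a member of a Dynamic
    Address Group, so the blacklist rule applies without ruleset changes."""
    params = urlencode({"location": "vsys", "vsys": "vsys1", "name": name})
    url = f"https://{host}/restapi/9.0/Objects/Addresses?{params}"
    headers = {"X-PAN-KEY": api_key, "Content-Type": "application/json"}
    body = json.dumps({"entry": {"@name": name, "ip-netmask": f"{ip}/32",
                                 "tag": {"member": [tag]}}})
    return url, headers, body

url, headers, body = build_tagged_address("pa.example.com", "KEY", "blocked-node", "10.0.0.5")
print(url)
```

Because the rule references the tag rather than individual addresses, each new object is blocked automatically once it carries the tag.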

Figure 88: The Palo Alto v9 configuration section

Add '!' before endpoint declaration to skip SSL check.


In addition it is possible to tune the behaviour of the integration with these options:

Enable nodes blocking If checked, the new nodes appearing in the environment will be blocked by
the firewall
Enable links blocking If checked, the new links appearing in the environment will be blocked by the
firewall

Figure 89: The Guardian policies inserted in the Palo Alto Networks Next Generation Firewall

Cisco ASA

Figure 90: The Cisco ASA configuration section

SSL check is always skipped.


In addition it is possible to tune the behaviour of the integration with these options:

Enable nodes blocking If checked, the new nodes appearing in the environment will be blocked by
the firewall
Enable links blocking If checked, the new links appearing in the environment will be blocked by the
firewall
Enable session kill If checked, the alerts will trigger a session kill by the Cisco ASA for the
involved link. It is possible to choose which alert types will be considered

Figure 91: The Guardian policies inserted in the Cisco ASA

Cisco FTD
This integration permits killing sessions.

Figure 92: The Cisco FTD configuration section

SSL check is always skipped.


In addition it is possible to tune the behaviour of the integration with this option:

Enable session kill If checked, the alerts will trigger a session kill by the Cisco FTD for the
involved link. It is possible to choose which alert types will be considered

Cisco ISE

The Cisco ISE configuration


The preferred method to authenticate with Cisco ISE is via certificates. Guardian supports:
• Authentication using certificates issued by the ISE internal CA
• Authentication using certificates issued by an external CA (third party certificates)
Along with the client associated with the certificate and the certificate password, you need to upload
the identity certificate and the private key.

Figure 93: The Cisco ISE configuration using an ISE internal CA certificate

If you are using a third party certificate, you need to upload the external CA certificate as well.

Figure 94: The Cisco ISE configuration using a third-party certificate

It is also possible to authenticate via username and password. If you want to use an existing client, you
have to specify the password.

Figure 95: The Cisco ISE configuration using an existing client



Otherwise you can create a new client directly from the Guardian integration configuration window
by using the Create client button once you have specified the new client name. Remember that
you need to approve the new client from the Cisco ISE pxGrid Services window. The password
returned by Cisco ISE will not be displayed, but will be kept in the Guardian configuration.

Figure 96: The Cisco ISE configuration to create a new client

The available options are:

Enable nodes blocking If checked, the new nodes appearing in the environment will be blocked
by the firewall. Along with this option you can also choose the policy the
Guardian integration has to use. To do that you need to provide valid
connection details and use the Pull policies button before saving the
configuration.

Figure 97: The Guardian pull policies functionality

A list of policies already available in Cisco ISE will be displayed. In addition to
those, you can also choose the ANC_NOZOMI_BLOCK_IP policy. Once you
have chosen a policy for a given Guardian integration, you will not be able to
change it.

Figure 98: The Guardian policies inserted in the Cisco ISE

Troubleshooting configuration
The UI performs field validation when the Save and Pull policies buttons are pressed. In
case of missing fields, a warning message will be displayed. If there are any authentication errors, e.g.
a wrong password or a certificate mismatch, the UI will display a message detailing the reason for the error.
For further details regarding errors you may experience you can also search for the 'Cisco ISE' string in
the log file /data/log/n2os/n2osjobs.log.

Data Integration
In this section (Administration > Data Integration) users can configure several
endpoints. Each endpoint can receive Alerts or other items depending on its configuration.

Figure 99: Some examples of configured endpoint

FireEye CloudCollector
Besides Alerts, with FireEye CloudCollector integration it is possible to send Health Logs, DNS Logs,
HTTP Logs and File transfer Logs.

IBM QRadar (LEEF)


The IBM QRadar integration permits sending all Alerts (and optionally Health Logs) in LEEF
format. N2OS also sends asset information to the QRadar App (from version 2.0.0).

Common Event Format (CEF)


With this integration you are able to send, in CEF format, Alerts and Health Logs.
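For reference, a CEF record is a "CEF:0|..." header with seven pipe-separated fields followed by space-separated key=value extensions. The sketch below builds such a line; the version string and extension keys are illustrative assumptions, not the exact values N2OS emits.

```python
# Minimal sketch of a CEF (ArcSight Common Event Format) record.
# Vendor/product/version and the extension keys are illustrative.
def to_cef(signature_id: str, name: str, severity: int, ext: dict) -> str:
    """Build a CEF:0 line: seven pipe-separated header fields, then
    space-separated key=value extension pairs."""
    header = f"CEF:0|Nozomi Networks|Guardian|19.0|{signature_id}|{name}|{severity}|"
    return header + " ".join(f"{k}={v}" for k, v in ext.items())

print(to_cef("SIGN:SCADA-MALFORMED", "Malformed SCADA packet", 8,
             {"src": "10.0.0.5", "dst": "10.0.0.9"}))
```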

ServiceNow
This integration forwards incidents to a ServiceNow instance by using the provided parameters.

Splunk - Common Information Model (JSON)


If you need to send Alerts to a Splunk - Common Information Model instance, you can use this kind of
integration. Data are sent in JSON format and you are also able to filter on Alerts. You can also send
Health Logs and Audit Logs.

SMTP forwarding
To send Reports, Alerts and/or Health Logs to an email address, you can configure an SMTP
forwarding endpoint. In this case, you are also able to filter on Alerts.

SNMP Trap
Use this kind of integration to send Alerts through an SNMP Trap.

Syslog Forwarder
Use this kind of integration to send the captured Syslog events to a Syslog endpoint.
It is useful to passively capture logs and forward them to a SIEM.
Note: In order to enable the Syslog events capture see Enable Syslog capture feature on page 197.

Custom JSON
This type of integration sends all the Alerts to a specific URI using the JSON format.
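A receiving service only needs to parse the posted JSON body. The field names in this minimal sketch are hypothetical; the actual alert schema is defined by the Guardian export, not by this excerpt.

```python
import json

# Hypothetical alert body; real field names may differ.
posted = '{"type_id": "SIGN:SCADA-MALFORMED", "risk": 8, "src_ip": "10.0.0.5"}'

alert = json.loads(posted)
if alert["risk"] >= 7:
    print(f"high-risk alert {alert['type_id']} from {alert['src_ip']}")
```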

Custom CSV
This type of integration sends the results of the specified query to a specific URI using the CSV format.

Zone configurations
In this section (Administration > Zone configurations) network zones can be added and
configured.
Several standard zones are preconfigured and cannot be modified.
New zones can be added easily. Each zone is identified by a name that cannot contain spaces and
includes one or multiple subnets. All nodes pertaining to one of the subnets of a zone inherit the
properties of that zone, if any.

Figure 100: Zones table

The table lists all configured zones. Some zones are predefined and cannot be deleted or modified.
User-defined zones can be removed or edited by clicking on the respective icons. A new zone can be
added by clicking on the addition icon on the top right corner of the table.

Figure 101: Zone configuration

In order to configure a zone, a name not containing spaces must be defined. The zone can be set up
to correspond to one or more network segments. Network segments must be separated by commas.
Segments can be specified in CIDR notation <Ip address>/<mask>, e.g., 192.168.2.0/24, or
they can be defined by an IP range, e.g. 192.168.3.0-192.168.3.255. Both ends of a range are
included.
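The two segment notations can be checked with Python's standard ipaddress module: CIDR subnets directly, and ranges by comparing boundary addresses (both ends included, as in the zone configuration). This is only a sketch of the matching logic, not Guardian's implementation.

```python
import ipaddress

def in_segment(ip: str, segment: str) -> bool:
    """Return True if ip falls inside a segment given either in
    CIDR notation or as an inclusive 'start-end' IP range."""
    addr = ipaddress.ip_address(ip)
    if "-" in segment:  # range form, e.g. 192.168.3.0-192.168.3.255
        lo, hi = (ipaddress.ip_address(s) for s in segment.split("-"))
        return lo <= addr <= hi
    return addr in ipaddress.ip_network(segment)  # CIDR form

print(in_segment("192.168.2.42", "192.168.2.0/24"))              # True
print(in_segment("192.168.3.255", "192.168.3.0-192.168.3.255"))  # True
```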
Optionally, a level can be specified. The level defines the position of the nodes pertaining to the given
zone within the Purdue model. Once a level has been set for a zone, all nodes included in that zone
will be assigned the same level, unless a per-node configuration has been specified as well. This
means that, if two or more zones overlap, a node belonging to all of them will inherit the level of the
most restrictive zone.

System

General
In the Administration > General page it is possible to change the hostname of the Appliance
and to specify a login banner. The login banner is optional and, when set, it is shown on the login page
and at the beginning of all SSH connections.

Figure 102: The hostname and login banner input fields

Figure 103: An example of login banner

Date and time



Figure 104: Date and time configuration panel

From the date and time page you can:


• change the timezone of the appliance
• change the current time of the appliance (you can use the Pick a date or Set as client buttons to
set a date in a simple way)
• enable or disable time synchronization with an NTP server by writing a list of comma-separated
server addresses

Network interfaces

Figure 105: Network interfaces list

Actions With the configuration button, you are able to define/modify the NAT rule to
be applied to the current interface.
Interface The interface name
Is mirror It is true if the interface is likely receiving mirror traffic and not only
broadcast.
Mgmt filter When on, the traffic of the appliance is filtered out. It is on by default. To
change the value see the specific configuration rule in Basic configuration
rules on page 193.
BPF filter The BPF filter applied to the sniffed traffic.
NAT The NAT rule applied to the current interface.

In this form you can set the NAT configuration and the BPF filter.

Figure 106: Interface configuration form

In the NAT part you may configure the original subnet, the destination subnet and the CIDR mask for
the NAT rule.
In the BPF filter part you may configure the filter to apply to this interface. There are two ways to
configure the filter: via a visual editor or manually. By clicking on the "BPF Filter editor" button the
following visual editor appears. It is possible to edit the most common filters.

Figure 107: BPF filter editor

More complex filters may be inserted manually in the input box by clicking on the toggle.
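For reference, a few filters of the kind that can be entered manually, in standard libpcap/tcpdump syntax (the addresses and ports are only examples, and the trailing notes are not part of the filter itself):

```text
host 192.168.2.10                   traffic to or from a single node
tcp port 502                        Modbus/TCP traffic only
not (host 10.0.0.1 and port 22)     everything except SSH to one host
vlan and udp                        UDP traffic inside VLAN-tagged frames
```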

Figure 108: Manual insertion of a BPF filter



Upload PCAPs
In the Administration > Upload PCAPs page you can play a PCAP file into Guardian; the
appliance ingests the traffic as if it came through the network.

On top, there are flags that you can use to customize the behaviour of the upload/play action.

Use PCAP timestamps Check this if you want to use the time captured in the
PCAP file. Otherwise, the current time is used.
Delete data before play Check this option if you want to delete all data in the
Appliance before running the play action.
Auto play PCAP after upload With this flag enabled, the PCAP is played immediately
after the upload.

On every single PCAP file uploaded there are some available actions as shown below.

Replay PCAP With this action you can replay the PCAP.
Edit note Add or edit a note about the uploaded PCAP.
Delete from the list Erase the PCAP file from the Appliance; no Environment data will
be affected.

Note: By default, the Appliance has a retention of 10 PCAP files. To configure this value see
Configuring retention on page 211

Import

Load CSV file


This feature allows you to add nodes and assets from scratch (flagging create non-existing
nodes) or to enrich existing ones.

Figure 109: The import page

It is easy to bind the CSV fields to the Nozomi fields. If the CSV provides the headers in the first line
of the file, be sure to flag the Has header option to view the column titles. To put the data in the right
items, be sure to match the right Nozomi field with the imported data; for example, if the CSV file to
be imported contains a list of IP addresses, select the ip field in the Nozomi data field
dropdown. For each column in the CSV file to import it is possible to specify in which field the data has
to be imported by using the Nozomi field dropdown.
You can only match CSV fields against the Nozomi mac_address and ip fields. Binding is disabled
for matching fields because the matching information is used to bind the field. It is not possible to bind
fields before choosing a match.
Nozomi field type can only have value:
• switch
• router
• printer
• group
• OT_device
• broadcast
• computer
• cctv_camera
• PLC
• HMI
• barcode_reader
• sensor
• digital_io
• inverter
• controller
other values are not considered.
Nozomi field role can only have value:
• master
• slave
• engineering_station
• historian
• terminal
• web_server
• dns_server

• db_server
• time_server
• antivirus_server
• gateway
• local_node
other values are not considered.
The Nozomi field zone must match an existing zone; you can add a zone to make it match.

Figure 110: Binding fields

It is even possible to create and import custom fields (only for assets list).
To create a new field go to Administration > Data model and choose a name and a type for
your custom fields. After this operation the field will be available in the import page in the Nozomi
field binding dropdown.

Figure 111: Data model page

An example of a valid CSV file.

Figure 112: CSV example
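As a textual stand-in for the figure, a CSV of a valid shape might look as follows; the values and the exact column set are illustrative only (ip, mac_address, type, and role are the Nozomi fields discussed above):

```text
ip,mac_address,type,role
192.168.2.10,00:0c:29:12:34:56,PLC,slave
192.168.2.11,00:0c:29:ab:cd:ef,HMI,master
```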

Import Rockwell Configuration


With this feature a Rockwell project file can be imported; the information written in the project file will be
added to the asset data in the Nozomi Networks Solution.

Import Yokogawa Project


With this feature a Yokogawa project file can be imported; the information written in the project file will
be added to the asset data in the Nozomi Networks Solution.

Import Siemens Step7 (S7-400) Configuration


With this feature a Siemens (S7-400) configuration file can be imported; the parsed information will be
added to the asset data in the Nozomi Networks Solution.

Health
All the sections described below are available to the admin user. Additionally, access is granted to all
users with the Health permission.

Performance
In this tab there are three charts showing, respectively, the CPU, RAM and disk usage over time.

Figure 113: The performance charts

Health Log
If there are any kind of performance issues on the appliance, here you can see the history of those
problems with a short description, such as the number of packets, sessions, or nodes discarded in the
last 30 seconds.

Figure 114: The health log table

Audit
In the Administration > Audit page all relevant actions performed by users are listed, from
Login/Logout actions to configuration operations, such as learn/delete of objects in the Environment.
All the recorded actions are related to the IP and the username of the user who performed the action
and, as seen in the other Nozomi Networks Solution tables, you can easily filter and sort on this data.

Figure 115: The audit table

Reset Data
In the Administration > Data page it is possible to selectively reset several kinds of data used by
the Nozomi Networks Solution.

Environment All learned nodes, links and variables


Network Data All the history of network data visible in the charts
Process Data All the history of process data visible in the charts like the variables history
Assets Data All the information related to assets (e.g. software and hardware version)
Alerts The raised alerts
Traces The traces both generated by the alert and by a user request
Time machine The snapshots saved by the time machine
Queries The queries and query groups saved by each user
Assertions The assertions saved by each user

In addition to the usual buttons for selecting and deselecting all the checkboxes, All and None, there
is also a Only data button that selects everything but traces, queries and assertions.

Figure 116: The reset data form



Continuous Traces
The continuous trace page can be accessed through <Username> > Other actions >
Continuous trace. Here is where the continuous traces can be requested, managed, inspected and
downloaded. Non-admin users must belong to a group with the Trace permission in order to reach this
section and perform any action.

Each trace is saved in pcap files whose maximum size is 100MB. When a file has reached this
threshold, it is closed and a new file is created to keep collecting the network packets. The trace files
are saved in the hard disk of the appliance. Guardian makes sure 10% of the disk is always free. For
that reason, when the hard disk usage approaches the limit, the oldest pcap files belonging to the
continuous traces are deleted.
Traces can be stopped and resumed. When a trace is resumed, a new pcap file is created.
Continuous traces are persistent: when an appliance is restarted, the continuous traces, their
collected data and their statuses are resumed automatically.
In order to request a trace, enter a BPF filter in the corresponding field and click the Start button.
From the moment the button is pressed, Guardian will begin collecting packets corresponding to the
provided filter. The filter can be left empty, in which case all packets will be collected by the requested
continuous trace.
The table at the bottom of the page shows the continuous traces that have been requested. The
following information is given:

Time The time at which the trace has been requested.


ID A unique identifier of the trace request.
User The user who requested the trace.
Packet filter The BPF filter defining the collection.
In progress Whether the collection is active or stopped.

Several actions are available to manage the traces:

Figure 117: Starts the trace collection (disabled if the trace is currently in progress)
Figure 118: Stops the trace collection (disabled if the trace is currently paused)

Figure 119: Destroy the trace and discard all data collected

Figure 120: Download an archive containing all the pcap files belonging to the trace

Figure 121: List and download the pcap files collected by the trace
Chapter 6
Security Profile

Topics:
• Security Control Panel
• Learned Behavior
• Alerts
• Manage Network Learning
• Custom Checks: Assertions
• Custom Checks: Specific Checks
• Alerts Customization
• Security Profile
• Alerts Dictionary
• Incidents Dictionary
• Packet rules
• Hybrid Threat Detection

In this chapter we will explain how a Tailored Security Profile can be automatically built by Guardian
and subsequently tuned to fit specific needs.
Once the Security Profile has been built, different kinds of Alerts will be raised when potentially
dangerous conditions are met. There are four main categories of Alerts, each originating from different
engines within the product:
1. Protocol Validation: every packet monitored by Guardian will be checked against inherent
anomalies with respect to the specific transport and application protocol. This first step is useful to
easily detect buffer overflow attacks, denial of service attacks and other kinds of attacks that aim to
stress non-resilient software stacks. This engine is completely automatic, but can be eventually tuned
as specified in Alerts Customization on page 118.
2. Learned Behavior: the product incorporates the concept of
a learning phase. During the learning phase the product will
observe all network and application behavior, especially SCADA/
ICS commands between nodes. All nodes, connections,
commands and variables profiles will be monitored and analyzed
and, after the learning phase is closed, every relevant anomaly
will result in a new Alert. Details about this engine are described
in Learned Behavior on page 108.
3. Built-in Checks: known anomalies are also checked in real
time. Similarly to Protocol Validation, this engine is completely
automatic and works also when in Learning mode, but can be
eventually tuned as specified in Alerts Customization on page
118.
4. Custom Checks: automatic checks such as the ones deriving
from Protocol Validation and Learned Behavior are powerful and
comprehensive, but sometimes something specific is needed.
Here comes Custom Checks, a category of custom Alerts
that can be raised by the product in specific conditions. Two
subfamilies of Custom Checks exist and are described in Custom
Checks: Assertions on page 115 and Custom Checks: Specific
Checks on page 117.
The powerful autocorrelation of Guardian will generate Incidents that group specific Alerts into
higher-level actionable items. A complete dictionary of Alerts is described in Alerts Dictionary on
page 121 and Incidents Dictionary on page 127.
Additionally, changing the value of the Security Profile changes the visibility of the alerts shown by
Guardian based on the alert type.
| Security Profile | 108

Security Control Panel


The Security Control Panel allows you to view the current status and to make changes to Process and
Network Learning, Alert rules and Security Profile.

Figure 122: The Security Control Panel page

Learned Behavior
The Learned Behavior category of Alerts is based on two learning engines: the Network Learning and
Process Learning. Both engines can work in LEARNING mode and PROTECTION mode, and can be
governed independently.
1. Network Learning is about the learning of Nodes, Links, and Function Codes (e.g. commands) that
are sent from one Node to another. A wide range of parameters is checked in this engine and can
be fine-tuned as described in Manage Network Learning on page 110.
2. Process Learning is about the learning of Variables and their behavior. This learning can be fine-
tuned also with specific checks as described in Custom Checks: Specific Checks on page 117.
The learning progress of both engines can be monitored with the Last detected change and
Learning started fields, which report the point in time when the last behavior change was detected and
the time when learning started.
With the Dynamic Window option you can configure the time interval in which an engine considers a
change to be learned (every engine does this kind of evaluation per node and per network segment).
After this period of time, the learning phase is safely automatically switched to protection mode, with
the effect of:
• raising alerts when something is different from the learned baseline
• adding suspicious components to the Environment with the "is learned" attribute set to off, in such a
way that an operator can confirm, delete or take proper action from the manage panel.
In this way, stable network nodes and segments become protected automatically thus you are not
overwhelmed with alerts due to the premature closing of learning mode.

Figure 123: The learning overview page



Alerts
Alerts are generated by the different engines and can be very detailed and suitable for drill-down
analysis.
To provide a higher-level view, and faster operation of the system also by users without complete
knowledge of the observed system, Incidents are generated out of all generated Alerts by a powerful
autocorrelation engine.
Incidents summarize Alerts, providing a high-level explanation of what really happened. They
are visible by default in the Alerts table, but can be easily hidden if a more detailed view is required.

Manage Network Learning


In the Manage Network Learning tab it is possible to review and manage the Network Learning status
in detail. The graph is initialized with the node and link not learned perspectives, which highlight in
red or orange the items unknown to the system. In this way it is easy to discover new elements and
take action on them.

Figure 124: The manage page with the selection on an unlearned link

A A node which is not learned


B A link which is not learned. If the link is highlighted in orange it is learned,
but some protocols in it are not
C The information correlated to the current selection; the user can select the
items in it using the checkboxes and then execute some actions. When an
item is not learned it will be red, otherwise it will be green
D With the delete button the user can remove the selected item(s) from the
system
E With the learn button the user can insert the selected item(s) in the system
F When the configuration is complete the user can make it persistent using the
save button
G The discard button undoes all the unsaved changes to the system

How to learn protocols


1. Click on a red or orange link; information about the selection will be displayed in the right pane

2. Check the protocol that you want to learn. In this example we check browser. It is possible to
check more than one item at once

3. Click on the Learn button, a mark will appear on all the checked items which will be learned and
the Save button will start to blink indicating some unsaved changes

4. Click on the Save button, the protocol will be learned and it will become green. In this case also the
link will change color and become orange because some protocols are learned and some others are
not

5. Learning all remaining protocols will result in a completely learned grey link

How to learn function codes


If a protocol is a SCADA protocol, the information pane will also display the function codes. The
procedure for learning function codes is equivalent to the procedure for learning protocols.

Figure 125: A SCADA protocol with function codes

How to learn nodes


1. Click on a red node, its information will be displayed in the right pane

2. Check the item that you want to be learned



3. Click on the Learn button, a mark will appear on all the checked items which will be learned and
the Save button will start to blink indicating some unsaved changes

4. Click on the Save button, the information pane will turn to green, the learned items and the node in
the graph will become grey

Learning from alerts or incidents

Automatic learning
1. Click on the Close alert button.

2. Choose whether an alert/incident is security related or just a change in the configuration of the
network. In the second case the changes which originated the alert/incident will be learned by the
Environment.

Manual learning
1. Click on the gear icon to go to the learning page.

2. The graph will be focused on the link involved in the alert (by clicking on the X button the focus will
be removed). According to the alert there is a new node; follow the already explained procedure to
learn the desired items.

Custom Checks: Assertions


Assertions can be managed in Analysis > Assertions and are based on N2QL (fully explained
in section Queries on page 67). Thanks to the powerful query language it is possible to ensure that
certain conditions are met on the observed system and to be notified when an assertion is not satisfied.

Figure 126: The assertions page with a saved failing assertion and another assertion during the
editing phase.

A valid assertion is just a normal query with a special command appended at the end. The
assertion commands are:

assert_all <field> The assertion will be satisfied when each element in the query result set
<op> <value> matches the given condition
assert_any The assertion will be satisfied when at least one element in the query result
<field> <op> set matches the given condition
<value>
assert_empty The assertion will be satisfied when the query returns an empty result set
assert_not_empty The assertion will be satisfied when the query returns a non-empty result set

For example, it is possible to be notified when someone uses the insecure telnet protocol by saving
the assertion

links | where protocol == telnet | assert_empty

Editing an assertion
To edit an assertion just enter the text in the textbox and press the enter key to execute it. Multiple
assertions can be combined by using the logical operators && (and) and || (or). Round brackets
change the logical grouping as in a mathematical expression.

(links | where protocol == telnet | assert_empty && links | where protocol
== iec104 | assert_empty) && (nodes | where is_learned == false |
assert_empty)

Figure 127: A complex assertion being debugged

An assertion with logical operators and brackets can quickly become complex; to make the editing task
easier, a debug functionality is present. By pressing the debug button (on the right side of the textbox)
the query will be decomposed and the single pieces will be executed to show the intermediate results.

Saving an assertion

Assertions can be saved in order to have them continuously executed in the system. To save an
assertion just write it in the textbox, press the enter key to execute it and then click on the save button.
A dialog will pop up asking for the assertion name and some other information. In particular the

assertion needs to be assigned to an existing group. It is possible to create a new group by clicking on
the "New Group" button. The following dialog will appear asking for a group name.

It is also possible to choose whether the assertion has to trigger an alert. The saved assertion will be
listed at the bottom of the page with a green or red color to indicate the result.
NOTE: when editing the alert risk, only newly raised alerts are affected.

Custom Checks: Specific Checks


Specific Checks can be added to Links and Variables by opening the dedicated configuration dialog.
To configure checks on a Link, go to the Links table (or any other section where the Link Actions are

displayed) and click on the button.

Here you can flag and configure these checks:


1. Is persistent: when enabled, this check will raise a new Alert whenever a TCP handshake is
successfully completed on the Link.
2. Alert on SYN: when enabled, this check will raise a new Alert whenever a TCP SYN is sent by a
client on the Link.
3. Last Activity check: when enabled, this check will raise an Alert whenever the link is not receiving
any data for more than the specified amount of seconds.
(Track Availability instead does not trigger any alert).

To configure checks on a Variable, go to the Variables table and click on the button.

Here you can flag and configure these checks:


1. Last Activity check: when enabled, this check will raise an Alert whenever the Variable is not being
measured or changed for more than the specified amount of seconds.
2. Invalid quality check: when enabled, this check will raise an Alert whenever the Variable keeps an
invalid quality for more than the specified amount of seconds.
3. Disallowed qualities check: when enabled, this check will raise an Alert whenever the Variable gains
one of the specified qualities.

Alerts Customization
In the Alert rules tab of the Security Control Panel it is possible to customize the alert behavior.
Specifically, matching criteria can be specified by providing the IP or MAC addresses, the alert type ID,
and the protocol to the dedicated dialog.

All these custom alert modifiers can be viewed, modified or erased by clicking on the action icons in
each row of the table.

To create a new modifier click on the button on top of this table or in the actions column of an
alert in the Alerts page.

IP source/destination: Set the IP of the source/destination that you want to filter.
MAC source/destination: Specify the MAC of the source/destination that you want to filter.
Match IPs and MACs in both directions: Check this if you want to select all the communications between two nodes (IP or MAC) independently of their role in the communication (source or destination).
Type ID: The type ID of the alert. This field is precompiled if you create a new modifier from an alert in the Alerts page.
Protocol: Set the protocol that you want to filter.
Execute action: Select an action to perform on the matched alerts:
  - Mute: switch ON/OFF to mute or unmute the alert
  - Change risk: set a custom risk value for the alert
  - Change trace filter: define a custom trace filter to apply to this alert

Security Profile
By default the Security Profile is set to High.

Figure 128: Current value of the Security Profile

If you want to change the current value of the Security Profile, or check the types of alerts that are shown based on the current value, you can open the Security Profile tab.

Figure 129: The Security Profile tab

Changing the value of the Security Profile has an immediate effect on newly generated alerts and no effect on existing alerts, as highlighted by the message shown on top of the dropdown menu.

Figure 130: Changing the value of the Security Profile



Alerts Dictionary
As explained at the beginning of this chapter, four categories of Alerts can be generated by the Nozomi Networks Solution. Here we provide a complete list of the different kinds of Alerts that can be raised. It should be noted that some Alerts can specify the triggering condition: for instance, the Malformed Packet Alert can be instantiated by each protocol with some specific check information.

List of Alerts

Category | Type ID | Name | Trigger
--- | --- | --- | ---
Protocol Validation | SIGN:NETWORK-MALFORMED | Malformed network packet | A malformed packet is detected during the Deep Packet Inspection phase.
Protocol Validation | SIGN:SCADA-MALFORMED | Malformed SCADA network packet | A malformed packet is detected during the Deep Packet Inspection phase.
Protocol Validation | SIGN:SCADA-INJECTION | Injection of a SCADA packet | A traffic injection of SCADA packets has been detected in the network.
Protocol Validation | SIGN:INVALID-IP | Invalid IP addresses | A packet with invalid IP addresses reserved for special purposes (e.g. loopback addresses) has been detected. Packets with such addresses can originate from misconfiguration or from spoofing/denial-of-service attacks.
Protocol Validation | SIGN:DHCP-OPERATION | Suspicious DHCP activity | A DHCP request from an unknown device has been found in the network, as a sign of a new device that is trying to obtain an address.
Protocol Validation | SIGN:WEAK-ENCRYPTION | Insecure TLS version detected | An old and insecure version of TLS has been used to encrypt HTTPS traffic.
Learned Behavior | VI:NEW-ARP | New device appeared | A new unseen node appeared through ARP traffic. This Alert is useful also to detect devices that are connected near the sniff interfaces of Guardian but are not sending relevant application-level packets through the network.
Learned Behavior | VI:NEW-MAC | New MAC appeared | A new unseen MAC address has appeared in the network.
Learned Behavior | VI:GLOBAL:NEW-MAC-VENDOR | New MAC vendor appeared | A previously unseen MAC vendor has appeared in the network.
Learned Behavior | VI:GLOBAL:NEW-FUNC-CODE | New Function Code has been used | A previously unseen Function Code for a protocol has been observed in the network.
Learned Behavior | VI:NEW-NET-DEV | New network device appeared | A new unseen network device, such as a switch, router or firewall, has appeared in the network.
Learned Behavior | VI:NEW-NODE | New source node appeared | A new unseen node starts to send packets in the network.
Learned Behavior | VI:NEW-NODE:TARGET | New target node appeared | A new unseen node starts to send packets in the network.
Learned Behavior | VI:NEW-SCADA-NODE | New SCADA node appeared | A new unseen node speaking SCADA protocols starts to send packets in the network.
Learned Behavior | VI:NEW-LINK | New target used | A node tries to communicate with a node not contacted before.
Learned Behavior | VI:NEW-PROTOCOL | New protocol used | A new protocol has been tried between two nodes.
Learned Behavior | VI:NEW-PROTOCOL:CONFIRMED | Protocol is confirmed | A protocol between two nodes has been confirmed at Layer 4 (the endpoint has accepted the connection).
Learned Behavior | VI:NEW-PROTOCOL:APPLICATION | Application protocol detected | A Layer 7 protocol has been detected in a Layer 4 protocol.
Learned Behavior | VI:NEW-FUNC-CODE | New SCADA function code | A node starts using a function code never seen before.
Learned Behavior | VI:PROC:NEW-VAR | New SCADA variable | A new variable has been detected in a SCADA slave.
Learned Behavior | VI:PROC:NEW-VALUE | New behavior on SCADA variable | A new variable value or behavior has been detected in a SCADA slave.
Learned Behavior | VI:PROC:PROTOCOL-FLOW-ANOMALY | Protocol flow anomaly | This kind of alert is raised when the Process-related behavior of a protocol changes in a suspicious manner.
Learned Behavior | VI:PROC:VARIABLE-FLOW-ANOMALY | Unexpected timing flow for a variable | The access over time to a variable has changed in an unexpected manner.
Learned Behavior | VI:NEW-NODE:MALICIOUS-IP | Bad reputation IP | A node with a bad reputation IP has been created. It is suggested to validate the health status of communicating nodes, as they may be infected by some malware.
Learned Behavior | SIGN:MALICIOUS-IP | Bad IP reputation | A node with a bad reputation IP was found. If the IP is valid, mark it as confirmed.
Learned Behavior/Custom Checks | PROC:CRITICAL-STATE-ON | Entered in Process Critical State | The system has entered a Process Critical State that has either been learned or inserted as a custom check.
Learned Behavior/Custom Checks | PROC:CRITICAL-STATE-OFF | Exited from Process Critical State | The system has exited from a Process Critical State.
Built-in Checks | SIGN:PACKET-RULE | Packet rule match | A packet rule matching a specific security check has matched. This Alert requires a thorough check of what happened to verify whether an attacker is trying to compromise one or more hosts.
Built-in Checks | SIGN:MALWARE-DETECTED | Malware detected | A malicious payload has been transferred over the network.
Built-in Checks | SIGN:UNSUPPORTED-FUNC | Unsupported function was asked | An unsupported function has been called on the remote peer. This may mean that malfunctioning software is trying to perform an operation without success, or that a malicious attacker is trying to understand the functionalities of the device.
Built-in Checks | SIGN:PROC:MISSING-VAR | Non existing variable accessed | An attempt to access a nonexistent variable has been made. This can be due to a reconnaissance activity or a configuration change.
Built-in Checks | SIGN:PROC:UNKNOWN-RTU | Unknown RTU ID requested | An attempt to access a nonexistent RTU has been made. This may be due to a misconfiguration or an attempt to discover valid RTUs of a slave.
Built-in Checks | SIGN:PROTOCOL-ERROR | Protocol error | A generic protocol error occurred; this usually relates to a state machine, option or other general violation of the protocol.
Built-in Checks | SIGN:OT_DEVICE-START | OT device start requested | The OT device program has been requested to start again by the sender host. This may be legitimate during engineering operations on the OT device, for instance maintenance of the program itself or a reboot of the system for updates. However, it may indicate suspicious activity of an attacker trying to manipulate the state of the OT device.
Built-in Checks | SIGN:OT_DEVICE-STOP | OT device stop requested | The OT device program has been requested to stop by the sender host. This may be legitimate during engineering operations on the OT device, for instance maintenance of the program itself. However, it may indicate suspicious activity of an attacker trying to halt the process being controlled by the OT device.
Built-in Checks | SIGN:OT_DEVICE-REBOOT | OT device reboot requested | The OT device has been requested to reboot by the sender host. This may be legitimate during engineering operations on the OT device, for instance maintenance. However, it may indicate suspicious activity of an attacker trying to disrupt the process being controlled by the OT device.
Built-in Checks | SIGN:PROGRAM:DOWNLOAD | Program downloaded from device | The program of the OT device has been downloaded from another host. This can be a legitimate operation during maintenance and update of the software, or an unauthorized attempt to read the program logic.
Built-in Checks | SIGN:PROGRAM:UPLOAD | Program uploaded to device | The program of the OT device has been uploaded. This can be a legitimate operation during maintenance and update of the software, or an unauthorized attempt to disrupt the normal behavior of the system.
Built-in Checks | SIGN:PROGRAM:CHANGE | Program change detected | The program on the OT device has been uploaded and changed. This can be a legitimate operation during maintenance and update of the software, or an unauthorized attempt to read the program logic.
Built-in Checks | SIGN:DEV-STATE-CHANGE | Device state change detected | This kind of alert is raised when a change of the state of a device is detected, for example when an OT device is asked to enter a new mode or a factory reset is issued.
Built-in Checks | SIGN:MAN-IN-THE-MIDDLE | Man-in-the-middle detected | This kind of alert is raised when a man-in-the-middle attack is detected.
Built-in Checks | SIGN:CONFIGURATION-CHANGE | Configuration change detected | The configuration on the device has been uploaded and changed. This can be a legitimate operation during maintenance or an unauthorized attempt to modify the behaviour of the device.
Built-in Checks | SIGN:CPE:CHANGE | Installed software change detected | This kind of alert is raised after the detection of an installed software change.
Built-in Checks | SIGN:PASSWORD:WEAK | Weak password used | A weak password has been used to access a resource. To safely protect your systems, change the passwords of devices and manage them in a secure manner.
Built-in Checks | SIGN:MULTIPLE-UNSUCCESSFUL-LOGINS | Multiple unsuccessful logins | This kind of alert occurs when a host is repeatedly trying to log in to a service without success.
Built-in Checks | SIGN:TCP-FLOOD | Generic TCP flood | This kind of alert occurs when one or many hosts send a great amount of anomalous TCP packets or TCP FIN packets to a single host.
Built-in Checks | SIGN:TCP-SYN-FLOOD | TCP SYN flood | This kind of alert occurs when one or many hosts send a great amount of TCP SYN packets to a single host.
Built-in Checks | SIGN:PROTOCOL-FLOOD | Protocol flood | This kind of alert occurs when one or many hosts send a suspiciously high amount of packets with the same application layer (e.g., ping requests) to a single host.
Built-in Checks | SIGN:ARP:DUP | Duplicated IP address | This kind of alert occurs when a duplicated IP is spotted on the network by analyzing the ARP protocol.
Built-in Checks | NET:RST-FROM-SLAVE | Slave sent RST on Link | A slave closed the connection to the master. This can be due to the device restarting or behaving in a strange manner.
Built-in Checks | PROC:WRONG-TIME | Time issue detected | A slave reported a wrong time regarding Process data. This may be due to incorrect time synchronization of the slave, a misbehavior, or a sign of compromise of the device.
Built-in Checks | SIGN:CLEARTEXT-PASSWORD | Cleartext password | A cleartext password was issued or requested by a host.
Built-in Checks | SIGN:DDOS | DDOS attack | A suspected Distributed Denial of Service attack has been found on the network. Verify that all the devices in the network are allowed and behaving correctly.
Built-in Checks | SIGN:NETWORK-SCAN | Network Scan | A node starts a Network Scan in your network, i.e. a TCP/UDP portscan or a ping sweep.
Built-in Checks | SIGN:ILLEGAL-PARAMETERS | A request with illegal parameters was asked | A request with illegal parameters has been performed. This may mean that malfunctioning software is trying to perform an operation without success, or that a malicious attacker is trying to understand the functionalities of the device.
Built-in Checks | SIGN:MALICIOUS-DOMAIN | Malicious domain | A DNS query towards a malicious domain has been detected. It is suggested to investigate the health status of the involved nodes.
Built-in Checks | SIGN:MALICIOUS-URL | Malicious URL | A request towards a malicious URL has been detected. It is suggested to investigate the health status of the involved nodes.
Built-in Checks | SIGN:MULTIPLE-OT_DEVICE-RESERVATIONS | Multiple OT device reservations | This kind of alert occurs when a host is repeatedly trying to reserve the usage of an OT device, causing a potential denial of service.
Built-in Checks | SIGN:FIRMWARE-CHANGE | Firmware change requested | A firmware has been uploaded to the device. This can be a legitimate operation during maintenance or an unauthorized attempt to change the behaviour of the device.
Built-in Checks | SIGN:MALICIOUS-PROTOCOL | Malicious Protocol detected | An attempted communication by a malicious protocol has been detected.
Custom Checks | ASRT:FAILED | Assertion Failed | A custom Assertion has failed.
Custom Checks | NET:TCP-SYN | Link connection | A link configured with the specific check has received a new TCP SYN.
Custom Checks | NET:LINK-RECONNECTION | Link reconnection | A link configured as persistent has a new TCP handshake.
Custom Checks | NET:INACTIVE-PROTOCOL | Inactive protocol (ON|OFF) | A link configured with :check_last_activity N stays inactive for more than N seconds.
Custom Checks | PROC:STALE-VARIABLE | Stale variable (ON|OFF) | A variable configured with :check_last_update N does not have its value updated for more than N seconds.
Custom Checks | PROC:INVALID-VARIABLE-QUALITY | Invalid variable quality (ON|OFF) | A variable configured with :check_quality N keeps its value with an invalid quality for more than N seconds.
Custom Checks | PROC:NOT-ALLOWED-INVALID-VARIABLE | Variable with invalid quality | A variable that has been configured with a specific check has been detected to have a not allowed quality.
Custom Checks | PROC:SYNC-ASKED-AGAIN | OT device synchronization asked | A new general interrogation command is issued; this can be an anomaly since this command should be performed once per OT device.

Incidents Dictionary

List of Incidents

Category | Type ID | Name | Trigger
--- | --- | --- | ---
Learned Behavior | INCIDENT:NEW-NODE | New Node | A new unseen node starts to send packets in the network.
Learned Behavior | INCIDENT:NEW-COMMUNICATIONS | New Communications | A node starts to communicate with a new protocol.
Learned Behavior | INCIDENT:VARIABLES-FLOW-ANOMALY | Variables flow anomaly | A timing change on a variable which used to be updated or read at a regular interval.
Learned Behavior | INCIDENT:VARIABLES-FLOW-ANOMALY:MASTER | Variables flow anomaly from master | A master which used to update or read a variable at a regular interval changed its timing.
Learned Behavior | INCIDENT:VARIABLES-FLOW-ANOMALY:SLAVE | Variables flow anomaly from slave | A slave which used to update or read a variable at a regular interval changed its timing.
Learned Behavior | INCIDENT:VARIABLES-NEW-VALUES | New values on slave | New variable values or behaviors have been detected in a SCADA slave.
Learned Behavior | INCIDENT:VARIABLES-NEW-VARS | New variables on slave | New variables have been detected in the SCADA system.
Learned Behavior | INCIDENT:VARIABLES-NEW-VARS:MASTER | New variables requested from master | A new variable has been detected in a SCADA master.
Learned Behavior | INCIDENT:VARIABLES-NEW-VARS:SLAVE | New variables arrived from slave | A new variable has been detected in a SCADA slave.
Learned Behavior | INCIDENT:VARIABLES-SCAN | Suspect variables scan | A node in the network started to probe for non-existing variables.
Learned Behavior | INCIDENT:PORT-SCAN | Port scan | A node started a port scan.
Protocol Validation | INCIDENT:ANOMALOUS-PACKETS | Anomalous packets | Malformed packets are detected during the deep inspection.

Packet rules

Introduction
Packet rules are a tool provided by the Nozomi Networks Solution to enrich and expand the checks that are already performed on the network traffic. With packet rules the user can add a check at any moment and receive an alert of type SIGN:PACKET-RULE when a match is found. Packet rules can be explored and edited in the section OT ThreatFeed on page 78.
In the next section there is an explanation of the language used to write new packet rules.

Format
<action> <transport> <src_addr> <src_port(s)> -> <dst_addr> <dst_port(s)>
(<options>)

Basic options

action: The action to execute on match; at the moment only alert is supported.
transport: The transport protocol to match; can be tcp, udp or ip.
src_addr: The set of source IP addresses to match (not supported at the moment, the value will be ignored).
src_port(s): The source ports to match. The format can be any (to match everything), a single number, a set (e.g. [80,8080]), a range (e.g. 400:500), a range open on the left bound (e.g. :500), or a range open on the right bound (e.g. 400:). A set can contain a combination of comma-separated single ports and ranges (e.g. [:5,9,10,12:]).
dst_addr: The set of destination IP addresses to match (not supported at the moment, the value will be ignored).
dst_port(s): The destination ports to match; the format is the same as for src_port(s).
options: The options alter the behaviour of the packet rule and attach some information to it. The current set of supported options is: content, byte_extract, byte_test, byte_jump, isdataat, pcre, msg and reference. The options are a list of semicolon-separated key-value pairs (e.g. content: <value1>; pcre: <value2>). They are explained in detail in the next section.

msg option
Define the message that will be present in the alert
Example usage: msg:"a sample description"

reference option
Define the CVE associated with the packet rule.
Example usage: reference:cve,2017-0144;

content option
The content option specifies some data to be found in the payload. The option can contain printable
chars, bytes in hexadecimal format delimited by pipes or a combination of them.

Examples:
- content: "SMB" will search for the string SMB in the payload,
- content: "|FF FF FF|" will search for 3 bytes FF in the payload,
- content: "SMB|FF FF FF|" will search for the string and 3 bytes FF in the payload.
The content option can have several modifiers which influence the behaviour:
- depth: specifies how far into the packet the content should be searched
- offset: specifies where to start searching in the packet
- distance: specifies where to start searching in the packet relatively to the last option match
- within: to be used with distance, specifies how many bytes are between pattern matches
Examples:
Given the rule alert tcp any any -> any any (content:"x"; content:"y"; distance: 2; within: 2;), the packet {'x', 0x00, 0x00, 0x00, 'y'} will match; the packet {'x', 0x00, 0x00, 0x00, 0x00, 'y'} will not, because the distance and within constraints are not respected.

byte_extract option
The byte_extract option reads some bytes from the packet and saves them in a variable.
The syntax is: byte_extract:<bytes_to_extract>, <offset>, <name> [, relative][,
big|little]
For example: byte_extract:2,26,TotalDataCount,relative,little will read two bytes from
the packet at the offset 26 and put them in a variable called TotalDataCount, the offset is relative to the
last matching option and the data encoding is little endian.

byte_test option
Test a byte against a value or a variable.
The syntax is: byte_test:<bytes to convert>, <operator>, <value>, <offset> [,
relative][, big|little] where <operator> can be = or >.
For example: byte_test: 2, =, var, 4, relative; will read two bytes at offset 4 (relative to
the last matching option) and test if the value is equal to the variable called var.
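Since byte_test can compare a field against a previously saved variable, it is often paired with byte_extract. A combined option fragment might look like the following sketch (the offsets and the variable name are illustrative, composed from the examples above rather than taken from a real rule):

```
content:"SMB"; byte_extract:2,26,TotalDataCount,relative,little;
byte_test:2,=,TotalDataCount,4,relative;
```

Here byte_extract saves two little-endian bytes into TotalDataCount, and byte_test then checks that the two bytes at a later offset hold the same value.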

byte_jump option
Read the given number of bytes at the given offset and move the offset by their numeric
representation.
The syntax is: byte_jump:<bytes to convert>,<offset>[,relative][,little][,align]
For example: byte_jump:2,1,little; will read two bytes at offset 1, interpret them as little endian and move the offset.

isdataat option
Verify that the payload has data at the given position.
The syntax is: isdataat:<offset>[,relative]
For example: isdataat:2,relative; verifies that there is data at offset 2 relative to the previous match.

pcre option
The pcre option specifies a regex to be found in the payload.
The syntax is: pcre:"/<regex>/[ismxAEGR]"
Pcre modifiers:

- i: case insensitive
- s: include newline in dot metacharacter
- m: ^ and $ match immediately following or immediately before any newline
- x: ignore whitespace in the pattern, except when escaped or in characters class
- A: match only at the start
- E: $ will match only at the end of the string ignoring newlines
- G: invert the greediness of the quantifiers
- R: match is relative to the last matching option
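Putting several of these options together, a complete packet rule might look like the following sketch (the ports, content bytes, message and CVE reference are illustrative, not taken from the rule set provided by Nozomi Networks):

```
alert tcp any any -> any [139,445] (content:"|FF|SMB"; offset:4; depth:5; pcre:"/\x00{4}/R"; msg:"suspicious SMB traffic"; reference:cve,2017-0144;)
```

This rule would raise a SIGN:PACKET-RULE alert whenever a TCP packet towards port 139 or 445 contains the 0xFF byte followed by the string SMB starting at offset 4, with four zero bytes found relative to that match.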

Hybrid Threat Detection


The Nozomi Networks Solution can leverage four types of threat detection.
The first one is the anomaly-based analysis, where Guardian learns the behaviour of the observed network and alerts the user when a significant deviation is detected in the system. This analysis is generic and can be applied to every system.
The second analysis is done by Yara rules. Guardian is able to extract files transferred by protocols such as HTTP or SMB and trigger on them an inspection by the Yara engine; when a Yara rule matches, an alert is raised. The typical use of Yara rules is to detect the transfer of malware. A set of Yara rules is provided by Nozomi Networks and can also be expanded by the user.
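For reference, a minimal Yara rule has the following shape; the rule name and the string below are purely illustrative and not part of the rule set provided by Nozomi Networks:

```
rule Example_Suspicious_Marker
{
    strings:
        $marker = "EICAR-STANDARD-ANTIVIRUS-TEST-FILE"
    condition:
        $marker
}
```

When the Yara engine finds the string in an extracted file, the rule matches and an alert is raised.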
The third analysis is done by packet rules. They enable the user to define a criterion to match a malicious packet and raise an alert. A set of packet rules is provided by Nozomi Networks and can also be expanded by the user.
The fourth analysis is done by other Indicators of Compromise (IoC) loaded via STIX. They provide several hints like malicious domains, URLs, IPs, etc.
Guardian can correlate the output obtained with these four approaches to provide a powerful and comprehensive threat detection strategy.
Chapter 7
Vulnerability Assessment

Topics:
• Basics
• Passive detection
• Configuration

In this section we will cover the Vulnerability Assessment module. A Vulnerability Assessment is the process of identifying, quantifying, and ranking the vulnerabilities in a system. The Nozomi Networks Solution provides the ability to find vulnerable system applications, operating systems or hardware components.
| Vulnerability Assessment | 134

Basics
To manage vulnerability assessment the Nozomi Networks Solution uses NVD (National Vulnerability Database) format files; the vulnerability files match a CPE (Common Platform Enumeration) with a CVE (Common Vulnerabilities and Exposures):
• CPE is a structured naming scheme for information technology systems, software, and packages.
Based upon the generic syntax for Uniform Resource Identifiers (URI), CPE includes a formal name
format.
• Common Vulnerabilities and Exposures (CVE) is a dictionary of common names for publicly known
cybersecurity vulnerabilities. CVE's common identifiers make it easier to share data across separate
network security databases and tools. With CVE Identifiers, you may quickly and accurately access
fix information in one or more separate CVE-compatible databases to remediate the problem.
• The Common Weakness Enumeration Specification (CWE) provides a common language of
discourse for discussing, finding and dealing with the causes of software security vulnerabilities as
they are found in code, design, or system architecture. Each individual CWE represents a single
vulnerability type.

Figure 131: Vulnerability detail



Passive detection
The Nozomi Networks Solution offers continuous vulnerability detection, since it detects vulnerabilities
within a network by only passively listening to network traffic. This technique allows for a
comprehensive state of risk without impacting in any way the production equipment.
We will consider a passive vulnerability as any vulnerability that may be detected simply through
analysis of network traffic.
The passive vulnerability detection is a valuable component because an active scan can affect the
timing of the device or its sensitive processes.
Passive monitoring is not intrusive on network performance or operation. It is real time and can be very
useful to trace certain network security problems and to verify suspected activity.
Configuration
Vulnerabilities-related information can be provided to the Nozomi Networks Solution as follows:
• via the OT ThreatFeed service (see OT ThreatFeed on page 78 for more information)
• or by using our vulnerabilities-only database, if OT ThreatFeed has not been subscribed.
To use the vulnerability-only database (that can be downloaded from Nozomi Networks at https://nozomi-contents.s3.amazonaws.com/vulns/vulnassdb.tar.gz), use a tool like scp or WinSCP to upload it to the /data/contents/vulnass folder:

scp vulnassdb.tar.gz admin@<appliance_ip>:/data/contents/vulnass

Execute this command on the appliance from within the /data/contents/vulnass directory:

tar xzf vulnassdb.tar.gz

Now reload the database with the command:

service n2osva restart

Additional vulnerabilities can be added to the system. They must be in the NVD (National Vulnerability Database) format and be placed in the /data/contents/vulnass folder. However, Nozomi Networks gives full support only for the files it distributes.
Chapter 8
Smart Polling

Topics:
• Strategies
• Configurations
• Extracted information

This section gives an overview of Smart Polling, the feature that allows Guardian to contact nodes in order to gather new information or to improve the already existing one.
Smart Polling is built around the concept of strategy, a runnable component that is able to communicate in a certain specific way with target nodes (e.g. the SNMPv3 strategy communicates by means of the SNMPv3 protocol).
A strategy can be run only with a configuration which specifies, among other things, the nodes to contact and how often it must be run.
When run successfully, strategies extract information that can be observed in two ways: by going into the Smart Polling Display page, which provides a detailed summary of the Smart Polling activity, or by directly looking at the targeted nodes in the rest of the product (e.g. Network View on page 51).
Note: to enable Smart Polling it is required to install and upgrade using the advanced bundle, e.g. VERSION-advanced-update.bundle; do not use VERSION-standard-update.bundle.
| Smart Polling | 138

Strategies
The currently supported strategies are:

EthernetIP: To be used with devices that support the EthernetIP protocol
Modicon Modbus: To be used with Modicon Modbus devices
SEL: To be used with SEL devices
SNMPv1: To be used with devices that expose the SNMPv1 service
SNMPv2: To be used with devices that expose the SNMPv2 service
SNMPv3: To be used with devices that expose the SNMPv3 service
SSH: To be used with devices that expose the SSH service
WinRM: To be used with Windows devices that expose the WinRM service
WMI: To be used with Windows devices that expose the WMI service
CB Defense (External Service): To be used with Carbon Black services
DNS reverse lookup (External Service): This strategy extracts information about nodes by using the DNS protocol
Aruba ClearPass (External Service): This strategy sends asset information to, and extracts asset information from, ClearPass through HTTP REST APIs
Cisco ISE (External Service): This strategy extracts asset information from Cisco ISE using the pxGrid HTTP API

Configurations
Configurations consist of parameters and they influence when and how strategies are run. They can be
created, modified and controlled via the Administration > Smart Polling page.

Figure 132: Configurations page

A configuration is tied to a single strategy and it specifies at least the following information:
• the query that determines the nodes to contact (see Queries on page 143 for more information).
For example, if we want to poll a whole subnet we can use the following query:

nodes | where ip in_subnet? 192.168.1.0/24

If we want to be more precise and poll just one single node we can use:

nodes | where ip == 192.168.1.3


• the run interval, that is, how often the run must be performed, expressed in seconds
At any point in time, a configuration can be either enabled or disabled. When enabled, it is continually run according to the specified run interval.

Additionally, there can be other parameters directly related to the chosen strategy. As an example,
consider the configuration for the WinRM strategy in the following image. It also has three strategy-
specific parameters, namely: username, password and a flag to control whether SSL will be used
during the communication.

Figure 133: New WinRM configuration

Configuration actions
Once a configuration has been created, there are some actions that can be performed on it.

Figure 134: Actions that can be performed on an existing configuration

Enable/Disable: Enable and, respectively, disable the scheduled execution of the configuration
Show log: Show the last log messages with live-update

Figure 135: Example of last log messages

Download log: Download the whole log
Edit configuration: Update the configuration parameters
Delete configuration: Delete the configuration

Connection check
When creating or modifying a configuration, it can be useful to have a way to quickly check whether the provided parameters work properly. The connection check does exactly that by executing the initial steps of the given configuration. If everything goes well, the executed steps will all be marked as successful and the first three pieces of extracted information will be shown below the steps, as in the following image.

Figure 136: Example of successful connection check

Clearpass configuration
The integration permits sending asset information to the ClearPass service. To configure ClearPass you need to add credentials (username and password) and also the bearer token.

Figure 137: Clearpass configurations

Extracted information
The information that strategies extract during their activity is directly integrated with the information that was already attached to the targeted nodes. This means that it can be observed in other parts of the Nozomi Networks Solution such as Asset View on page 49, Network View on page 51 or Vulnerabilities on page 76.
For example, the following image shows an asset whose product name has been retrieved with Smart Polling.

Figure 138: Source information tooltip of a product name

Integrating the new information in this way is very useful, but it does not clearly show what was collected overall and, more importantly, how the information evolved over time. All of this can be found in the Smart Polling Display page, which is accessible from the navigation menu via the Smart Polling item.

Figure 139: Smart Polling Display page

The page is divided into three columns, each one representing an increasing level of detail with respect to the currently selected row.
The first column provides a list of all the nodes that have been contacted by at least one strategy, along with an excerpt of the last extracted information. It is sorted from the most recently contacted node to the least recently contacted node, and it can be easily filtered by address or node name by means of the input field positioned at its top.
The second column refers to the node selected in the first column and lists the last inserted values for each kind of extracted information.
Finally, the third column shows the last twenty-five inserted values for the information currently selected in the second column. For numeric values, this last column is enriched with a graph that helps in understanding how values changed over time.

Querying extracted information


Another interesting way to explore what Smart Polling collected is to use the query
mechanism with the node_points data source. For example, we can look at how many different
product names a node has had with the following query:

node_points | where node_id == 192.168.1.3 | where human_name == product_name | select value | uniq | count
Chapter 9: Queries

Topics:
• Overview
• Reference
• Examples

This chapter lists all the data sources, commands and functions which can be used in N2QL
(Nozomi Networks Query Language).

Overview
Each query must start by calling a data source, for example:

nodes | sort received.bytes desc | head

This will show, in a table, the first 10 nodes that received the most bytes.
By adding the pie command at the end it is possible to display the result as a pie chart where each
slice has the node ip as label and the received.bytes field as data:

nodes | sort received.bytes desc | head | pie ip received.bytes

Sometimes query commands are not enough to achieve the desired result. For this reason, the
query syntax supports functions. Functions allow you to apply calculations to fields and to use the
result as a new temporary field.
For example, the query:

nodes | sort sum(sent.bytes,received.bytes) desc | column ip sum(sent.bytes,received.bytes)

uses the sum function to sort on the aggregated parameters and to produce a chart with the columns
representing the sum of the sent and received bytes.

Reference

Data sources
These are the available data sources with which you can start a query:

help Show this list of data sources


alerts All the alerts raised
assertions All the assertions saved by the users
assets All the assets identified in the system
captured_urls All the URLs captured from network protocols
function_codes All the function codes
links The links in the system, each link has a one-to-one association with a
protocol
link_events The link events saved for each link, for instance channel up/down
events, protocol-specific parameters, etc.
nodes The nodes in the system
node_cpes All the CPEs (hardware, operating system and software versions)
detected on nodes
node_cpe_changes CPEs (hardware, operating system and software versions) changes
collected over time
node_cves All vulnerabilities detected on node's CPEs
sessions All currently live network sessions
variables The SCADA variables of the slaves
variable_history The history of the variable values
variable_history_month The history of the variable values within the month specified
zones The zone nodes
zone_links The zone links

Commands
Here is the complete list of commands:

Syntax select <field1> <field2> ... <fieldN>


Parameters • the list of field(s) to output

Description The select command takes all the input items and outputs them with only the
selected fields

Syntax exclude <field1> <field2> ... <fieldN>


Parameters • the list of field(s) to remove from the output

Description The exclude command takes all the input items and outputs them without
the specified field(s)

Syntax where <field> <==|!=|<|>|<=|>=|include?|start_with?|end_with?|in_subnet?> <value>
Parameters • field: the name of the field to which the operator will be applied
• operator
• value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax); the query engine will understand the
semantics

Description The where command will send to the output only the items which fulfill the
specified criterion; multiple clauses can be concatenated using the boolean
OR operator
Example • nodes | where roles include? master OR zone == office
• nodes | where ip in_subnet? 192.168.1.0/24

Syntax sort <field> [asc|desc]


Parameters • field: the field used for sorting
• asc|desc: the sorting direction

Description The sort command will sort all the items according to the field and the
direction specified; it automatically understands whether the field is a number
or a string

Syntax group_by <field> [ [avg|sum] [field2] ]


Parameters • field: the field used for grouping
• avg|sum: if specified, the relative operation will be applied on field2

Description The group_by command will output a grouping of the items using the field
value. By default the output will be the count of the occurrences of distinct
values. If an operator and a field2 are specified, the output will be the
average or the sum of the field2 values
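To make the grouping semantics concrete, here is a minimal Python sketch of what group_by does conceptually. This is an illustrative model only, not N2OS code; the function name and the op/field2 parameters simply mirror the syntax above.

```python
from collections import defaultdict

def group_by(items, field, op=None, field2=None):
    # Group dict records by `field`; count occurrences by default,
    # or apply avg/sum over `field2` when requested.
    groups = defaultdict(list)
    for item in items:
        groups[item[field]].append(item)
    result = {}
    for key, members in groups.items():
        if op is None:
            result[key] = len(members)   # default: count of distinct-value occurrences
        elif op == "sum":
            result[key] = sum(m[field2] for m in members)
        elif op == "avg":
            result[key] = sum(m[field2] for m in members) / len(members)
    return result

nodes = [
    {"mac_vendor": "VendorA", "sent": 10},
    {"mac_vendor": "VendorA", "sent": 30},
    {"mac_vendor": "VendorB", "sent": 5},
]
print(group_by(nodes, "mac_vendor"))                  # {'VendorA': 2, 'VendorB': 1}
print(group_by(nodes, "mac_vendor", "avg", "sent"))   # {'VendorA': 20.0, 'VendorB': 5.0}
```

This corresponds to queries such as `nodes | group_by mac_vendor` and `nodes | group_by mac_vendor avg sent.bytes`.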

Syntax head [count]


Parameters • count: the number of items to output

Description The head command will take the first count items; if count is not specified,
the default is 10

Syntax uniq
Parameters
Description The uniq command will remove from the output the duplicated items

Syntax expand <field>


Parameters • field: the field containing the list of values to be expanded

Description The expand command will take the list of values contained in field and, for
each of them, duplicate the original item, substituting the original field
value with the current value of the iteration
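The duplication performed by expand can be sketched in Python as follows (an illustrative model only, not the N2OS implementation):

```python
def expand(items, field):
    # For each item, emit one copy per value in the list stored at `field`,
    # replacing the list with the current value of the iteration.
    out = []
    for item in items:
        for value in item[field]:
            copy = dict(item)
            copy[field] = value
            out.append(copy)
    return out

nodes = [{"ip": "10.0.0.1", "roles": ["master", "web_server"]}]
print(expand(nodes, "roles"))
# [{'ip': '10.0.0.1', 'roles': 'master'}, {'ip': '10.0.0.1', 'roles': 'web_server'}]
```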

Syntax sub <field>


Parameters • field: the field containing the list of objects

Description The sub command will output the items contained in field

Syntax count
Parameters
Description The count command outputs the number of items

Syntax pie <label_field> <value_field>


Parameters • label_field: the field used for each slice label
• value_field: the field used for the value of the slice, must be a numeric
field

Description The pie command will output a pie chart according to the specified
parameters

Syntax column <label_field> <value_field ...>


Parameters • label_field: the field used for each column label
• value_field: one or more fields used for the values of the columns

Description The column command will output a histogram; for each label, a group of
columns is displayed with the values from the specified value_field(s)

Syntax history <count_field> <time_field>


Parameters • count_field: the field used to draw the Y value
• time_field: the field used to draw the X points of the time series

Description The history command will draw a chart representing a historic series of
values

Syntax distance <id_field> <distance_field>


Parameters • id_field: the field used to identify the data
• distance_field: the field on which the distances are calculated

Description The distance command will calculate a series of distances from the original
series. Each distance value is calculated as the difference between a value
and its subsequent occurrence
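The "difference between a value and its subsequent occurrence" can be sketched in Python like this (an illustrative model, not N2OS code; field names are hypothetical):

```python
def distance(items, id_field, distance_field):
    # Each output value is the difference between a value and the one
    # that precedes it in the (already sorted) series.
    out = []
    for prev, curr in zip(items, items[1:]):
        out.append({id_field: curr[id_field],
                    distance_field: curr[distance_field] - prev[distance_field]})
    return out

samples = [{"id": 1, "value": 100}, {"id": 2, "value": 130}, {"id": 3, "value": 125}]
print(distance(samples, "id", "value"))
# [{'id': 2, 'value': 30}, {'id': 3, 'value': -5}]
```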

Syntax bucket <field> <range>


Parameters • field: the field on which the buckets are calculated
• range: the range of tolerance in which values are grouped

Description The bucket command will group data in different buckets; different records
will be put in the same bucket when the values fall in the same multiple of
<range>
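The "same multiple of range" grouping can be sketched in Python as follows (an illustrative model only; integer division by the range identifies each bucket):

```python
from collections import defaultdict

def bucket(items, field, range_):
    # Records fall in the same bucket when their `field` values share
    # the same multiple of `range_`.
    buckets = defaultdict(list)
    for item in items:
        buckets[(item[field] // range_) * range_].append(item)
    return dict(buckets)

events = [{"time": 5}, {"time": 12}, {"time": 17}, {"time": 31}]
print(sorted(bucket(events, "time", 10)))   # bucket keys: [0, 10, 30]
```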

Syntax join <other_source> <field> <other_source_field>


Parameters • other_source: the name of the other data source
• field: the field of the original source used to match the object to join
• other_source_field: the field of the other data source used to match the
object to join

Description The join command will take two records and will join them in one record
when <field> and <other_source_field> have the same value
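The matching logic can be sketched in Python (an illustrative model only; the joined-field naming here is constructed by hand to mirror the joined_node_from_id name that appears in the join example later in this chapter):

```python
def join(source, other, field, other_field, other_name):
    # Attach to each source record the record of `other` whose
    # `other_field` value matches the source's `field` value.
    index = {rec[other_field]: rec for rec in other}
    out = []
    for rec in source:
        if rec[field] in index:
            merged = dict(rec)
            merged[f"joined_{other_name}_{field}_{other_field}"] = index[rec[field]]
            out.append(merged)
    return out

links = [{"from": "10.0.0.1", "to": "10.0.0.2"}]
nodes = [{"id": "10.0.0.1", "label": "PLC"}, {"id": "10.0.0.2", "label": "HMI"}]
result = join(links, nodes, "from", "id", "node")
print(result[0]["joined_node_from_id"]["label"])   # PLC
```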

Syntax gauge <field> [min] [max]


Parameters • field: the value to draw
• min: the minimum value to put on the gauge scale
• max: the maximum value to put on the gauge scale

Description The gauge command will take a value and represent it in a graphical way

Syntax value <field>


Parameters • field: the value to draw

Description The value command will take a value and represent it in a textual way

Syntax reduce <field> [sum|avg]


Parameters • field: the field on which the reduction will be performed
• sum or avg: the reduce operation to perform; it defaults to sum if not specified

Description The reduce command will take a series of values and calculate a single
value
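The reduction can be sketched in Python in a couple of lines (an illustrative model only, not N2OS code):

```python
def reduce_cmd(values, op="sum"):
    # Collapse a series of values into a single value; sum is the default.
    return sum(values) if op == "sum" else sum(values) / len(values)

print(reduce_cmd([10, 20, 30]))          # 60
print(reduce_cmd([10, 20, 30], "avg"))   # 20.0
```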

Nodes-specific commands reference

Syntax where_node <field> <==|!=|<|>|<=|>=|include?|exclude?|start_with?|end_with?> <value>
Parameters • field: the name of the field to which the operator will be applied
• operator
• value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax); the query engine will understand the
semantics

Description The where_node command will send to the output only the items which fulfill
the specified criterion; multiple clauses can be concatenated using the boolean
OR operator. Compared to the generic where command, the adjacent nodes
are also included in the output.

Syntax where_link <field> <==|!=|<|>|<=|>=|include?|exclude?|start_with?|end_with?> <value>
Parameters • field: the name of the field to which the operator will be applied
• operator
• value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax); the query engine will understand the
semantics

Description The where_link command will send to the output only the nodes which are
connected by a link fulfilling the specified criterion. Many clauses can be
concatenated using the boolean OR operator.

Syntax graph [node_label:<node_field>] [node_perspective:<perspective_name>] [link_perspective:<perspective_name>]
Parameters • node_label: add a label to the node, the label will be the content of the
specified node field
• node_perspective: apply the specified node perspective to the resulting
graph. Valid node perspective values are:
• roles
• zones
• transferred_bytes
• not_learned
• public_nodes
• reputation
• appliance_host
• link_perspective: apply the specified link perspective to the resulting
graph. Valid link perspectives are:
• transferred_bytes
• tcp_firewalled
• tcp_handshaked_connections
• tcp_connection_attempts
• tcp_retransmitted_bytes
• throughput
• interzones
• not_learned

Description The graph command renders a network graph by taking some nodes as
input.

Link Events-specific commands reference

Syntax availability
Parameters
Description The availability command computes the percentage of time a link is UP. The
computation is based on the link events UP and DOWN that are seen for the
link.
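The computation based on UP and DOWN events can be sketched in Python (an illustrative model only, not the N2OS implementation; it assumes a time-sorted list of (time, state) events):

```python
def availability(events):
    # Percentage of time a link is UP, computed from a time-sorted list of
    # (time, state) link events, where state is "UP" or "DOWN".
    if len(events) < 2:
        return None
    total = events[-1][0] - events[0][0]
    up = 0
    for (t0, state), (t1, _) in zip(events, events[1:]):
        if state == "UP":
            up += t1 - t0   # the link stays UP until the next event
    return 100.0 * up / total

events = [(0, "UP"), (60, "DOWN"), (90, "UP"), (120, "DOWN")]
print(availability(events))   # 75.0
```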

Syntax availability_history <range>


Parameters • range: the temporal window in milliseconds to use to group the link
events

Description The availability_history command computes the percentage of time a link is
UP by grouping the link events into many buckets. Each bucket will include
the events of the temporal window specified by the range parameter.

Syntax availability_history_month <months_back> <range>


Parameters • months_back: the number of months to go back, relative to the current
month, to group the link events
• range: the temporal window in seconds to use to group the link events

Description The availability_history_month command computes the percentage of time a
link is UP by grouping the link events into many buckets. Each bucket will
include the events of the temporal window specified by the range and
months_back parameters.

Functions
Here is the complete list of functions:

Syntax sum(<field>,...)
Parameters • a list of fields to sum

Description The sum function returns the sum of the fields passed as arguments
Warning Only available for nodes, links, variables and function_codes

Syntax color(<field>)
Parameters • field: the field on which to calculate the color

Description The color function generates a color in the RGB hex format from a value
Warning Only available for nodes, links, variables and function_codes

Syntax date(<time>)
Parameters • time defined as unix epoch

Description The date function returns a date from a raw time

Syntax dist(<field1>,<field2>)
Parameters • the two fields to subtract

Description The dist function returns the distance between field1 and field2

Syntax abs(<field>)
Parameters • the field on which to calculate the absolute value

Description The abs function returns the absolute value of the field

Syntax div(<field1>,<field2>)
Parameters • field1 and field2: the two fields to divide

Description The div function will calculate the division field1/field2

Syntax coalesce(<field1>,<field2>,...)
Parameters • a list of fields or string literals in the format "<chars>"

Description The coalesce function will output the first value that is not null

Syntax concat(<field1>,<field2>,...)
Parameters • a list of fields or string literals in the format "<chars>"

Description The concat function will output the concatenation of the input fields or values

Syntax round(<field>,[precision])
Parameters • field: the numeric field to round
• precision: the number of decimal places

Description The round function takes a number and outputs the rounded value

Syntax split(<field>,<splitter>,<index>)
Parameters • field: the field to split
• splitter: the character used to separate the string and produce the tokens
• index: the 0 based index of the token to output

Description The split function takes a string, separates it and outputs the token at the
<index> position
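The split semantics map directly onto standard string splitting; here is an illustrative Python sketch (not N2OS code):

```python
def split_fn(value, splitter, index):
    # Split `value` on `splitter` and return the 0-based token at `index`.
    return value.split(splitter)[index]

print(split_fn("192.168.1.10", ".", 3))   # 10
```

This mirrors, for example, extracting the last octet of an IP address with split(ip,".",3).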

Syntax is_recent(<time_field>)
Parameters • time_field: the field representing a time

Description The is_recent function takes a time field and returns true if the time is no
more than 30 minutes in the past

Syntax seconds_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The seconds_ago function returns the number of seconds elapsed between
the current time and the time field value

Syntax minutes_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The minutes_ago function returns the number of minutes elapsed between
the current time and the time field value

Syntax hours_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The hours_ago function returns the number of hours elapsed between the
current time and the time field value

Syntax days_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The days_ago function returns the number of days elapsed between the
current time and the time field value

Syntax bitwise_and(<numeric_field>,<mask>)
Parameters • numeric_field: the numeric field on which to apply the mask
• mask: a number that will be interpreted as a bit mask

Description The bitwise_and function calculates the bitwise AND (&) between the
numeric_field and the mask entered by the user
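The masking operation is the standard bitwise AND; here is an illustrative Python sketch (not N2OS code, and the register-value example is hypothetical):

```python
def bitwise_and(value, mask):
    # Apply the mask as a bitwise AND, as the function description states.
    return value & mask

# e.g. keep only the low byte of a 16-bit register value
print(bitwise_and(0x1234, 0x00FF))   # 52 (0x34)
```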

Examples

Creating a pie chart


In this example we will create a pie chart to understand the MAC vendor distribution in our network. We
choose nodes as our query source and we start to group the nodes by mac_vendor:

nodes | group_by mac_vendor

We can see the list of the vendors in our network associated with the occurrence count. To better
understand our data we can use the sort command, so the query becomes:

nodes | group_by mac_vendor | sort count desc

In the last step we use the pie command to draw the chart, with mac_vendor as the label and
count as the value.

nodes | group_by mac_vendor | sort count desc | pie mac_vendor count

Creating a column chart


In this example we will create a column chart with the top nodes by traffic. We start by getting the
nodes and selecting the id, sent.bytes, received.bytes and the sum of sent.bytes and received.bytes.
To calculate the sum we use the sum function, the query is:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)

If we execute the previous query we notice that the sum field has a very long name; we can rename it
to make the next commands easier to write:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum

To obtain the top nodes by traffic we sort and take the first 10:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum | sort sum desc | head 10

Finally we use the column command to display the data in a graphical way:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum | sort sum desc | head 10 | column id sum sent_bytes received_bytes

Note: you can access an inner field of a complex type with the dot syntax; in the example, the dot
syntax is used on the fields sent and received to access their bytes sub-field.
Note: after accessing a field with the dot syntax, it gains a new name to avoid ambiguity: the dot is
replaced by an underscore. In the example, sent.bytes becomes sent_bytes.

Using where with multiple conditions in OR


With this query we want to get all the nodes with a specific role; in particular, we want all the nodes
which are web servers or DNS servers.
With the where command it is possible to achieve this by writing multiple conditions separated by OR.
Note: the roles field contains a list of values, thus we use the include? operator to check whether a
value is contained in the list.

nodes | where roles include? web_server OR roles include? dns_server | select id roles

Using bucket and history


In this example we are going to calculate the distribution of link events towards an IP address. We start
by filtering all the link_events with id_dst equal to 192.168.1.11.
After this we sort by time; this is a very important step because bucket and history depend on how
the data are sorted.
At this point we group the data by time with bucket. The final step is to draw a chart using the
history command; we pass count as the value for the Y axis and time for the X axis.
The history command is particularly suited for displaying a large amount of data; in the image below we
can see that there are many hours of data to analyze.

link_events | where id_dst == 192.168.1.11 | sort time asc | bucket time 36000 | history count time

Using join
In this example we will join two data sources to obtain a new data source with more information. In
particular we will list the links with the labels for the source and destination nodes.

We start by asking for the links and joining them with the nodes by matching the from field of the links
with the id field of the nodes:

links | join nodes from id

After executing the query above we will get all the links fields plus a new field called
joined_node_from_id; it contains the node which satisfies the link.from == node.id
condition. We can access the sub-fields of joined_node_from_id by using the dot syntax.
Because we also want the labels for the to field of the links, we add another join, and we
exclude the empty labels of the node referred to by to in order to get more interesting data:

links | join nodes from id | join nodes to id | where joined_node_to_id.label != ""

We obtain a huge amount of data which is difficult to read, so we use a select to keep only the
relevant information:

links | join nodes from id | join nodes to id | where joined_node_to_id.label != "" | select from joined_node_from_id.label to joined_node_to_id.label protocol

Computing availability history


In this example we will compute the availability history for a link. In order to achieve a reliable
availability measure, it is recommended to enable the "Track availability" feature on the desired link.
We start from the link_events data source, filtered by source and destination IP in order to precisely
identify the target link. Consider also filtering by protocol to achieve a higher degree of precision.

link_events | where id_src == 10.254.3.9 | where id_dst == 172.31.50.2

The next step is to sort the events by ascending time of creation. Without this step the
availability_history might produce meaningless results, such as negative values. Finally, we compute
the availability_history with a bucket of 1 minute (60000 milliseconds). The complete query is as
follows.

link_events | where id_src == 10.254.3.9 | where id_dst == 172.31.50.2 | sort time asc | availability_history 60000
Chapter 10: Maintenance

Topics:
• System Overview
• Data Backup and Restore
• Reboot and shutdown
• Software Update and Rollback
• Data Factory Reset
• Support

In this chapter you will get the complementary information to keep the Nozomi Networks Solution
up and running with ordinary and extraordinary maintenance tasks.

System Overview
This section gives a brief overview of the main components of the Nozomi Networks Solution OS
(N2OS), so as to provide further background to administer and maintain a production system.

Partitions and Filesystem layout


In this section we will take a look at the N2OS filesystem, services and commands.
The first thing to know about the N2OS structure is the presence of four different disk partitions:
1. N2OS 1st partition, where a copy of the OS is kept and run from. Two different partitions are used
by the install and update process in order to deliver a fast switch between the running release and
new versions.
2. N2OS 2nd partition, which works in tandem with the first one to provide reliable update paths.
3. OS Configuration partition, located at /cfg, where low-level OS configuration files are kept (for
instance, network configurations, shell admin users, SSH keys, etc). This partition is copied to /etc
at the start of the bootstrap process.
4. Data partition, located at /data, where all user data is kept (learned configuration, user-imported
data, traffic captures, persistent database).

Figure 140: The N2OS standard partition table

A closer look at the /data partition reveals some sub-folders, for instance:
1. cfg: where all automatically learned and user-provided configurations are kept. Two main
configuration files are stored here:
a. n2os.conf: for automatically learned configurations
b. n2os.conf.user: for additional user-provided configurations.
2. data: working directory for the embedded relational database, used for all persistent data
3. traces: where all traces are kept and rotated when necessary.
4. rrd: this directory holds the aggregated network statistics, used for example by Traffic on page 62.

Core Services
There are some system services that you need to know for proper configuration and troubleshooting:
1. n2osids, the main monitoring process. It can be controlled with

service n2osids <operation>

(<operation> can be any of start, stop, restart). Its log files are under /data/log/n2os and start
with n2os_ids*.
2. n2ostrace, the tracing daemon. It can be controlled with

service n2ostrace <operation>

Its log files start with n2os_trace* and are located under /data/log/n2os.

3. n2osva, the Asset Identification and Vulnerability Assessment daemon. It can be controlled with

service n2osva <operation>

Its log files start with n2os_va* and are located under /data/log/n2os.
4. n2ossandbox, the file sandbox daemon. It can be controlled with

service n2ossandbox <operation>

Its log files start with n2os_sandbox* and are located under /data/log/n2os.
5. nginx, the web server behind the web interface. It works together with unicorn to keep the HTTPS
service up and secured. It can be controlled with

service nginx <operation>

In order to be able to perform any operation on these services, you need to obtain the privileges using
enable-me. For instance, the following commands allow you to restart the n2osids service:

enable-me
service n2osids restart

Several other tools and daemons are running in the system to deliver N2OS functionalities.

Data Backup and Restore


In this section you will learn about the available methods to back up the system and to restore it from
a backup. Please note that a backup will contain just the data; the system software will be left
untouched.
Two different kinds of backup are available: Full Backup and Environment Backup. The former
contains all data, but it requires the system to be in a maintenance mode where its functionalities are
not available. The latter lacks historical data, extended configurations and some other information, but
it can be performed while the system is running. Moreover, it can be used to restore the most important
part of the system on another appliance for analysis, or as a delta backup when a full backup is available.

Full Backup
In this section you will learn how to back up the N2OS data of an existing installation.
1. Go to a terminal and execute this command

n2os-fullbackup

2. The backup file can now be copied through SFTP to a remote location of choice. The file to copy
is admin@<appliance_ip>:/data/<backup_hostname_date.tar.gz>. Both the command
line scp program or a user interface program like WinSCP can be used to copy the file remotely.

Full Restore
In this section you will learn how to restore the N2OS software of an existing installation from a full
backup.
1. Copy via SFTP the backup archive from the location where it was saved to the
admin@<appliance_ip>:/data/tmp/<backup_hostname_date.tar.gz> path of the
appliance, then on the appliance move it to /data. For instance, using the scp command line:

scp <backup_location_path>/<backup_hostname_date.tar.gz>
admin@<appliance_ip>:/data/tmp/<backup_hostname_date.tar.gz>

2. Go to a terminal and execute these commands

mv /data/tmp/<backup_hostname_date.tar.gz> /data/
<backup_hostname_date.tar.gz>

n2os-fullrestore -y <backup_hostname_date.tar.gz>

Now you have completely restored the previous backup. If you also need to restore the
configuration files under /etc, execute the previous command appending the --etc_restore
option. Notice that restoring /etc files may, for example, bring back previous IP addresses
or old certificates.

Environment Backup
In this section you will learn how to perform an Environment Backup of an existing installation.
1. Issue the save command from the CLI
2. Copy via SFTP the content of the /data/cfg folder to a safe place.

Environment Restore
In this section you will learn how to restore a Nozomi Networks Solution Environment to an existing
installation.
1. Copy the saved content of the cfg folder to the /data/cfg folder into the appliance.
2. From the console, issue the service n2osids restart command.

Reboot and shutdown


Reboot and shutdown commands can be performed from the web interface under Administration
> Operations

In addition, both commands can be entered in the text console or inside an SSH session.
To reboot the system issue the following command:

enable-me
shutdown -r now

To properly shutdown the system issue the following command:

enable-me
shutdown -p now

Software Update and Rollback


In this section you will learn about the available methods for updating the system to a newer
release and rolling back to the previous one.
Rolling back to the previously installed release is transparent, and all data is migrated back to the
previous format. However, rolling back to a release older than the previously installed one requires a
full backup to restore from.
Although the software update is built to be transparent to the user and to preserve all data, we suggest
always keeping at least an Environment Backup of the system in a safe place.
An interesting aspect of the Nozomi Networks Solution update file is that it applies to both the
Guardian and the CMC, and it works for all the physical and virtual appliances to make the updating
experience frictionless. Special considerations apply to the Container, where different update
commands and procedures are used.

Update: Graphical method


In this section you will learn how to update the Nozomi Networks Solution software of an existing
installation.
You need to already have the new VERSION-update.bundle file that you want to install.
A running system must be updated with a more recent N2OS release.
1. Go to Administration > System operations

2. Click on Software Update and select the VERSION-update.bundle file


Warning: the system must be at least version 18.5.9 to support the .bundle format; if your system is
running a version lower than 18.5.9, you must first update to 18.5.9 before proceeding.
The file will be uploaded.
3. Click the Proceed button
Warning: when updating version 18.5.9, the system prompts you to insert the checksum that is
distributed with the .bundle; the button is enabled only after checksum verification
The update process begins. Wait some minutes for the update to complete.

Update: Command line method


In this section you will learn how to update the Nozomi Networks Solution software of an existing
installation.
You need to already have the new update file you want to install.
A running system must be updated with a more recent N2OS release.
1. Go to a terminal and cd into the directory where the VERSION-update.bundle file is located.
Then copy the file to the appliance with:

scp VERSION-update.bundle admin@<appliance_ip>:/data/tmp

2. Start the installation of the new software with:

ssh admin@<appliance_ip>

enable-me

install_update /data/tmp/VERSION-update.bundle

The appliance will now reboot with the new software installed.

Rollback to the previous version


In this section you will learn how to roll back the software to the immediately previous version. If you
would like to roll back to a release older than the previous one, follow the instructions in the next section.
You need to have performed a release update at least once.
1. Go to the console and type the command

rollback

2. Answer y to the confirmation message and wait while the system is rebooted. All configuration and
historical data will be automatically converted to the previous version, thus no manual intervention
will be required.

Rollback to an older version


In this section you will learn how to roll back the software to a version older than the previous one.
You need to have a full backup available. If you do not have one, you cannot roll back to an older
release. Please note that this operation takes longer than rolling back to the immediately previous
version, requires a full backup and does not preserve recently changed data.
1. Take the software update file VERSION-update.bundle that you want to roll back to, and install it
as if it were a new version, as explained in Software Update and Rollback on page 161.
Warning: if you want to roll back to a version older than 18.5.9, you have to roll back to version
18.5.9 first, since support for the previous file format was removed.
2. At reboot, ignore any error and log into the console.
3. Now follow the steps for a full restore with a backup file from the very same version of the software
that has just been re-installed.

Data Factory Reset


In this section you will learn how to completely erase the N2OS data partition. IP configuration will be
kept, and the procedure is safe to execute remotely. Executing this procedure will cause the system to
lose all data!
1. Go to a terminal and execute the command:

n2os-datafactoryreset -y

2. The system will start over with a fresh data partition. Refer to Setup Phase 2 on page 17 to
complete the configuration of the system.

Sanitization
In this section you will learn how to safely sanitize the N2OS data partition, this process follows the
guidelines suggested by the NIST. Executing this procedure will cause the system to lose all data!
1. Go to a terminal and execute the command:

n2os-datasanitize -y

2. The system will start over with a fresh data partition. Refer to Setup Phase 2 on page 17 to
complete the configuration of the system.

Support
In this section you will learn how to generate the archive needed to request support from Nozomi
Networks.
Go to Administration > Support, click on the download button, and your browser will start
downloading the support packet file. Send an email to support@nozominetworks.com attaching the
file.
Chapter 11: Central Management Console

Topics:
• Overview
• Deployment
• Settings
• Connecting Appliances
• Troubleshooting
• Propagation of users and user groups
• CMC connected appliance - Date and Time
• Appliances List
• Appliances Map
• HA (High Availability)
• Alerts
• Functionalities Overview
• Updating
• Single-Sign-On through the CMC

In this section we will cover the Central Management Console product, a centralized monitoring
variant of the standalone Appliance. The main idea behind the Central Management Console is to
deliver a unified experience with the Appliance; consequently, the two products appear as similar as
possible.

Overview
The Central Management Console (CMC) has been designed to support complex deployments that
cannot be addressed with a single Appliance.
A central design principle behind the CMC is the Unified Experience, which allows users to access
information in the same manner as on the Appliance. Some functionalities have been added to allow
the simple management of hundreds of appliances, and some others relying on live traffic availability
have been removed to cope with real-world, geographic deployments of the Nozomi Networks
Solution architectures. A detailed overview of the differences is given in Functionalities Overview on
page 178.
In the Appliances page, all connected appliances can be seen and managed through a graphical
representation of the hierarchical structure of the connected Appliances. The Appliances Map is also
available, to allow a quick health check on a user-provided raster map. These functionalities are
explained in detail in Appliances List on page 171 and Appliances Map on page 173.
Once Appliances are connected, they are periodically synchronized with the CMC. In particular, the
Environment of each Appliance is merged into a global Environment and Alerts are received for a
centralized overview of the system. Of course, Alerts can also be forwarded to a SIEM directly from the
CMC, thus enabling a simpler decoupling of components in the overall architecture.
Firmware update is also simpler with a CMC. Once the new Firmware is deployed to it, all connected
Appliances are also automatically updated. In Updating on page 179 an overview of the update
process is provided for the CMC scenario.

Deployment
The first step to set up a CMC is to deploy its Virtual Machine (VM).
The CMC VM can be deployed following the steps provided in Installing on Virtual Hardware on page
12; the main difference is that the CMC version of N2OS must be used in the installation.
The other difference is during the Initial Setup phase: you have to locate and configure the
management NIC but not the sniff interfaces, because the CMC does not have to sniff live traffic.
In order for the CMC to be hosted on Amazon Web Services (AWS), contact
support@nozominetworks.com and let us know your AWS Account Id. Access to our CMC Amazon
Machine Images (AMI) will be granted upon receiving it.

CMC Virtual Machine Sizing


This table gives minimum requirements, based on the number of connected Appliances, to help size
the CMC VM. These values are purely indicative; the distribution of protocols and the hypervisor
hardware will affect the recommended settings.

Appliances   vCPU   RAM (GB)   Disk (GB)
10           2      4          100+
50           4      8          250+
100          8      16         500+
500          16     32         1000+

Settings
The Administration > Synchronization settings page allows you to customize all the CMC-related
parameters.

Sync token: The token that must be used by all the appliances willing to synchronize with the CMC.
Sync ID: The current CMC ID. It will be shown on the CMC on which we want to replicate data.
CMC context: Multicontext indicates that the data gathered from the Appliances connected to the
CMC will be collected and kept separately, whereas All-in-one means that the information will be
merged.
Appliance update policy: Determines whether the Appliances connected to the CMC will
automatically receive updates when a new version of the software is available.
Remote access to connected Appliances: Enables/disables remote access to an Appliance by
passing through the CMC.
Allow remote to replicate on this CMC: When a CMC attempts to replicate data on the current
CMC, its Sync ID is shown in the corresponding text field. This validates that the CMC that is trying to
replicate is really the one that you intended to work with.
HA (High Availability): The High Availability mode allows the CMC to replicate its own data on
another CMC. To activate it, insert the other CMC's Host and Sync Token.

Connecting Appliances
To connect an Appliance to a CMC, first open the web console of the CMC and go to Settings on
page 168. Copy the Sync Token: you will need it when configuring the Appliance.
Then open the web console of the Appliance and go to Administration > CMC connection.

In this page you can enter the parameters to connect the Appliance:

Host: The CMC host address (the protocol used will be https). If no CA-emitted certificates are used,
you can make the verification of certificates optional.
Sync token: The synchronization token needed to authenticate the connection; the pair of tokens can
be generated from the CMC.
Description: [Optional] A description of the Appliance; it will be displayed in the Appliances list.
Site: [Optional] If at least two Appliances have the same site, grouping by site can be enabled in the
Appliances map.

The Check connection button indicates whether the pairing between the CMC and the appliance is
valid. After entering at least the Host and the Sync token, save the configuration, then open the web
console of the CMC and go to the Appliances page.

The table will list all the connected Appliances. When an Appliance is connected for the first time, it
will just notify its status and receive Firmware updates but it will not be allowed to perform additional
actions. To enable a complete integration of the Appliance you will need to "allow" it (see Appliances
List on page 171 for details).

Troubleshooting
In this section a list of the most useful troubleshooting tips for the CMC is given.
1. If the Appliance is not appearing at all in the CMC:
• Ensure that any firewall(s) between the Appliance and the CMC allow traffic on TCP port 443
(HTTPS), with the Appliance as source and the CMC as destination

• Check that the tokens are correctly configured both in the Appliance and the CMC
• Check in the /data/log/n2os/n2osjobs.log file for connection errors.
2. The Appliance ID is stored in the /data/cfg/.appliance-uuid file. Please do not edit this file
after the appliance is connected to the CMC, since it is the unique identifier of the Appliance inside
the CMC. In case a forceful change of the Appliance ID is needed, you will need to remove the old
data from the CMC by removing the old Appliance ID entry.
3. If something goes wrong during the setup of an Appliance, follow the instructions at Appliances List
on page 171 to completely delete the Appliance or just to clear its data from the CMC.
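As a practical companion to tip 1, connection errors can be triaged by grepping the synchronization log. The sketch below is illustrative and self-contained: the sample log lines are invented, and on a real appliance you would point LOG at /data/log/n2os/n2osjobs.log instead.

```shell
# Illustrative triage of the synchronization log. The sample entries below
# are invented; on an appliance, set LOG=/data/log/n2os/n2osjobs.log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2019-12-01 10:00:01 INFO sync job started
2019-12-01 10:00:02 ERROR connection refused by 192.168.1.8:443
2019-12-01 10:05:01 INFO sync job started
EOF
# Show only the error lines, most recent last.
grep -i "error" "$LOG" | tail -n 20
```

Errors mentioning the CMC address typically point at network or certificate issues between the two machines.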

Propagation of users and user groups


As described in Users Management on page 21, both CMC and appliances can have their own users
and user groups. In order to simplify the management of all the possible appliances connected to
a CMC, a new synchronization feature has been added: users and user groups of the CMC can be
propagated to all the appliances.
Admin users can specify which users and user groups will be propagated to connected appliances. As
shown in the figure below, in the create/edit user group popup there is a toggle button to enable this
property, which, by default, is set to false.

The synchronization comes with the following constraints:


• users and user groups that arrive in the Guardian from the CMC cannot be modified;
• users and user groups created in the Guardian will not be synced to the CMC;
• in case of a name conflict, users and user groups in the Guardian will be overwritten with the ones
coming from the CMC.

CMC connected appliance - Date and Time


Note that when an appliance is attached to a CMC, its date and time cannot be manually set as
described in Date and time on page 95. Appliances connected to a CMC (and with no NTP configured)
will automatically get time synchronization from the parent CMC.

Appliances List
The Appliances section shows the complete list of appliances connected to the current CMC. For each
appliance, you can see some information about its status (system, throughput, alerts, license and
running N2OS version).

Actions on appliances:

Allow/Disallow an Appliance

After allowing an Appliance (an allowed Appliance is marked with a dedicated icon):


• Nodes, Links and Variables coming from the Appliance become part of the Environment of the
CMC.
• Alerts coming from the Appliance can be seen in the Alerts section.

Focus on appliance
Filters the view to show only the chosen appliance's data, such as Alerts and Environment.

Remote connect to an appliance


Connect to a remote appliance directly from the CMC. Click on this action to open a new browser tab
to the selected appliance's login page. The action is hidden if the CMC isn't configured to allow this
type of communication between Appliances and CMC; to enable it, go to Settings on page 168.

Place an appliance on the map



Click on this action if you need to place the appliance on the map (if you have not uploaded a map,
go to Appliances Map on page 173); choose the position of the selected appliance by clicking on the
map, then Save.

Lock the appliance software version


When locked, the Appliance will not automatically update its software.

Force the software update of the appliance


Even if the Appliance is locked, it will update its software to the version installed on the CMC.

Clear data of an appliance


Clear all data received from the selected appliance to restart the synchronization from an empty
state.

Delete an appliance
Clear all data received from the selected Appliance and delete it from the list. If the Appliance tries to
sync with the CMC again, it will appear as disallowed in the list.

Appliances Map
In this page you can upload the Appliances Map by clicking on Upload map and choosing a jpg file
from your computer.

You can inspect the appliances' information in the Info pane. On the map, each appliance is identified
by its own ID; the appliance marker color reflects the risk of its alerts, and next to the ID the number
of alerts raised in the last 5 minutes is shown (if greater than 0). When the number of alerts in the last
5 minutes grows, the appliance marker blinks for 1 minute.

If the site has been specified in the CMC connection page of the appliance, it is possible to enable the
"group by site" option. The appliances with the same site will be grouped to deliver a simpler view of a
complex N2OS installation.

Figure 141: Appliances map with "group by site" enabled



As you can see in the default dashboard, the Appliances Map is also available as a widget.

HA (High Availability)
This feature allows a CMC to replicate all of its data on another CMC, called the replica.
Note: In order to enable the highest level of resiliency, the two CMCs must be replicating each
other. In case a CMC stops working, the connected appliances will continue to send their data
to the replica CMC.
To connect another CMC as an HA (High Availability) replica, go to the Administration >
Synchronization settings page.

Enable the feature by clicking the ON button, then fill in the form with the Host and Sync Token of the
endpoint you want to replicate to.
If the destination endpoint doesn't provide a CA-emitted TLS certificate, remember to click
Optional so that certificates will not be verified (this option isn't recommended).
The Sync token can be found in the Administration > Synchronization settings page of the
destination endpoint.

After clicking Save, to confirm the connection between the two CMCs, go to the Administration >
Synchronization settings page of the destination endpoint, verify that the Sync ID shown is the
one of the current machine, and click the Allow button.

To verify that everything is configured correctly and working, check the Replication status in
Administration > Health. Here you can see whether the various entities are synchronized; for
example, AuditItems are elements generally created at a low frequency, so they will typically show 'In
Sync'.

Alerts
Alerts management in the centralized console is equivalent to alerts management in an appliance (for
more information go to Alerts on page 48), with the advantage of having all the alerts of your
appliances in one place.
As in an appliance, you can create a query (Queries on page 67) and therefore an assertion (Custom
Checks: Assertions on page 115) that involves all the nodes/links/etc. of your overall infrastructure.
In the centralized console you can additionally create what we call a "Global Assertion": one or more
groups of assertions that can be propagated to all the appliances. The appliances cannot edit nor
delete these assertions; only the CMC has control over them.
As mentioned before you can configure the centralized console to forward alerts to a SIEM without
having to configure each appliance (for more information on this topic, see Data Integration on page
89).

Functionalities Overview
The unified experience offered by the CMC lacks some of the features found in the appliance user
interface.

As stated above, the Nodes table in a CMC offers only the Show alerts and Navigate actions (the
same table on an appliance also has the Configure node, Show requested trace and Request a trace
actions).

Figure 142: Node actions on appliance (top) and CMC (bottom)

In the Environment Links table, only the Show alerts and Navigate actions are available (the same
table on an appliance also has the Configure link, Show requested trace, Request a trace and Show
events actions).

Figure 143: Link actions on appliance (top) and CMC (bottom)

In the Process View Variables table, the Configure variable action is not allowed, but the other actions
(Variable details, Add to favourites and Navigate) are. A detailed explanation is given in Process
Variables on page 63.

Figure 144: Variable actions on appliance (top) and CMC (bottom)

Generally speaking, configuration actions and trace request functionalities are available only in the
appliance user interface.

Updating
In this section we will cover the release update and rollback operations of a Nozomi Networks Solution
architecture, composed of a Central Management Console and one or more Appliances.
The Nozomi Networks Solution Software Update bundle is universal (except for the Container) -- it
applies to both the Guardian and the CMC, and will work for all the physical and virtual appliances to
make the update experience frictionless.
The alignment of versions is both a functionality and a default requirement of the Central Management
Console. A Guardian whose version differs from the Central Management Console's will not be
allowed to send its updates until the new Software Update is received and applied; this behavior can
be changed from the Synchronization settings, though.
Once an Appliance is connected to the Central Management Console, updates must be performed
only from there. Consequently, the Software Update section in the Web Console of the Appliance will
not allow further updates to be installed.
The update process from the Central Management Console can proceed as explained in Software
Update and Rollback on page 161. After the Central Management Console is updated, each Appliance
will receive the new Software Update.
To Rollback, first rollback the Central Management Console, and then proceed to rollback all the
appliances as explained in Software Update and Rollback on page 161.

Single-Sign-On through the CMC


CMC machines offer a SAML identity provider endpoint to their connected appliances, permitting
users to log into appliances by passing through the parent CMC.
This functionality is enabled by default on all CMCs and, at the moment, it can be used only with
appliances that have an IP address (e.g. https://192.168.1.122) as the value of the "Nozomi URL" field
in the SAML integration page (see SAML Integration on page 30 for more information).
The identity provider endpoint is exposed only if a configuration rule indicating the external URL at
which the CMC is accessible is present in the n2os.conf.user file (see Configuration on page 191
for more information). For example, assuming the CMC can be accessed at https://192.168.1.8, the
configuration rule would be the following:

cmc identity_provider_url https://192.168.1.8

Note that to make the change effective you also have to reboot the machine or restart all the services.
Once the CMC has been configured, you should be able to obtain the identity provider metadata at
the /idp/saml/metadata endpoint of the CMC. Continuing with the CMC of the previous example, you
will find the metadata file at https://192.168.1.8/idp/saml/metadata. This file is important because it has
to be uploaded on all the appliances on which you want to have the SSO login.
The last remaining step is to configure the appliances to point to the CMC when performing SSO
operations. Specifically, you must use the following data and do what is described at SAML Integration
on page 30:
• SAML role: https://nozominetworks.com/saml/group-name
• Metadata XML: the file downloaded from the CMC in the previous step
At this point everything should be set and you should be able to perform SSO via the CMC on the
configured appliances.
Note that if you have a hierarchy of CMCs in your installation, you can also setup SSO in a composable
way in order to have a SSO chain. For example, let's consider the following scenario:
• CMC1 is configured to perform SSO on ExternalIdP
• CMC2 is attached to CMC1
• the Guardian G1 is attached to CMC2
We can have a SSO chain starting at G1, passing through CMC2, CMC1 and ending at ExternalIdP by
configuring each pair of machines as described above. In particular we want to have:
• CMC1 has an identity_provider_url specified in n2os.conf.user and it is configured to perform SSO
on ExternalIdP
• CMC2 has an identity_provider_url specified in n2os.conf.user and it is configured to perform SSO
on CMC1
• G1 is configured to perform SSO on CMC2
Assuming that you want to login into G1 by using ExternalIdP, you will have to click on the SSO
buttons three times (on G1, on CMC2 and finally on CMC1). Once passed through ExternalIdP, if the
authentication is successful you will be automatically redirected to G1.
Chapter 12: Remote Collector

Topics:
• Overview
• Deployment
• Using a Guardian with connected Remote Collectors
• Troubleshooting
• Updating

In this section we will cover the Remote Collector product, an appliance that is intended to collect and
forward traffic to a Guardian. A Remote Collector is a low-demand, low-throughput appliance suitable
for installation in isolated areas (e.g., windmills, solar power fields), where many small sites are to be
monitored.

Overview
The Remote Collector has been designed to be deployed in installations that require monitoring of
many isolated locations. Remote Collectors connect to a Guardian and act as "remote interfaces",
broadening its capture capability, and thus allowing a Guardian to be applied in simple but highly-
distributed scenarios.
A Remote Collector is an appliance meant to run on less performant hardware than the Guardian or
the CMC, and its main task is simply to forward traffic to a Guardian. In some sense, a Remote
Collector is to a Guardian as a Guardian is to a CMC. There are some key differences, though. First of
all, a Remote Collector does not process sniffed traffic in any way; it just forwards it to the Guardian it
is attached to. Second, a Remote Collector has no graphical user interface. Finally, as it runs on less
performant hardware than the Guardian, a Remote Collector has a limitation on the bandwidth that it
can process.
A Guardian can be enabled to receive traffic from the Remote Collectors. When enabled it provides
an additional (virtual) network interface, called "remote-collector", which aggregates the traffic of the
Remote Collectors connected to it. The currently connected Remote Collectors can be inspected from
the "Appliances" tab.
Each Remote Collector is entitled to forward the traffic it sniffs to only one Guardian, while several
Remote Collectors can connect to the same Guardian. Traffic over the channel is encrypted with TLS,
so that it cannot be intercepted by a third party. The Firmware of a Remote Collector receives
automatic updates from the Guardian it is connected to.

Deployment
The first step to setup a Remote Collector is to deploy its Virtual Machine (VM).
The Remote Collector VM can be deployed following the steps provided in Installing on Virtual
Hardware on page 12 for the Guardian edition. The main difference is that the Remote Collector
version of the image must be used in the installation.

Guardian configuration
The Guardian has to be configured via terminal (SSH or console). In the following, assume that 1.1.1.1
is the IP address of the Remote Collector.
1. Run command n2os-enable-rc
This command will open port 6000 on the firewall, which is the one used by the Remote Collector to
send its traffic. Moreover, a new interface called "remote-collector" will appear in the list of "Network
Interfaces".

2. The synchronization of a Remote Collector towards the Guardian for the purpose of software update
is now enabled as shown in Administration / Synchronization settings. Note down the
Sync token.

3. Run command scp /etc/https_nozomi.crt admin@1.1.1.1:/tmp/


This command copies Guardian's certificate to the Remote Collector.
4. Run command ssh admin@1.1.1.1
This command connects to the Remote Collector.

5. Run command enable-me


This command enables privileged mode.
6. Run command cat /tmp/https_nozomi.crt >> /data/ssl/trusted_nozomi.crt
This command adds the Guardian's certificate to the list of the Remote Collector's trusted
certificates. These last 4 steps enable TLS communication between Guardian and Remote
Collector. Repeat this procedure for each Remote Collector to be connected.
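The certificate exchange in steps 3-6 can be condensed into the following sketch. It is self-contained for illustration: a dummy certificate and temporary files stand in for /etc/https_nozomi.crt and /data/ssl/trusted_nozomi.crt, while on the real appliances the copy happens over scp/ssh exactly as in the numbered steps.

```shell
# Dummy stand-ins for the real files used in steps 3-6:
#   /etc/https_nozomi.crt          (Guardian's certificate)
#   /data/ssl/trusted_nozomi.crt   (Remote Collector's trusted list)
GUARDIAN_CRT=$(mktemp)
TRUSTED=$(mktemp)
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...dummy...' '-----END CERTIFICATE-----' > "$GUARDIAN_CRT"
# Same append as step 6; afterwards the Guardian's certificate is trusted.
cat "$GUARDIAN_CRT" >> "$TRUSTED"
grep -c 'BEGIN CERTIFICATE' "$TRUSTED"
```

Because the trusted list is append-only, repeating the procedure for several Guard到-to-be-trusted certificates simply accumulates them in the same file.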

Remote Collector configuration


Each Remote Collector has to be configured via terminal (SSH or console). In the following, assume
that 1.2.3.4 is the IP address of the Guardian to connect to. The Remote Collector provides a TUI to
help with this setup phase; it can be started with the n2os-tui command (the command is available
after having elevated your privileges with the enable-me command).
1. Select the Remote Collector menu.

2. Select the "Set Guardian Endpoint" menu.

3. Insert the IP address of the Guardian you wish to connect to.



4. From the previous menu, select the "Set Connection Sync Token" menu. Insert the token you have
noted down using the Guardian configuration step.

5. Optionally, a bpf-filter can be added by selecting the "Set BPF Filter" menu from the previous menu.

6. Exit from the TUI.


7. Run command scp /etc/https_nozomi.crt admin@1.2.3.4:/tmp/
This command copies Remote Collector's certificate to the Guardian.
8. Run command ssh admin@1.2.3.4
This command connects to the Guardian.
9. Run command enable-me
This command enables privileged mode.
10.Run command cat /tmp/https_nozomi.crt >> /data/ssl/trusted_nozomi.crt
This command adds the Remote Collector's certificate to the list of the Guardian's trusted
certificates. These last 4 steps enable TLS communication between Remote Collector and
Guardian.

Configuration of CA-based certificates


The certificates installed by default in the Guardian and the Remote Collector are self-signed, but it
is also possible to use certificates signed by a CA, if your company policy requires it. Normally a
"certificate chain" composed of the "Root CA" and several "Intermediate CA"s is used to sign a "leaf"
certificate. If you wish to follow this approach, go through the following steps, which have to be
repeated for both the Guardian and the Remote Collector appliances.
1. Put the "leaf" certificate/key pair under /etc/https_nozomi.crt and /etc/
https_nozomi.key.
This installs your certificate in the appliance.
2. Put the "certificate chain" under /data/ssl/trusted_nozomi.crt.
This installs your certificate chain in the appliance. Any certificate signed with the chain will be
accepted as valid.
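The chain-based setup can be sanity-checked offline with openssl before deploying it to the appliances. The sketch below generates a throwaway CA and leaf certificate so it is self-contained; on the appliances, the files to check would be /etc/https_nozomi.crt (leaf) against /data/ssl/trusted_nozomi.crt (chain). All names here are illustrative.

```shell
# Generate a throwaway CA and a leaf certificate signed by it, then verify
# the leaf against the CA, as the appliances do at TLS handshake time.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=TestCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=guardian.local" \
  -keyout "$DIR/leaf.key" -out "$DIR/leaf.csr" 2>/dev/null
openssl x509 -req -in "$DIR/leaf.csr" -CA "$DIR/ca.crt" -CAkey "$DIR/ca.key" \
  -CAcreateserial -days 1 -out "$DIR/leaf.crt" 2>/dev/null
openssl verify -CAfile "$DIR/ca.crt" "$DIR/leaf.crt"
```

If the verify step fails for your real certificates, an intermediate CA is probably missing from the chain file.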

Final configuration
After all the appliances have been configured, it is necessary to reboot them for the configuration to
take effect. Alternatively, it is sufficient to run the following commands:
1. service n2osrc stop
on the Guardian
2. service n2osrs stop
on each Remote Collector

Using a Guardian with connected Remote Collectors


In this section we briefly outline some functionalities that a Guardian offers to monitor traffic with a set
of connected Remote Collectors.
The set of connected Remote Collectors can be inspected from the "Appliances" tab of a Guardian.

By selecting a Remote Collector an information pane appears on the right, showing some more
detailed information. The information includes the health status of the Remote Collector, and the
timestamp of the last received payload traffic.

The provenance of the packets is tracked internally by the Guardian and displayed in several
locations, such as in the "Nodes" tab of "Network View",

in the "Asset view",



and in the "Alerts" page.

Troubleshooting
In this section a list of the most useful troubleshooting tips for the RC is given.
1. If a Remote Collector is not appearing at all in the Appliances tab:
• Ensure that any firewall(s) between the Guardian and the Remote Collector allow traffic on TCP
port 443 (HTTPS), with the Remote Collector as source and the Guardian as destination
• Check that the tokens are correctly configured both in the Guardian and the Remote Collector
• Check the /data/log/n2os/n2osjobs.log file of the Remote Collector for connection
errors.
2. If a Remote Collector appears in the Appliances tab, but it sends no traffic (last seen packet is
empty or does not update its value):
• Ensure that any firewall(s) between the Guardian and the Remote Collector allow traffic on TCP
port 6000, with the Remote Collector as source and the Guardian as destination
• Check that the certificates have been correctly exchanged between the Guardian and the
Remote Collector, i.e., that the certificate at /etc/https_nozomi.crt of an appliance
appears listed in /data/ssl/trusted_nozomi.crt of the other appliance, or that the
certificate chain has been trusted
• Check the /data/log/n2os/n2os_rs.log file of the Remote Collector for connection
errors. In particular, errors related to certificates are logged with the error code coming directly
from the OpenSSL library. Once the code is identified, it is possible to check the corresponding
explanation at the following page: https://www.openssl.org/docs/man1.1.0/man3/
X509_STORE_CTX_get_error.html
• Make sure to restart the n2osrc and n2osrs services every time a change to the configuration or
the certificates is performed

Updating
In this section we will cover the release update and rollback operations of a Remote Collector.
Remote Collectors receive automatic updates from the Guardian they are attached to: as with a
Guardian connected to a CMC, the Remote Collector updates to the Guardian's version if its current
firmware version is older.
A Remote Collector has no graphical interface. The only other method for changing the version of a
Remote Collector is to use the manual procedure described at Software Update and Rollback on page
161.
Chapter 13: Configuration

Topics:
• Editing Configuration files
• Basic configuration rules
• Configuring nodes
• Configuring links
• Configuring variables
• Configuring protocols
• Configuring trace
• Configuring Time Machine
• Configuring retention
• Configuring Bandwidth Throttling

In this section we will cover the configuration of Nozomi Networks Solution components in detail.
Each configuration rule can be inserted in the custom n2os.conf.user configuration file (see Editing
Configuration files on page 192) and/or directly in the CLI. Different actions may be needed to get it
applied; for each configuration rule we will cover all the required details.
The CLI can be run from a text console (run the cli command) or from the web console under
Administration > CLI.

Editing Configuration files


The Nozomi Networks Solution configuration relies on text files located in /data/cfg.
In particular, /data/cfg/n2os.conf.user can be edited to fine-tune user-defined configuration
or to mass-import rules from other systems. In this section we will see how to change and apply a
configuration rule.
Please log into the text-console, either directly or through SSH, and issue the following commands.
1. Use vi or nano to edit /data/cfg/n2os.conf.user
2. Edit a configuration rule with the text editor, see the next sections for some examples.
3. Write configuration changes to disk and exit the text editor.
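As a concrete example of this workflow, the following line could be added to n2os.conf.user to disable TCP/UDP deduplication (the rule is described in Basic configuration rules below; the value is illustrative):

```
# /data/cfg/n2os.conf.user
probe deduplication enabled false
```

After saving, apply it with service n2osids reload.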

Basic configuration rules

Set traffic filter

Products Guardian, Remote Collector


Syntax bpf_filter <bpf_expression>
Description Set the BPF filter to apply on incoming traffic to limit the type and amount of
data processed by the appliance.
Parameters • bpf_expression: the Berkeley Packet Filter expression to apply on
incoming traffic. A BPF syntax reference can be accessed on the
appliance at https://<appliance_ip>/#/bpf_guide.

Where In the n2os.conf.user file.


To apply Reload with service n2osids restart
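For example, to exclude HTTPS traffic from analysis, the following line could be added to n2os.conf.user (the filter expression is illustrative; any valid BPF expression can be used):

```
bpf_filter not tcp port 443
```

Then apply it with service n2osids restart, as indicated above.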

Enable or disable management filters

Products Guardian
Syntax mgmt_filters <on|off>
Description With this rule you can switch off the filters on packets that come from/to
N2OS itself.
Parameters • on|off: choose 'off' if you want to disable the management filters (default:
on).

Where In the n2os.conf.user file.


To apply Reload with service n2osids reload

Enable or disable TCP/UDP deduplication

Products Guardian
Syntax probe deduplication enabled <status>
Description Enables or disables the deduplication analysis that N2OS performs on
TCP/UDP packets.
Parameters • status: it can be either true, to enable the feature, or false, to disable it.
(default: true)

Where In the n2os.conf.user file.


To apply Reload with service n2osids reload

Set TCP deduplication time delta

Products Guardian
Syntax probe deduplication tcp_max_delta <delta>
Description Set the desired maximum time delta, in milliseconds, within which a TCP
packet is considered a duplicate.
Parameters • delta: the value of the maximum time delta. (default: 1)

Where In the n2os.conf.user file.


To apply Reload with service n2osids reload

Set UDP deduplication time delta

Products Guardian
Syntax probe deduplication udp_max_delta <delta>
Description Set the desired maximum time delta, in milliseconds, within which a UDP
packet is considered a duplicate.
Parameters • delta: the value of the maximum time delta. (default: 1)

Where In the n2os.conf.user file.


To apply Reload with service n2osids reload
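
For example, the deduplication rules above could be combined in the
n2os.conf.user file as follows (the 5 ms deltas are illustrative values, not
recommendations):

probe deduplication enabled true
probe deduplication tcp_max_delta 5
probe deduplication udp_max_delta 5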

Set default Zone name

Products Guardian
Syntax ids configure vi zones add default <zone_name>
Description Set the default Zone name, used for nodes not matching any of the custom
defined zones. Details on the zones feature can be found in Network Graph
on page 54.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 94.

Parameters • zone_name: the name of the default zone

Where In the CLI


To apply It is applied automatically

Add Zone

Products Guardian
Syntax conf.user configure vi zones add <subnet>[,<subnet>,...]
<zone_name>
Description Add a new zone containing all the nodes in one or more specified
subnetworks. More subnetworks can be concatenated using commas. The
subnetworks can be specified using the CIDR notation (<ip>/<mask>) or
by indicating the end IPs of a range (both ends are included: <low_ip>-
<high_ip>).
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 94

Parameters • subnet: the subnetwork or subnetworks assigned to the zone; both IPv4
and IPv6 are supported
• zone_name: the name of the zone

Where In CLI.
To apply It is applied automatically
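
For example, a hypothetical zone spanning a CIDR subnet and an IP range could be
added from the CLI as follows (name and addresses are illustrative):

conf.user configure vi zones add 10.10.0.0/16,192.168.5.1-192.168.5.50 plant_floor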

Assign a level to a zone

Products Guardian
Syntax conf.user configure vi zones setlevel <level>
<zone_name>

Description Assigns the specified level to a zone. All nodes pertaining to the given zone
will be assigned the level.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 94.

Parameters • level: the level assigned to the zone


• zone_name: the name of the zone

Where In CLI.
To apply It is applied automatically

Assign a security profile to a zone

Products Guardian
Syntax conf.user configure vi zones setsecprofile
<security_profile> <zone_name>
Description Assigns the specified security profile to a zone. The visibility of the alerts
generated within the zone will follow the configured security profile.
Refer to Security Profile.

Parameters • security_profile: the security profile assigned to the zone. Values: low,
medium, high, paranoid
• zone_name: the name of the zone

Where In CLI.
To apply It is applied automatically

Add custom protocol

Products Guardian
Syntax conf.user configure probe custom-protocol <name>
<transport> <port>
Description Add a new protocol, specifying a port and a transport layer.
Parameters • name: the name of the protocol as it will be displayed in the user
interface; DO NOT use a protocol name already used by SG. E.g. one
can use MySNMP or Myhttp
• transport: the transport layer, choose "udp" or "tcp"
• port: the transport layer port used to identify the custom protocol

Where In CLI.
To apply It is applied automatically
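
For example, a hypothetical custom protocol named Myhttp carried over TCP port
8888 could be added as follows (the port is illustrative):

conf.user configure probe custom-protocol Myhttp tcp 8888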

Disabling a protocol

Products Guardian
Syntax conf.user configure probe protocol <name> enable false
Description Completely disables a protocol. This can be useful to fine tune the appliance
for specific needs. Any existing learned links will be deleted!
Parameters • name: the name of the protocol to disable

Where In CLI.

To apply It is applied automatically

Set IP grouping

Products Guardian
Syntax probe ipgroup <ip>/<mask>
Description Group multiple IP addresses into a single node. This command is
particularly useful when a large network of clients accesses the SCADA/ICS
system. To obtain a clearer view and an effective learning phase, you can
map all clients to a single node simply by specifying the netmasks (one line
for each netmask). All sections requiring the raw IP will still receive the
appropriate raw data; for instance, Trace on page 36 will show the raw IP in
the provided pcaps. WARNING: this command merges all node information
into one in an irreversible way; the information about the original nodes is
not kept.
Parameters • ip/mask: the subnetwork identifier used to group the IP addresses

Where In the n2os.conf.user file.


To apply Restart both n2osids and n2ostrace with: service n2osids restart
AND service n2ostrace restart
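
As an illustration only (not the actual N2OS implementation), the grouping
effect of an ipgroup netmask can be sketched in Python: every address falling
inside the configured subnetwork collapses into a single representative node.

```python
import ipaddress

def group_ip(ip, groups):
    """Map an address to its group's network address if it falls inside one
    of the configured ipgroup netmasks; otherwise keep it unchanged.
    Illustration only -- not the actual N2OS implementation."""
    addr = ipaddress.ip_address(ip)
    for net in groups:
        network = ipaddress.ip_network(net)
        if addr in network:
            return str(network.network_address)
    return ip

# With "probe ipgroup 10.200.0.0/16", every client in that range collapses
# into the single node 10.200.0.0; other addresses are left untouched.
print(group_ip("10.200.3.7", ["10.200.0.0/16"]))   # -> 10.200.0.0
print(group_ip("192.168.1.5", ["10.200.0.0/16"]))  # -> 192.168.1.5
```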

Set IP grouping for Public Nodes

Products Guardian
Syntax probe ipgroup public_ips <ip>
Description Group all public IP addresses into a single node (for instance, use 0.0.0.0
as the 'ip' parameter). This command is particularly useful when the
monitored network includes nodes that have routing to the Internet. All
sections requiring the raw IP will still receive the appropriate raw data;
for instance, Trace on page 36 will show the raw IP in the provided pcaps.
WARNING: this command merges all node information into one in an
irreversible way; the information about the original nodes is not kept.
Parameters • ip: the ip to map all Public Nodes to

Where In the n2os.conf.user file.


To apply Restart both n2osids and n2ostrace with: service n2osids restart
AND service n2ostrace restart

Skip Public Nodes Grouping for a subnet

Products Guardian
Syntax probe ipgroup public_ips_skip <ip>/<mask>
Description This is useful when the monitored network has public addressing that has
to be monitored (e.g. public addressing used as private, or public addresses
that appear in security blacklists).
Parameters • ip/mask: the subnetwork identifier to skip

Where In the n2os.conf.user file.


To apply Restart both n2osids and n2ostrace with: service n2osids restart
AND service n2ostrace restart
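
For example, the following hypothetical n2os.conf.user entries group all public
nodes into the single node 0.0.0.0 while keeping one public subnet monitored
individually (the skipped subnet is an illustrative value):

probe ipgroup public_ips 0.0.0.0
probe ipgroup public_ips_skip 198.51.100.0/24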

Set special Private Nodes whitelist

Products Guardian
Syntax vi private_ips <ip>/<mask>
Description This rule will set the is_public property of nodes matching the provided mask
to false. This is useful when the monitored network has public addressing
used as private (e.g. in violation of RFC 1918).
Parameters • ip/mask: the subnetwork identifier to treat as private; both IPv4 and IPv6
are supported

Where In the n2os.conf.user file.


To apply Restart n2osids with: service n2osids restart

Set GUI logout timeout

Products CMC, Guardian


Syntax conf.user configure user max_idle_minutes
<timeout_in_minutes>
Description Change the default inactivity timeout of the GUI. This timeout is used to
decide when to log out the current session when the user is not active.
Parameters • timeout_in_minutes: the number of minutes to wait before logging out.

Where In CLI.
To apply It is applied automatically

Enable Syslog capture feature

Products Guardian
Syntax conf.user configure probe protocol syslog capture_logs
<true | false>
Description With this configuration rule you can enable the passive capture of syslog
events. This is useful when you want to forward them to a SIEM; for further
details see Syslog Forwarder on page 92
Parameters • <true | false>: true in case you want to enable it, false otherwise.

Where In CLI.
To apply It is applied automatically

Configuring nodes

Set node label

Products Guardian
Syntax ids configure vi node <ip> label <label>
Description Set the label of a node in the Environment; the label will appear in
Environment > Network View > Graph, in Environment >
Network View > Nodes and in Environment > Process View >
Variables
Parameters • ip: the IP address of the node
• label: the label that will be displayed in the user interface

Where In CLI.
To apply It is applied automatically

Enable or disable node

Products Guardian
Syntax ids configure vi node <ip> state <state_value>
Description This directive enables or disables a node. The setting affects the graph: a
disabled node will not be displayed.
Parameters • ip: the IP address of the node
• state_value: it can be either enabled or disabled

Where In CLI.
To apply It is applied automatically

Delete node

Products Guardian
Syntax ids configure vi node <ip> :delete
Description Delete a node from the Environment
Parameters • ip: the IP of the node to delete

Where In CLI.
To apply It is applied automatically
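
For example, the node rules above could be used from the CLI as follows (the IP
address and label are hypothetical):

ids configure vi node 10.0.1.20 label PLC-01
ids configure vi node 10.0.1.20 state disabled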

Define a cluster

Products Guardian
Syntax conf.user configure vi cluster <ip> <name>
Description This command defines a High Availability cluster of observed nodes. In
particular, it accelerates the learning phase by joining the learning data of
two sibling nodes, and groups nodes by cluster in the graph.
Parameters • name: the name of the cluster
• ip: the ip address of a cluster node

Where In CLI.

To apply It is applied automatically



Configuring links

Set link last activity check

Products Guardian
Syntax conf.user configure vi link <ip1> <ip2>
<protocol> :check_last_activity <seconds>
Description Set the last activity check on a link; an alert will be raised if the link remains
inactive for more than the specified seconds
Parameters • ip1, ip2: the IPs of the two nodes involved in the communication
• protocol: the protocol
• seconds: the communication timeout

Where In CLI.
To apply It is applied automatically

Set link persistency check

Products Guardian
Syntax conf.user configure vi link <ip1> <ip2>
<protocol> :is_persistent
Description Set the persistency check on a link; if a new handshake is detected an alert
will be raised
Parameters • ip1, ip2: the IPs of the two nodes involved in the communication
• protocol: the protocol

Where In CLI.
To apply It is applied automatically
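
For example, the following hypothetical commands raise an alert if a link stays
inactive for more than five minutes, and another alert whenever a new handshake
is detected (IPs and protocol name are illustrative):

conf.user configure vi link 10.0.1.20 10.0.1.30 modbus :check_last_activity 300
conf.user configure vi link 10.0.1.20 10.0.1.30 modbus :is_persistent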

Delete link

Products Guardian
Syntax ids configure vi link <ip1> <ip2> :delete
Description Delete a link
Parameters • ip1, ip2: the IPs identifying the link

Where In CLI.
To apply It is applied automatically

Delete protocol

Products Guardian
Syntax ids configure vi link <ip1> <ip2> <protocol> :delete
Description Delete a protocol from a link
Parameters • ip1, ip2: the IPs identifying the link
• protocol: the protocol of the link to delete

Where In CLI.
To apply It is applied automatically

Delete function code

Products Guardian
Syntax ids configure vi link <ip1> <ip2> <protocol> fc
<func_code> :delete
Description Delete a function code from a protocol
Parameters • ip1, ip2: the IPs identifying the link
• protocol: the protocol of the link
• func_code: the function code to delete

Where In CLI.
To apply It is applied automatically

Configuring variables

Set default variable history

Products Guardian
Syntax ids configure vi variable default history <enabled |
disabled>
Description Set whether the variable history is enabled; when not set, it is disabled.
The amount of history maintained can be configured in the "Variable history
retention" section in Configuring retention on page 211
NOTE: when "enabled", Guardian performance can be affected
depending on the amount of variables and the update rate

Parameters • <enabled | disabled>: use "enabled" when you want to enable it,
"disabled" otherwise

Where In CLI.
To apply It is applied automatically

Set variable history

Products Guardian
Syntax ids configure vi variable <var_key> history <enabled |
disabled>
Description Set whether the history is enabled for a specific variable; when not set, it is
disabled. The amount of history maintained can be configured in the
"Variable history retention" section in Configuring retention on page 211
NOTE: when "enabled", Guardian performance can be affected
depending on the update rate of the variable
Parameters • var_key: the variable identifier
• <enabled | disabled>: use "enabled" to enable the history, "disabled"
otherwise

Where In CLI.
To apply It is applied automatically

Set variable label

Products Guardian
Syntax ids configure vi variable <var_key> label <label>
Description Set the label for a variable, the label will appear in the Environment >
Process View sections
Parameters • var_key: the variable identifier
• label: the label displayed in the user interface

Where In CLI.
To apply It is applied automatically

Set variable unit of measure

Products Guardian
Syntax ids configure vi variable <var_key> unit <unit>
Description Set a unit of measure on a variable.
Parameters • var_key: the variable identifier
• unit: the unit of measure displayed in the user interface

Where In CLI.
To apply It is applied automatically

Set variable offset

Products Guardian
Syntax ids configure vi variable <var_key> offset <offset>
Description Set the offset of the variable, used to map the 0 value of the variable.
Parameters • var_key: the variable identifier
• offset: the offset value used to calculate the final value of the variable

Where In CLI.
To apply It is applied automatically

Set variable scale

Products Guardian
Syntax ids configure vi variable <var_key> scale <scale>
Description Set the scale of the variable, used to define the full range of the variable.
Parameters • var_key: the variable identifier
• scale: the scale value used to calculate the final value of the variable

Where In CLI.
To apply It is applied automatically

Set variable last update check

Products Guardian
Syntax conf.user configure vi variable
<var_key> :check_last_update <seconds>
Description Set the last update check on a variable; if the variable value is not updated
for more than the specified seconds, an alert is raised
Parameters • var_key: the variable identifier
• seconds: the timeout after which a stale variable alert will be raised

Where In CLI.
To apply It is applied automatically
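
For example, assuming a hypothetical variable identifier pump1/modbus/hr-4, the
variable could be labeled, given a unit and checked for stale data as follows
(all values are illustrative):

ids configure vi variable pump1/modbus/hr-4 label TankPressure
ids configure vi variable pump1/modbus/hr-4 unit bar
conf.user configure vi variable pump1/modbus/hr-4 :check_last_update 600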

Set variable quality check

Products Guardian

Syntax conf.user configure vi variable <var_key> :check_quality
<seconds>
Description Set the quality check on a variable; if the value quality remains invalid for
more than the specified seconds, an alert is raised
Parameters • var_key: the variable identifier
• seconds: the maximum amount of consecutive seconds the variable can
have an invalid quality

Where In CLI.
To apply It is applied automatically

Set a variable critical state

Products Guardian
Syntax conf.user configure cs variable <id> <var_key> [<|>|=]
<value>
Description Define a new custom critical state on a single variable that will be raised on
violation of the defined range.
Parameters • id: a unique ID for this critical state
• var_key: the variable identifier
• operator: the comparison operator to evaluate for the critical state to
rise. For instance, if the > operator is specified, the variable will have to
be higher than value to trigger the critical state.
• value: the variable value to check for

Where In CLI.
To apply It is applied automatically

Set a multiple critical state

Products Guardian
Syntax conf.user configure cs multi <id> variable c1 <var_key>
[<|>|=] <value> ^ variable c2 <var_key> [<|>|=] <value>
[^ ...]
Description Creates a multi-valued critical state, that is, an AND (^) expression of the
single-variable "variable critical states" described above.
Parameters • id: a unique ID for this critical state
• var_key: the variable identifier
• operator: the comparison operator to evaluate for the critical state to
rise. For instance, if the > operator is specified, the variable will have to
be higher than value to trigger the critical state.
• value: the variable value to check for

Where In CLI.
To apply It is applied automatically
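
As an illustration of the AND (^) semantics described above (not the actual
N2OS implementation; variable keys and thresholds are hypothetical), a multi
critical state can be sketched in Python:

```python
import operator

# Map the CLI comparison symbols to Python operators.
OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq}

def critical_state(conditions, values):
    """Evaluate an AND (^) expression of single-variable critical states.
    conditions: list of (var_key, op, threshold) tuples joined by AND.
    values: current value of each variable, keyed by var_key.
    Illustration only -- not the actual N2OS implementation."""
    return all(OPS[op](values[key], threshold)
               for key, op, threshold in conditions)

# cs multi example: pressure > 120 ^ level < 10 (hypothetical variables)
conds = [("pump1/pressure", ">", 120), ("tank1/level", "<", 10)]
print(critical_state(conds, {"pump1/pressure": 130, "tank1/level": 5}))   # True
print(critical_state(conds, {"pump1/pressure": 130, "tank1/level": 50}))  # False
```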

Configuring protocols

Set CA size for iec101 protocol decoder

Products Guardian
Syntax conf.user configure probe protocol iec101 ca_size <size>
Description The iec101 CA size can vary across implementations; with this configuration
rule the user can customize the setting for their own environment
Parameters • <size>: the size in bytes of the CA

Where In CLI.
To apply It is applied automatically

Set LA size for iec101 protocol decoder

Products Guardian
Syntax conf.user configure probe protocol iec101 la_size <size>
Description The iec101 LA size can vary across implementations; with this configuration
rule the user can customize the setting for their own environment
Parameters • <size>: the size in bytes of the LA

Where In CLI.
To apply It is applied automatically

Set IOA size for iec101 protocol decoder

Products Guardian
Syntax conf.user configure probe protocol iec101 ioa_size
<size>
Description The iec101 IOA size can vary across implementations; with this
configuration rule the user can customize the setting for their own environment
Parameters • <size>: the size in bytes of the IOA

Where In CLI.
To apply It is applied automatically

Set an arbitrary amount of bytes to skip before decoding iec101 protocol

Products Guardian
Syntax conf.user configure probe protocol iec101 bytes_to_skip
<amount>
Description Depending on the hardware configuration, iec101 frames can be prefixed
with a fixed amount of bytes; with this setting Guardian can be adapted to
the peculiarities of the environment.
Parameters • <amount>: the amount of bytes to skip

Where In CLI.
To apply It is applied automatically
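
For example, a hypothetical environment using a 2-byte CA, a 1-byte LA, a 3-byte
IOA and a 4-byte prefix before each frame could be configured as follows (all
sizes are illustrative, not recommendations):

conf.user configure probe protocol iec101 ca_size 2
conf.user configure probe protocol iec101 la_size 1
conf.user configure probe protocol iec101 ioa_size 3
conf.user configure probe protocol iec101 bytes_to_skip 4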

Enable the Red Eléctrica Española semantic for iec102 protocol

Products Guardian
Syntax conf.user configure probe protocol iec102 ree <enabled|
disabled>
Description There is a standard from Red Eléctrica Española which changes the
semantics of the iec102 protocol; after enabling this setting the iec102
protocol decoder will be compliant with the REE standard.
Parameters • <enabled|disabled>: specify enabled to enable the Red Eléctrica
Española semantic

Where In CLI.
To apply It is applied automatically

Set the subnet in which the iec102 protocol will be enabled

Products Guardian
Syntax conf.user configure probe protocol iec102 subnet
<subnet>
Description The detection of iec102 can lead to false positives; this rule lets the user
enable the detection on a specific subnet only
Parameters • <subnet>: a subnet in the CIDR notation

Where In CLI.
To apply It is applied automatically

Enable iec102 on the specified port

Products Guardian
Syntax conf.user configure probe protocol iec102 port <port>
Description The detection of iec102 can lead to false positives; this rule lets the user
enable the detection on a specific port only
Parameters • <port>: the TCP port

Where In CLI.
To apply It is applied automatically
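
For example, iec102 detection could be restricted to a single subnet and TCP
port as follows (both values are illustrative):

conf.user configure probe protocol iec102 subnet 10.30.0.0/24
conf.user configure probe protocol iec102 port 7020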

Enable or disable the persistence of the connections for Ethernet/IP Implicit

Products Guardian
Syntax conf.user configure probe protocol ethernetip-implicit
persist-connection <true|false>
Description The Ethernet/IP Implicit decoder of Guardian is able to detect handshakes
that are then used to decode variables. In some scenarios these
handshakes are not common but it's very important to persist them so that
Guardian can continue to decode variables after a reboot or an upgrade.
By enabling this option Guardian will store on disk the data needed to
autonomously reproduce the handshake phase after a reboot.
Parameters • <true|false>: a boolean to enable or disable the feature

Where In CLI.

To apply It is applied automatically

Set the subnet in which the tg102 protocol will be enabled

Products Guardian
Syntax conf.user configure probe protocol tg102 subnet <subnet>
Description The detection of tg102 can lead to false positives; this rule lets the user
enable the detection on a specific subnet only
Parameters • <subnet>: a subnet in the CIDR notation

Where In CLI.
To apply It is applied automatically

Set the port range in which the tg102 protocol will be enabled

Products Guardian
Syntax conf.user configure probe protocol tg102 port_range
<src_port>-<dst_port>
Description The detection of tg102 can lead to false positives; this rule lets the user
enable the detection on a specific port range only
Parameters • <src_port>: the starting port of the range
• <dst_port>: the ending port of the range

Where In CLI.
To apply It is applied automatically

Set the subnet in which the tg800 protocol will be enabled

Products Guardian
Syntax conf.user configure probe protocol tg800 subnet <subnet>
Description The detection of tg800 can lead to false positives; this rule lets the user
enable the detection on a specific subnet only
Parameters • <subnet>: a subnet in the CIDR notation

Where In CLI.
To apply It is applied automatically

Set the port range in which the tg800 protocol will be enabled

Products Guardian
Syntax conf.user configure probe protocol tg800 port_range
<src_port>-<dst_port>
Description The detection of tg800 can lead to false positives; this rule lets the user
enable the detection on a specific port range only
Parameters • <src_port>: the starting port of the range
• <dst_port>: the ending port of the range

Where In CLI.
To apply It is applied automatically

Configuring trace

Trace size and timeout


A trace is a sequence of packets saved to disk in the pcap format. The number of packets in a
trace is fixed: when a trace of N packets is triggered, Guardian starts by writing to disk the N/2
packets that were sniffed before the trigger, then tries to save another N/2 packets and finalizes
the write operation; at this point the trace can be downloaded. To avoid a trace being pending for
too long there is also a timeout: when the time expires the trace is saved even if the desired
number of packets has not been reached.
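
The N/2 behaviour described above can be sketched in Python (an illustration of
the documented logic, not the actual implementation):

```python
from collections import deque

def build_trace(history, live_packets, trace_size):
    """When a trace of trace_size packets is triggered, take the last
    trace_size/2 packets already sniffed (the pre-trigger ring buffer)
    and then up to trace_size/2 packets captured afterwards.
    Illustration of the documented behaviour, not the real implementation."""
    half = trace_size // 2
    buffer = deque(history, maxlen=half)   # keeps only the last N/2 packets
    return list(buffer) + live_packets[:half]

# A trace of 6 packets: 3 sniffed before the trigger, 3 captured after it.
print(build_trace([1, 2, 3, 4, 5], [6, 7, 8, 9], 6))  # -> [3, 4, 5, 6, 7, 8]
```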

Retention
The number of traces the Guardian can keep is limited. It is possible to configure the maximum number
of traces saved on disk and the minimum percentage of disk free before the old traces will be deleted.

Figure 145: A schematic illustration of the trace saving process

Trace parameters
The parameters involved in the process of saving a trace can be configured in the file /data/cfg/
n2os.conf.user, here is an explanation of each parameter:

name                         default value  description
trace trace_size             5000           The maximum number of packets that will be
                                            stored in the trace file.
trace trace_buffer_size      20000          The buffer used to keep the last sniffed
                                            packets; should never be less than trace_size.
trace trace_request_timeout  60             The time in seconds after which the trace will
                                            be finalized even if the trace_size parameter
                                            is not fulfilled.
trace max_pcaps_to_retain    100            The maximum number of pcap files to keep on
                                            disk; when this number is exceeded the oldest
                                            traces will be deleted.
trace min_disk_free          10             The minimum percentage of free disk space
                                            under which the oldest traces will be deleted.

An example of trace configuration in the /data/cfg/n2os.conf.user file

trace trace_size 2000


trace trace_buffer_size 6000
trace trace_request_timeout 60
trace max_pcaps_to_retain 200

trace min_disk_free 25

Configuring Time Machine


This section describes how to configure the Time Machine functionality of the Nozomi Networks Solution.

Set snapshot interval

Products CMC, Guardian


Syntax tm snap interval <interval_seconds>
Description Set the desired interval between snapshots, in seconds.
Parameters • interval_seconds: the amount of seconds between snapshots (default:
3600)

Where In the n2os.conf.user file.


To apply Restart with service n2osjobs restart

Set snapshot retention

Products CMC, Guardian


Syntax tm snap retention <snapshot_to_keep>
Description Set the desired amount of snapshots to keep. Older snapshots will be
deleted and overwritten.
Parameters • snapshot_to_keep: the overall amount of snapshots to keep (default: 50)

Where In the n2os.conf.user file.


To apply Restart with service n2osjobs restart

Enable or disable automatic snapshot for each alert

Products Guardian
Syntax tm snap on_alert <status>
Description Enable or disable taking a snapshot for each alert.
Parameters • status: it can be either true, to enable the feature, or false, to disable it.
(default: false)

Where In the n2os.conf.user file.


To apply Restart with service n2osjobs restart
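
For example, the following hypothetical n2os.conf.user entries take a snapshot
every 30 minutes, keep the last 100 snapshots and also take one for each alert
(all values are illustrative):

tm snap interval 1800
tm snap retention 100
tm snap on_alert true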

Configuring retention
Retention of historical data is controlled for each persisted entity by a configuration entry. Modify it to
extend or reduce the default retention.

Alerts retention

Products CMC, Guardian


Syntax conf.user configure retention alert rows
<rows_to_retain>
Description Set the amount of alerts to retain
Parameters • rows_to_retain: the number of rows to keep (default: 500000)

Where In CLI.
To apply It is applied automatically

Trace requests retention

Products Guardian
Syntax conf.user configure retention trace_request rows
<rows_to_retain>
Description Set the amount of trace requests to retain
Parameters • rows_to_retain: the number of rows to keep (default: 10000)

Where In CLI.
To apply It is applied automatically

Link events retention

Products Guardian
Syntax conf.user configure retention link_event rows
<rows_to_retain>
Description Set the amount of link events to retain
Parameters • rows_to_retain: the number of rows to keep (default: 2500000)

Where In CLI.
To apply It is applied automatically

Captured urls retention

Products Guardian
Syntax conf.user configure retention captured_url rows
<rows_to_retain>
Description Set the amount of captured "urls" (http queries, dns queries, etc.) to retain
Parameters • rows_to_retain: the number of rows to keep (default: 10000)

Where In CLI.
To apply It is applied automatically

Variable history retention

Products Guardian
Syntax conf.user configure retention variable_history rows
<rows_to_retain>
Description Set the amount of variable historical values to retain
Parameters • rows_to_retain: the number of rows to keep (default: 1000000)

Where In CLI.
To apply It is applied automatically

Uploaded PCAPs retention

Products Guardian
Syntax conf.user configure retention input_pcap rows
<files_to_retain>
Description Set the amount of PCAP files to retain
Parameters • files_to_retain: the number of files to keep (default: 10)

Where In CLI.
To apply It is applied automatically
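
For example, the alerts retention could be raised to one million rows from the
CLI as follows (the value is illustrative):

conf.user configure retention alert rows 1000000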

Configuring Bandwidth Throttling


It is possible to limit the bandwidth that an appliance has at its disposal by specifying the maximum
amount of allowed traffic.

Products Guardian
Syntax system traffic_shaping bandwidth <max_bandwidth>
Description Set the maximum bandwidth that the appliance can use.
Parameters • max_bandwidth: the bandwidth limit (default: no limitation)

Where In the n2os.conf.user file.


To apply Reboot the machine

For example, we can set a bandwidth limit of 2Mb with the following configuration command:

system traffic_shaping bandwidth 2Mb

Notice that this command affects only the appliance on which it is executed; its effects are not
propagated to other appliances.
Chapter

14

Compatibility reference

In this chapter you will find compatibility information about
Nozomi Networks products.

Topics:
• SSH compatibility

SSH compatibility

Supported SSH protocols (since 19.0.4)

Function Algorithms
Key exchange curve25519-sha256@libssh.org
diffie-hellman-group-exchange-sha256
diffie-hellman-group14-sha256
diffie-hellman-group16-sha512
diffie-hellman-group18-sha512

Ciphers chacha20-poly1305@openssh.com
aes128-gcm@openssh.com
aes256-gcm@openssh.com
aes128-ctr
aes192-ctr
aes256-ctr

MACs umac-128-etm@openssh.com
hmac-sha2-256-etm@openssh.com
hmac-sha2-512-etm@openssh.com
hmac-sha2-512@openssh.com

Host Key Algorithms ssh-rsa
ssh-dss
ssh-ed25519
ecdsa-sha2-nistp384
ecdsa-sha2-nistp521
