Akamai Guardicore Segmentation Admin User Guide
Version 41
Administration Guide
Contents
CENTRA ADMINISTRATOR RESPONSIBILITIES
OVERVIEW
GUARDICORE MANAGEMENT SERVER
  Management Layer Services
AGGREGATORS
  How Aggregators Connect to Agents
COLLECTORS
TYPES OF COLLECTORS
  ESX Collector
  SPAN Collector
  VPC Flow Logs Collector
  IPFix Collector
  IP Flows Collector
AGENTS
  Agent Connections
OVERVIEW
WHEN SHOULD THE RETENTION POLICY BE CHANGED?
CONFIGURING A RETENTION POLICY FOR CENTRA ES
  Listing Existing Indices and Storage
  Formulating a Retention Policy
  Using the CLI to Configure Retention
OTHER ES DATA TYPES AND HOW TO CONTROL THEM
TROUBLESHOOTING
UPGRADING
FUNCTIONALITY
HIGH LEVEL ARCHITECTURE OVERVIEW
COMPONENTS OVERVIEW
• Ensure that components are properly integrated with your environment, including
making sure that the required ports and connections are open.
• Troubleshoot: in many instances the administrator can solve a problem directly.
Where this is not possible, the administrator should contact Guardicore support.
Overview
Guardicore Centra gathers data on flows in your system by deploying several types of software
components: Agents, Aggregators, and Collectors. All of the information is sent to the
Guardicore Management server which provides a single point of control for all data received by
the components. The Management server analyzes, enriches, and integrates the data so that it
can be used to provide a clear visualization of information flows in your system, as well as to
provide alerts and enforcement of security policies that regulate information flows.
• Agents are deployed on each device in your network and are capable of sending
information that reveals the source and destination of flows, rerouting suspicious flows to
the Deception server (honeypot), and enforcing security policies.
• Virtual machines called Aggregators gather and process the data gathered from Agents,
and communicate with the Guardicore Management server.
• A Management Server receives, analyzes, enriches, and manages the collected data.
For installations exceeding 500 Agents, the Management layer is deployed in a clustered manner
that supports high-availability and scalability.
Management Layer Services
The Management layer comprises the following main services:
Master service
The Master service is the orchestrator of all services running within the Management layer. In
addition, it exposes the system’s REST endpoint for UI and API usage, and is the communication
gateway for the system’s distributed components (Aggregators, Collectors and Deception
servers). Master instances are also Slaves, capable of fulfilling Slave duties as described below.
Slave
A Slave instance executes the Management application workers, such as data processing, policy
matching, alert triggering, health monitoring, etc. Multiple Slave instances provide the
application workers with HA. This component scales out linearly according to the number of
Agents and Collectors.
RabbitMQ
The RabbitMQ service is a messaging mechanism for messages exchanged between the system’s
components, directing them to the right node and queue and storing them until consumed. For
automatic failover of this service, two RabbitMQ instances can be configured.
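The broker role described above (messages directed to a named queue and stored until consumed) can be pictured with a minimal in-memory sketch. This is illustrative only and mimics the publish/consume pattern, not RabbitMQ's actual API; the queue name is hypothetical:

```python
from collections import defaultdict, deque

# In-memory stand-in for the broker: one FIFO queue per routing key.
queues = defaultdict(deque)

def publish(queue_name, message):
    """Store the message on the named queue until a consumer takes it."""
    queues[queue_name].append(message)

def consume(queue_name):
    """Deliver the oldest pending message from the named queue."""
    return queues[queue_name].popleft()

publish("policy-updates", "rule-added")
publish("policy-updates", "rule-removed")
print(consume("policy-updates"))  # → rule-added (FIFO order)
```

In a real deployment a client library would publish to RabbitMQ exchanges instead; the sketch only shows why ordering and buffering matter when a consumer is temporarily down.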
ElasticSearch
An ElasticSearch database is used to store the network flow data collected by the system’s
components (the Reveal map). This component is scaled out according to the number of Agents
and Collectors and the data retention policy requirement. Within an ElasticSearch cluster, HA of
both nodes and data redundancy can be configured.
InfluxDB
The InfluxDB database is used to store the recent health data collected from the system’s
components. The availability of this service affects health monitoring functionality only.
Aggregators
An Aggregator is a VM that aggregates and de-duplicates data it receives from its associated
Agents and then sends it to the Management Server. To support scaling, a single Aggregator can
be deployed per hundreds of Agents. In addition to gathering and sending the data to the
Management Server, the Aggregator manages the configuration of associated Agents.
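Because each Aggregator serves a bounded number of Agents, deployments are typically sized by dividing the total Agent count by per-Aggregator capacity. A back-of-the-envelope sketch; the capacity figure is an assumption for illustration, since actual capacity depends on allocated resources:

```python
import math

def aggregators_needed(total_agents, agents_per_aggregator):
    """Minimum number of Aggregators to cover all Agents, rounding up."""
    return math.ceil(total_agents / agents_per_aggregator)

# Example: 4,500 Agents with an assumed capacity of 2,000 Agents each.
print(aggregators_needed(4500, 2000))  # → 3
```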
Both Aggregators and Collectors integrate with various orchestration layers such as VMWare,
AWS, Kubernetes, etc. This enables the automated pulling of asset information, labels and more
into the Centra™ platform.
Depending on allocated compute resources, a single Aggregator can support on average
between 200 and 2,000 Agents with the Micro-Segmentation feature set (Reveal + Enforcement +
How Aggregators Connect to Agents
Each Agent connects to a Guardicore Aggregator server over SSL, with a certain SNI (Server
Name Indication). The connection is always initiated by the Agent. The Aggregator and the
Management server differentiate between Agents by a unique ID generated on the Agent.
The Aggregator handles new incoming connections with HAProxy, which determines the Agent
type by the SNI and forwards the connection to the relevant service, depending on the type of
Agent (see the section on Agents for a description of the types of Agents and their associated
services).
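The SNI-based dispatch described above amounts to a lookup from SNI to backend service. The sketch below is illustrative only; the SNI values and service names are hypothetical, not Centra's actual configuration:

```python
# Hypothetical SNI -> backend mapping, mimicking HAProxy's role on the
# Aggregator: inspect the TLS SNI and hand the connection to the service
# that handles that Agent type.
SNI_BACKENDS = {
    "reveal.aggregator.local": "reveal-service",
    "enforcement.aggregator.local": "enforcement-service",
}

def route_by_sni(sni):
    """Return the backend for a known SNI; reject unknown clients."""
    backend = SNI_BACKENDS.get(sni)
    if backend is None:
        raise ValueError("unknown SNI, connection rejected: " + sni)
    return backend

print(route_by_sni("reveal.aggregator.local"))  # → reveal-service
```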
The Aggregator sends commands and requests to the Agents and gets responses. For example, in
the case of Reveal modules, the Aggregator sends a start-monitoring command that starts a
monitoring thread. Deception and Enforcement modules can push messages to the Aggregator
as well.
Aggregators can be configured either globally or individually.
Collectors integrate with the switching infrastructure to perform the following functions:
• Reporting layer 4 network-flow information to Management. This data is used to gain wide
visibility across the network, visualizing and alerting on traffic with which Guardicore Agents are
not associated. It also allows IP Reputation analysis.
• In installations that do not enable the Agent Deception module, detecting failed flows and
forwarding them for Deception investigation.
• Detecting network scanning activity.
• Logging DNS traffic and allowing DNS Reputation analysis.
On physical infrastructure, Collectors connect to SPAN/TAP ports or integrate with 3rd party
NPB solutions. On VMware ESXi, Collectors use a promiscuous port group to receive a copy of
all traffic traversing the selected vSwitch(es).
Collectors relay data to the Guardicore Management server for further analysis and integration
into Guardicore’s Reveal charts. Collectors are also able to detect suspicious flows, redirect them
to a SPAN port for further analysis, and, where warranted, divert them to the Deception server
(honeypot).
You deploy Collectors during the installation of Guardicore Centra. During installation you can
choose to deploy Collectors in two ways:
• Use Guardicore’s deployment tool (GuarDeployer) to automatically deploy multiple Collectors.
–OR–
• Manually deploy and configure each Collector separately.
During installation, wizards guide you through the steps of deploying the various types of
Collectors. As of release 40, you can choose to deploy five types of Guardicore Collectors: ESX
Collector, SPAN Collector, AWS VPC Flow Logs Collector, IPFix Collector, and IP Flows Collector.
The ESX Collector is a VM that integrates with ESX hosts and should be deployed as a VM on
each protected hypervisor, fixed to the host (make sure vMotion is disabled). The standard
ESX Collector analyzes communication flows sent to a SPAN port by a VSS (Virtual Standard Switch).
The SPAN Collector is deployed as a VM for physical networks and analyzes communication
flows sent by a switch to a SPAN port. More specifically, it receives traffic for inspection from
SPAN ports, network taps or Network Packet Brokers (NPBs). This Collector requires a return
port back to the network to be able to perform packet redirection.
Guardicore’s AWS VPC Flow Logs Collector provides a way to inspect all the flows between the
different cloud assets within an Agentless cloud network such as AWS. The Collector gathers
logs from the AWS VPC Flow Logs feature (which publishes the information to Amazon
CloudWatch Logs and Amazon S3) and sends it to the Guardicore Management server. The
Management server then integrates the log information into Reveal, Guardicore’s Visibility
module, where it provides a clear view of the flows within the cloud environment.
VPC flow logs can be used to:
• Troubleshoot why specific traffic is not reaching a destination, which in turn helps you
diagnose overly restrictive security group rules.
• Monitor, as a security tool, the traffic that is reaching your environment.
In terms of policy, only alerting is supported; there is no enforcement.
To allow VPC flow logs, install a dedicated Collector during installation and configure VPC flow
logs in AWS orchestration.
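To make the data flow concrete: each VPC Flow Logs record (version 2, AWS's default format) is a space-separated line. A minimal parser sketch, with the field order taken from AWS's documented default record format; the sample record is illustrative:

```python
# Field order of the AWS VPC Flow Logs default (version 2) record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Split one flow-log line into a field-name -> value dict."""
    return dict(zip(FIELDS, line.split()))

record = parse_flow_record(
    "2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
print(record["dstaddr"], record["dstport"], record["action"])
```

The `action` field (ACCEPT/REJECT) is what surfaces in Reveal as allowed versus blocked cloud traffic.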
The information from the IP Flows Collector enables administrators to better understand traffic
flows and develop better security policies for allowing or blocking traffic. The IP Flows Collector
provides valuable information that is unavailable from other types of Collectors. For example, it
injects switch and port information as metadata on identified assets.
Centra’s IP Flows Collector can be deployed during Centra installation and supports three protocols:
NetFlow v5 and v9, IPFIX, and sFlow.
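Of these protocols, NetFlow v5 has the simplest wire format: a fixed 24-byte packet header followed by fixed-size flow records. A sketch of parsing just the header, with the field layout following the published NetFlow v5 format (the packet here is synthetic):

```python
import struct

def parse_netflow_v5_header(data):
    """Unpack the 24-byte NetFlow v5 packet header (network byte order)."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = struct.unpack(
        "!HHIIIIBBH", data[:24])
    if version != 5:
        raise ValueError("not a NetFlow v5 packet")
    return {"version": version, "record_count": count,
            "sequence": flow_sequence}

# Build a synthetic header for demonstration: 3 flow records, sequence 42.
packet = struct.pack("!HHIIIIBBH", 5, 3, 100000, 1700000000, 0, 42, 0, 0, 0)
print(parse_netflow_v5_header(packet))
```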
An Agent can include up to four separate modules: Reveal, Deception, Detection, and
Enforcement:
• Reveal Agents provide process-level visibility and file reputation. They collect process-
level information on all connections, including protocols, ports, and corresponding
processes (path, user, command line, hash, etc.).
• Enforcement Agents block traffic based on network-level and/or process-level policy, and
process DNS requests and replies.
• Deception Agents detect failed connection attempts and redirect them to a Deception Server for
further investigation. (The Deception Server manages a farm of multiple honeypots of different
flavors, Windows and Linux.)
Note: Deception Agents have several roles that parallel those of an ESX Collector, and must not be
installed on virtual servers hosted on ESXi hypervisors already protected by ESX
Collectors. When installing the system, the Guardicore Solution Center determines the optimal
deployment for the client.
In addition, there is a Controller module, and two channels, Reveal and Enforcement, that
connect the Agent to the Aggregator as explained in the next section.
Agent Connections
Each Agent connects to a Guardicore Aggregator server over SSL. The Aggregator and the
Management server differentiate between Agents by a unique ID generated on the Agent. The
Aggregator handles new incoming connections using HAProxy, which determines the Agent type
by the SNI and forwards the connection to the relevant service.
The interface to the Aggregator is implemented using two channels, gc-channel Reveal and gc-
channel Enforcement, which are responsible for communication with the Aggregator.
If the Aggregator is disconnected from the Agents, the channels try to reconnect and,
if not successful, move to the next Aggregator in the list (if there is one).
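The reconnect behavior above can be sketched as an ordered failover loop. The host names and the connector stub are hypothetical; a real Agent channel would attempt a TLS connection instead:

```python
def connect_with_failover(aggregators, try_connect):
    """Try each configured Aggregator in order; return the first that accepts."""
    for host in aggregators:
        if try_connect(host):
            return host
    raise ConnectionError("no Aggregator reachable")

# Stubbed example: the first Aggregator in the list is down.
reachable = {"aggr-2.example.local"}
chosen = connect_with_failover(
    ["aggr-1.example.local", "aggr-2.example.local"],
    lambda host: host in reachable,
)
print(chosen)  # → aggr-2.example.local
```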
In cases where Agents cannot be deployed, you can deploy Collectors. Although Collectors
cannot enforce policies, Collectors and Aggregators perform essentially the same functions: they
gather data on information flows in the system (from Agents, in the case of Aggregators, or from
switches and logs, in the case of Collectors) and send the data to the Management server for
further analysis and integration.
This section explains how Centra derives policy rules for Agents.
Input chain: the Agent is the destination of the incoming flow. If a specific Agent is included in
the destination of a rule, the rule will be derived to the Input chain.
Output chain: the Agent is the source of the outbound flow. If a specific Agent is included in the
source of a rule, the rule will be derived to the Output chain.
Note: A rule can be derived both to the Input and the Output chains: for example, Any → Label
that includes the Agent’s asset (in this case, the Agent is part of the source and also a part of the
destination).
Implicit rules: these are default rules that control traffic to Agents before any policies are
published. In general, the default rule is Allow.
Published policy: these are the rules that users publish to segment the network and include
Allow, Alert, and Block rules, as well as Overrides.
Inventory updates: rules affecting an asset may change if a label is associated or removed from
an asset, or if the asset’s IP has changed.
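The derivation logic above can be summarized in a few lines: a rule is derived to the Input chain when the Agent's asset is covered by the rule's destination, to the Output chain when covered by the source, and to both when both match (as in the Any → Label example). A simplified sketch using label sets only; real matching also considers IPs, subnets, and processes:

```python
def derive_chains(rule_source, rule_destination, agent_labels):
    """Return the chains an Agent derives a rule to, based on label matching."""
    def matches(side):
        return "Any" in side or bool(side & agent_labels)
    chains = []
    if matches(rule_destination):
        chains.append("INPUT")    # Agent is a destination of the flow
    if matches(rule_source):
        chains.append("OUTPUT")   # Agent is a source of the flow
    return chains

# Any -> "Accounting": an Accounting asset derives the rule to both chains.
print(derive_chains({"Any"}, {"Accounting"}, {"Accounting"}))  # → ['INPUT', 'OUTPUT']
```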
Order  Group           INPUT chain                      OUTPUT chain
1      -               IMPLICIT-INPUT:                  IMPLICIT-OUTPUT:
                       Allow traffic from               Allow traffic to
                       Loopback, Multicast,             Loopback, Multicast,
                       Broadcast (by subnet mask),      Broadcast (by subnet mask),
                       Local IP                         Local IP,
                                                        Aggregator over TCP 443
3      Override Alert  OVERRIDE-ALERT (action: Allow)   OVERRIDE-ALERT (action: Allow)
5      Allow           ALLOW (action: Allow)            ALLOW (action: Allow)
6      -               -                                IMPLICIT-PRE-DEFAULT:
                                                        Allow traffic to
                                                        DNS server over UDP 53
7      Alert           DEFAULT-ALERT (action: Allow)    DEFAULT-ALERT (action: Allow)
INPUT
IMPLICIT-INPUT
implicit-local-rule: protocol: ALL, IPs: [127.0.0.0/255.0.0.0], subnets: [], ports: [],
applications: [] --> IPs: [127.0.0.0/255.0.0.0], subnets: [], ports: [], applications: [] = ALLOW
implicit-multicast-rule: protocol: ALL, IPs: [], subnets: [], ports: [], applications: [] --> IPs:
[224.0.0.0/240.0.0.0], subnets: [], ports: [], applications: [] = ALLOW
implicit-broadcast-rule: protocol: ALL, IPs: [], subnets: [], ports: [], applications: [] --> IPs:
[172.16.255.255], subnets: [], ports: [], applications: [] = ALLOW
implicit-local-ip-rule: protocol: ALL, IPs: [172.16.6.101], subnets: [], ports: [],
applications: [] --> IPs: [172.16.6.101], subnets: [], ports: [], applications: [] = ALLOW
DEFAULT-ALLOW
default: protocol: ALL, IPs: [], subnets: [], ports: [], applications: [] --> IPs: [], subnets: [],
ports: [], applications: [] = ALLOW
OUTPUT
IMPLICIT-OUTPUT
implicit-local-rule: protocol: ALL, IPs: [127.0.0.0/255.0.0.0], subnets: [], ports: [],
applications: [] --> IPs: [127.0.0.0/255.0.0.0], subnets: [], ports: [], applications: [] = ALLOW
implicit-multicast-rule: protocol: ALL, IPs: [], subnets: [], ports: [], applications: [] --> IPs:
[224.0.0.0/240.0.0.0], subnets: [], ports: [], applications: [] = ALLOW
implicit-broadcast-rule: protocol: ALL, IPs: [], subnets: [], ports: [], applications: [] --> IPs:
[172.16.255.255], subnets: [], ports: [], applications: [] = ALLOW
implicit-local-ip-rule: protocol: ALL, IPs: [172.16.6.101], subnets: [], ports: [],
applications: [] --> IPs: [172.16.6.101], subnets: [], ports: [], applications: [] = ALLOW
implicit-server-rule: protocol: ALL, IPs: [], subnets: [], ports: [], applications: [] --> IPs:
[172.16.8.1], subnets: [], ports: [443], applications: [] = ALLOW
IMPLICIT-PRE-DEFAULT
implicit-dns-rule: protocol: UDP, IPs: [], subnets: [], ports: [], applications: [] --> IPs:
[8.8.8.8], subnets: [], ports: [53], applications: [] = ALLOW
DEFAULT-ALLOW
INPUT
IMPLICIT-INPUT
...Default policy…
ALLOW
490cfe09-6c2c-4895-992c-a242e7b92be2 (ALLOW / Production Accounting):
protocol: ALL, IPs: [172.16.1.101,172.16.1.111,172.16.1.112,172.16.1.121,172.16.1.122],
subnets: [], ports: [], applications: [] --> IPs: [], subnets: [], ports: [], applications: [] = ALLOW
e5d41481-4d3b-4f8e-9e3a-0b303d8204fe (ALLOW / Production Accounting):
protocol: TCP, IPs: [], subnets:
[33.22.33.22/32,138.201.72.66/32,138.201.72.76/32,138.201.72.77/32,172.16.0.100/32,172.
16.0.254/32,172.16.1.1/32,172.16.8.1/32,172.16.100.101/32,172.16.100.102/32,172.16.100.
103/32,172.16.100.104/32,172.16.100.105/32,172.16.100.106/32,172.16.100.107/32,172.16
.100.108/32,172.16.100.109/32,172.16.100.110/32,172.16.100.111/32,172.16.100.112/32,1
72.16.100.113/32,172.16.100.114/32,172.16.100.115/32,172.16.100.116/32,192.168.0.1/32,
192.168.0.3/32,192.168.0.4/32], ports: [], applications: [] --> IPs: [], subnets: [], ports: [80],
applications: [/usr/sbin/nginx] = ALLOW
e7321b7c-d6ef-4175-8ecc-a971f9b3de5c (ALLOW / Production Accounting):
protocol: TCP, IPs: [172.16.1.40,192.168.0.100], subnets: [], ports: [], applications: [] --> IPs: [],
subnets: [], ports: [22], applications: [] = ALLOW
DEFAULT-ALERT
2e217c8-538f-47ef-b1ee-0dc3062d351f (ALERT / Production Accounting):
protocol: ALL, IPs: [], subnets: [], ports: [], applications: [] --> IPs: [], subnets: [], ports: [],
applications: [] = ALLOW
DEFAULT-ALLOW
...Default policy…
OUTPUT
Aggregator Administration
Aggregator Screen
The user interface for Aggregators is accessible from the Administration panel (Components,
Aggregators). The screen displays all of the Aggregators deployed in the system:
Column Description
Operation The current operation mode of the Aggregator: On, Off, or Monitor. The
operation modes refer to the functionality of the Aggregator.
On = the Aggregator’s functions are turned on.
Off = the Aggregator’s functions are turned off (i.e. it is not performing the
functions of communicating with Agents or relaying data to the
Management server).
Status This column displays information pertaining to the health of the Aggregator.
Guardicore periodically checks the status of Aggregators. The full list of the
status (health) of Aggregator services is displayed by hovering the mouse
cursor over the column. A plus sign next to an item in the list can be clicked
to display further items. The column also uses the following to indicate the
status of an Aggregator:
Up = All of the Aggregator’s services are functioning.
Partially Up = Some of the Aggregator’s services are functioning.
Down = The Aggregator is not functioning.
Error = Problem with some of the Aggregator’s services. Hovering over the
Error icon will display a list with the problematic services marked with an
Error icon.
Connecting = the Aggregator is trying to connect.
Initializing = the Aggregator services are initializing.
Stopped = the Aggregator was intentionally stopped. None of the services
are functioning.
Last Seen The time and date when the Aggregator was last visible.
First Seen The time and date when the Aggregator was first visible.
Option Explanation
Change Operation Mode: This refers to the operation of the Aggregator’s services.
On, Off, Monitor: see the previous section on the Aggregator screen for an
explanation of these options.
Get debug logs: Downloads a compressed tar.gz file that contains detailed
debug information in several files.
Machine Details | Include hardware UUID: Guardicore uses unique hardware IDs
to identify the machine on which an Aggregator is deployed. The Include
hardware UUID option can solve the following problem:
Aggregator | Cluster | cluster-id: Occasionally there is a need to change the ID
of the cluster of which the Aggregator/Collector is a part. This usually
accompanies some network reorganization or segmentation.
Datapath | General | TCP Service Ports: Typically, these ports are left untouched.
However, if there is a special need to define a port for redirection to the
Deception Server, it is done here.
Aggregator | Aggregator Features | Agents Load Balancer: Check if you want this
Aggregator to serve Agents in a load-balanced arrangement together with other
Aggregators in the cluster.
Aggregator | Aggregator Features | [Enforcement, Reveal, Detection, Deception]
Agents Server: Check the modules that you want this Aggregator to serve.
Agents Server
Aggregator CLI
Administrators can use CLI commands to access detailed information on Aggregators,
for example gc-upper-hatop (for communication pathways upward in the direction of
the Management server).
Collectors Screen
The user interface for Collectors is accessible from the Administration panel (Components,
Collectors). The screen displays all of the Collectors deployed in the system:
Column Description
Operation The current operation mode of the Collector: On, Off, or Monitor. The
operation modes refer to the functionality of the Collector.
On = the Collector is relaying data to the Management server.
Off = the Collector is not relaying data to the Management server.
Monitor = the Collector is gathering information, but is not rerouting
suspicious traffic to the Deception server.
Status This column displays information pertaining to the health of the Collector.
Guardicore periodically checks the status of Collectors. The full list of the
status (health) of Collector services is displayed by hovering the mouse over
the column. A plus sign next to an item in the list can be clicked to display
further items. The column also uses the following to indicate the status of
a Collector:
Cluster The cluster to which the Collector belongs. Collectors belong to a cluster,
where they form a ZooKeeper quorum with an elected leader.
Last Seen The time and date when the Collector was last visible.
First Seen The time and date when the Collector was first visible.
Option Explanation
Restart Reboot the Collector. This is an actual reboot which means that the
component begins functioning anew.
Get debug logs This downloads a compressed tar.gz file that contains detailed debug
information in several files.
The Override Configuration option enables you to specify important settings for Collectors.
Make sure to check Show Advanced Options for a full list. Some of the most important options
are listed in the following table.
Note: Because Aggregators and Collectors share the same OVA, the options use the term
Aggregator, even though the option is being applied to the Collector that you selected.
Datapath | General | TCP Service Ports Typically, these ports are left untouched.
However, if there is a special need to define a
port for redirection to the Deception Server, it’s
done here.
CentOS 5.2-5.11 ✔ ✔ ✘ ✔ (Polling mode; network level (L4) only)
CentOS 6.2-6.10, 7.0-7.6 ✔ ✔ ✔ ✔
CentOS 6.0-6.1 ✔ ✔ ✘ ✔
Amazon 2012+ ✔ ✘ ✔ ✔
Debian 7, 8, 9 ✔ ✔ ✔ ✔
SUSE 11 SP2-SP4 ✔ ✔ ✔ ✔
SUSE 12, 15 ✔ ✔ ✔ ✔
Process Purpose
gc-agents-service The main service of the Agent. Provides the local administration of the
Agent. Creates, monitors and restarts the other subcomponents of the
Agent.
gc-channel The communication channel for the Enforcement and Reveal modules.
An instance of the process runs for each module simultaneously.
gc-enforcement-agent The Enforcement module of the Agent. Gets the policy from the
Aggregator and loads it into the driver. Persistent storage is used to
store the latest received policy and configuration that is used after
machine restart until a new policy is received.
gc-detection The Detection module of the Agent. Used for File Integrity Monitoring
capabilities.
Solaris Version
10 (exc. U8, U9) (SPARC, x86_64) ✔ ✔ ✘ ✘ (Polling mode; network level only)
11.0-11.4 (SPARC, x86_64) ✔ ✔ ✘ ✘ (Polling mode; network level only)
Process Purpose
gc-agents-service The main service of the Agent. Provides the local administration of the
Agent. Creates, monitors and restarts the other subcomponents of the
Agent.
gc-channel The communication channel for the Enforcement and Reveal modules.
An instance of the process runs for each module simultaneously.
gc-enforcement-agent The Enforcement module of the Agent. Gets the policy from the
Aggregator and loads it into the driver. In versions v29 and up,
persistent storage is used to store the latest received policy and
configuration that is used after machine restart until a new policy is
received.
gc-guest-agent The Reveal module of the Agent. Responsible for reporting visibility
data collected by Agents and also the enforcement log (blocked and
allowed connections).
Global Zone
An Agent can run on a global zone with shared IP to provide L4 visibility and enforcement. In this
case, the global zone and all its non-global zones will be treated as a single entity ("Asset") in the
system. This is not true for an exclusive-ip global zone.
NOTE: Solaris 11.4 no longer uses IPF as its enforcement utility. Instead, it uses the PF firewall
which also deals with NAT. However, because Agents override all PF rules, this may cause
HP-UX Version
11.23, 11.31 (Itanium) ✔ ✘ ✘ ✘ (Polling mode; network level only)
AIX Version
Windows Version
2008 (32bit) ✔ ✔ ✘ ✔ (Polling mode; network level)
2008 (64bit) ✔ ✔ ✘ ✔ (Polling mode)
2008R2 (64bit) ✔ ✔ ✔ ✔
2012, 2012R2 (64bit) ✔ ✔ ✔ ✔
2019 (64bit) ✔ ✔ ✔ ✔
7 (64bit) ✔ ✔ ✔ ✔
8 (64bit) ✔ ✔ ✔ ✔
10 (64bit) ✔ ✔ ✔ ✔
Process Purpose
gc-agents-service.exe The main service of the Agent. Provides the local administration of the Agent.
Creates, monitors and restarts the other subcomponents of the Agent.
gc-channel.exe The communication channel for the Enforcement and Reveal modules. An
instance of the process runs for each module simultaneously.
gc-channel (Reveal) Communication channel to the Aggregator for reporting of visibility data as
well as reporting for the enforcement log. The Aggregator polls the Agent
through the channel for new data. The connection is encrypted and
authenticated on TCP/443, using TLS1.2.
gc-channel (Enforcement) Communication channel for applying policy updates. The connection is
encrypted and authenticated on TCP/443, using TLS1.2.
gc-enforcement-agent.exe The Enforcement module of the Agent. Gets the policy from the Aggregator
and loads it into the driver. In versions v29 and up, persistent storage is used
to store the latest received policy and configuration that is used after machine
restart until a new policy is received.
gc-guest-agent.exe Responsible for reporting the visibility data collected by the Agents and also
the enforcement log (blocked and allowed connections).
Loads either:
gc-windig which implements process-level visibility connection reporting by
registering to ETW providers (Windows).
–OR–
gc-digger which enables alternative visibility data collection based on netstat
polling.
gc-windig.exe The utility that integrates into the Windows ETW provider to collect network
flows information.
gc-detection.exe The Detection module of the Agent. Used for File Integrity Monitoring
capabilities.
gc-cert-client Standalone utility for automatic certificate enrollment and renewal using the
SCEP interface.
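Both gc-channel instances connect on TCP/443 with encrypted, authenticated TLS 1.2, as the table notes. In terms of Python's standard ssl module, a client context pinned to those properties looks roughly like this; the server host name in the comment is hypothetical:

```python
import ssl

# Client context: verify the server certificate and allow only TLS 1.2.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

# The SNI is supplied when wrapping the socket, e.g.:
#   ctx.wrap_socket(sock, server_hostname="enforcement.aggregator.local")
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True (PROTOCOL_TLS_CLIENT default)
```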
After installation, the Agent binaries require 50 MB of disk space. By default, an
additional 220 MB of disk space is required for log file storage.
The log rotation and retention configuration can be changed either:
During installation, by changing the log rotation profiles,
- OR -
This section provides an explanation of how Centra works with Windows Firewall and Linux
iptables.
Conflicts between WFP verdicts and Centra policy are resolved as follows:
Allow verdict: When making an Allow verdict, Guardicore Centra vetoes any conflicting verdict
from WFP. Therefore, an Allow verdict from Centra will not be inspected by Windows Firewall
and any Windows Firewall block rules will not apply. This includes the following cases:
• Some Allow rule in the policy is matched.
• No rule in the policy is matched (because there is an implicit default-allow rule).
Alert verdict: In terms of its effect on communication flow, this is an Allow verdict and is treated
the same way as Allow as explained above. Its function as an Alert within Centra is not affected
by the Windows Firewall.
Linux uses iptables together with Netfilter (NF) to enforce a policy. When there is a conflict
between the Centra verdict and the iptables verdict, Block takes precedence, regardless of
whether the Block verdict is from Centra or iptables.
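The two conflict-resolution behaviors can be condensed into a small truth-table sketch. The verdict names are simplified and this is illustrative only, not Centra's implementation:

```python
def windows_effective(centra_verdict, wfp_verdict):
    """On Windows, a Centra Allow or Alert verdict vetoes WFP entirely."""
    if centra_verdict in ("ALLOW", "ALERT"):
        return "ALLOW"            # WFP block rules do not apply
    return centra_verdict

def linux_effective(centra_verdict, iptables_verdict):
    """On Linux, Block wins no matter which side produced it."""
    if "BLOCK" in (centra_verdict, iptables_verdict):
        return "BLOCK"
    return centra_verdict

print(windows_effective("ALERT", "BLOCK"))  # → ALLOW (flow passes; the alert still fires in Centra)
print(linux_effective("ALLOW", "BLOCK"))    # → BLOCK
```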
Agent Administration
Agents Screen
As an administrator you use the Agents screen to monitor the health and functioning of Agents
and to perform any necessary operations. The Agents screen looks like this:
Column Description
Asset Status Whether the device on which the Agent is deployed is online or offline.
Labels The label assigned to the asset on which the Agent is deployed.
Flags Flags indicate problems of which the administrator should be aware. Hovering over
the notice in this column provides more details. The complete list of flags is listed in
the table below.
Kernel The version of the kernel on which the Agent is deployed. This affects particular
Agent modules that operate at the kernel level.
Last Seen The most recent time that the Agent was detected by the Management server.
First Seen The first time that the Agent was detected by the Management server.
Agent Flags
The following table lists the various flags that can be displayed in the Flags column on the Agents
screen:
Flag Description
No Reveal Reported The Agent didn’t report reveal data in the last hour.
Limited Policy The Agent’s policy was modified to match the Agent’s limited functionality. See
Older Agents Rule Limitations for more details.
Outdated Configuration The Agent’s configuration isn’t updated to the latest configuration. See Agent
Configuration for more details.
Partial Configuration Some configuration attributes are not supported by the Agent.
No Reveal Received The Aggregator didn’t report reveal data to Management in the last hour.
Reveal Offline The Reveal module is running, but there is no connectivity between the
module and the Aggregator.
Enforcement Offline The Enforcement module is running, but there is no connectivity between
the module and the Aggregator.
Memory Limit Reached The Agent/module memory consumption reached the predefined
threshold. See Resource Usage Management for more details.
For example, to view only Windows Agents, select Windows in the OS filter option to display the
following screen:
To save a filter for future use, use the Save filter button. To remove the filter, click the Discard
button.
On the Agents screen, select the Agent(s) that you want to delete.
Click the Remove from database button. The Agent is removed from the database, but as long
as its certificate is not revoked, it can still function and will attempt to reconnect to the
system and re-register. After a successful connection, the Agent reappears in the system with
the default Agent configuration.
To fully remove an Agent and prevent it from reconnecting to the system, the administrator must
uninstall it and optionally revoke its certificate. When an external Public Key Infrastructure (PKI) is
being used, the Agent certificate will be marked as “pending for revocation”. The system
administrator can then revoke the Agent’s certificate, which ensures that the Agent is fully removed
and cannot reconnect.
In Windows only: The Centra Administrator installs the Agent with the Lock enabled.
The Centra Administrator uses the Override Configuration option on the Agents screen in Centra
Administration to change an Agent’s state to Locked, as explained below.
Alternatively, if the Guardicore Agent Setup screen is used to install the Agent, under Specific
Module Configuration, select the Enable Administration Lock option:
In the left pane of the Agents Configuration dialog box, select Agent Controller and in the right
pane scroll down to Set admin lock state and select Locked/Unlocked:
NOTE: Set admin lock displays three states: Unlocked, Locked, and Unset. When Unlocked or
Locked is set, it takes precedence over any installation configuration of the Agent. When the
Agent is in the Unset state, the locked or unlocked state is determined by the configured
installation settings.
Run the following command with administrative privileges to lock the Agent:
c:\Program Files\Guardicore>gc-agents-service.exe --ctrl set-adminlock-state --args LOCKED
Use the following command with administrative privileges to unlock the Agent:
c:\Program Files\Guardicore>gc-agents-service.exe --ctrl set-adminlock-state --args
UNLOCKED:<enter_your_password>
Rule type Older Agent versions Latest Agent version
DNS rule reject policy works (Linux & Windows only)
Rule with a label group that has excluded labels ("NOT rules") reject policy works
* Note: labels and assets expand to IPs, which may cause limits to be exceeded (e.g. a label
containing 5,000 assets can cross the limit if each asset has three IPs).
• Unix Solaris 10 (exc U8 and U9) (SPARC x86_64), 11.0 and 11.3 (SPARC
x86_64), AIX 6.1, 7.1, 7.2
Behavior Consequences
Reject policy Agent will not get the latest policy; an "outdated policy" flag will be raised.
Ignore rule Policy will be derived to the agent, without the specific rule.
Ignore path Rule will be derived, ignoring the full path and using process name only.
Ignore process Rule will be derived as if the user did not write any process on the affected source/destination.
Rule Derivations
In addition to the rule derivations noted in the previous section, Centra V.31 performs the
following derivations for policy rules for Agents:
Internet Rules
Internet rules are converted to the complement of IANA's private networks list and exclude what
is configured as private under IP classification.
Example:
"match_multiple_subnets": [
"0.0.0.0/0",
"!10.0.0.0/8",
"!169.254.0.0/16",
"!172.16.0.0/12",
"!192.0.0.0/29",
"!192.0.0.170/31",
"!192.0.2.0/24",
"!192.168.0.0/16",
"!198.18.0.0/15",
"!198.51.100.0/24",
"!203.0.113.0/24",
"!240.0.0.0/4"
]
The private networks list can be updated through the UI.
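As an illustration of this derivation, the following sketch (not Centra code; the function name is an assumption, and the "!"-prefix exclusion semantics are taken from the example above) evaluates an address against such a subnet list using the standard ipaddress module:

```python
import ipaddress

SUBNETS = [
    "0.0.0.0/0", "!10.0.0.0/8", "!169.254.0.0/16", "!172.16.0.0/12",
    "!192.0.0.0/29", "!192.0.0.170/31", "!192.0.2.0/24", "!192.168.0.0/16",
    "!198.18.0.0/15", "!198.51.100.0/24", "!203.0.113.0/24", "!240.0.0.0/4",
]

def is_internet_address(ip, subnets=SUBNETS):
    # An address matches the Internet rule if it falls inside an included
    # subnet and outside every "!"-prefixed (excluded) subnet.
    addr = ipaddress.ip_address(ip)
    included = any(addr in ipaddress.ip_network(s)
                   for s in subnets if not s.startswith("!"))
    excluded = any(addr in ipaddress.ip_network(s[1:])
                   for s in subnets if s.startswith("!"))
    return included and not excluded
```

A public address such as 8.8.8.8 matches the rule, while a private address such as 10.1.2.3 does not.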
During the initialization of the modules, after the installation or after system reboot, a
“Validating” state is expected. This is an initial state that monitors the startup of the module and
validates its functionality. Validation should not take more than 60 seconds.
The modules should be in the “Running” state. If an issue prevents a module from running as it
should, its status changes to “Running with Errors”.
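The state transitions described above can be modeled roughly as follows. This is an illustrative sketch only: the state names come from the text, while the function and its inputs are assumptions:

```python
def module_state(seconds_since_start, validated, has_errors):
    # During roughly the first 60 seconds after startup, the module's
    # functionality is being validated.
    if not validated and seconds_since_start <= 60:
        return "Validating"
    # Once validated, the module runs; errors change the reported state.
    return "Running with Errors" if has_errors else "Running"
```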
Selecting “All”, “Incoming” or “Outgoing” filters between inbound and outbound rules.
Selecting Show Implicit Rules displays administrative rules that ensure the Agent doesn’t lose the
ability to communicate with the management system. See the section below for details.
Refresh button: pulls the most recent policy from the kernel module.
Export to CSV button: generates a CSV file with all the applied rules.
NOTE: The policy table will not appear if the Agent is not connected to an Aggregator.
Implicit rules
Implicit rules are administrative rules that make sure the Agent doesn’t lose the ability to
communicate with the management system. These rules cannot be modified and do not appear
as part of the Segmentation Policy in Centra UI.
The default implicit rules are the following:
Allow local host communication (127.0.0.1 and local interface IP addresses)
Allow multicast communication
Allow broadcast communication
Allow TCP port 443 communication to the Aggregator server
Allow outbound DNS
Suspending the Agent
If needed, you can temporarily suspend Agent operation. There are two options:
Stop for a specified time or until system restart: the Agent will resume its operation after the
specified time period or after system restart, whichever is earlier.
–OR–
Stop until system restart: The Agent will resume its operation after system restart.
To suspend the Agent:
Do either of the following:
On the Windows Agent Administration Main screen, click the Menu button and then select
Suspend Agent.
–OR–
Right-click the Guardicore Tray icon and select Suspend Agent.
The System Information box on the right of the screen displays information about the selected
Agent such as Operating System, Aggregator IP, etc.
To report an issue, on the left side of the screen, select one of the following delivery methods:
Send agent diagnostics to Guardicore - Centra automatically creates the package and attaches it
to a new support ticket in the Guardicore Support Portal.
Save agent diagnostics package locally - The package is saved locally as a file on the system. The
following information is collected:
Command Description
gc-agent start Start the Agent service with all the installed modules.
gc-agent stop Stop the Agent service with all the installed modules.
gc-agent system-status Get general info about the Agent deployment including the IP of the Aggregator, uptime, and more useful info for debugging.
gc-agent start-all Start all the Agent modules, assuming the Agent service is up.
gc-agent stop-all Stop all the Agent modules, without shutting down the Agent service.
gc-agent module-start <module name> Start a specific module, assuming the Agent service is up.
gc-agent module-stop <module name> Stop a specific module, assuming the Agent service is up.
gc-agent module-restart <module name> Restart a specific module, assuming the Agent service is up.
gc-agent module-status <module name> Get the status of the module, assuming the Agent service is up.
gc-agent collect-diagnostics Collect local diagnostics on the machine and create a report to be sent to the Guardicore support center.
The following additional commands are supported to control the Linux Agent’s Enforcement
module:
gc-agent dump-policy Print the current enforcement policies and revision set for the module.
Available Profiles
During Agent installation, you can choose one of three Log Rotation profiles, “min”,
“medium”, or “max”, for allocating storage space for Agent logs. The “medium” profile is the
default.
The type of profile determines the amount of debugging information that is collected and the
time span over which it is collected. The Min profile collects the least information,
while the Max profile collects the most. Thus, the choice of profile may affect troubleshooting.
The following table describes the log size configurations for each profile:
Module | Min | Medium | Max
Agents service | 10 3 13 | 10 5 15 | 50 8 90
Channel | 1 3 1.3 | 6 5 9 | 15 8 27
Controller | 1 3 1.3 | 10 5 15 | 50 8 90
Deception | 2 3 2.6 | 10 8 18 | 50 8 90
Detection | 2 3 2.6 | 6 5 9 | 15 8 27
Enforcement | 30 5 45 | 50 8 90 | 80 10 160
Dig | 10 3 13 | 10 8 18 | 50 8 90
Reveal | 5 3 6.5 | 20 3 26 | 50 8 90
Total | 90 MB | 220 MB | 700 MB
The following sections provide tables for Windows and Linux with hard and soft limits for CPU,
memory, and IO usage for the three pre-configured resource limitation packages. The packages
set resource usage limits for each of the Agent modules: Deception, Reveal, Reveal Channel,
Enforcement, Enforcement Channel, Detection, Controller, and Agent Service.
Low: less than 2 GB RAM; Medium: 2 GB <= RAM <= 32 GB; High: RAM > 32 GB
Key: L = Low, M = Medium, H = High. Values for CPU are %; values for Memory are MB.
CPU (all modules, including the Agent Service, in every package, L, M, and H):
Soft: 2%
Hard: 20%
Memory limits are set in MB and differ per module and package.
• CPU Limitations Support: Windows Server 2012, Windows 8 and newer (64-bit
architecture only)
All modules share the same limitations: Virtual memory Job (hard memory limit), Virtual
memory Process (soft memory limit), and CPU Rate %.
Instructions for using these packages are provided in the following section. There is also an
option for changing individual settings within a configuration package.
For advanced users only: To override specific resource limitation values, you can use the
following installation attributes:
Select Override configuration, then select Agent Controller to display the Agents Configuration
dialog box:
Scroll down to the Resource limitations values box for the module whose limits you want to
change, select the desired resource limitation package, and choose Save changes. For example:
Overview
All data saved by Management to Elasticsearch comes with a default retention policy. In this
case, retention actually means deletion – once the stored data reaches the retention point, it is
deleted without backup or recovery options.
Centra’s data deletion logic is executed using a celery task called archive_and_delete_old_data.
The task runs once an hour as part of the celery-long-run worker, and its logs can be found
under /var/log/guardicore/celery-long-run.log. For every data type stored in Elasticsearch, the
task lists all the existing indices. An index whose index timestamp is older than “today minus
retention_policy (in days)” is deleted.
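The deletion pass described above can be sketched as follows. This is an illustrative model, not the actual celery task, and the index-name timestamp format shown is an assumption:

```python
from datetime import datetime, timedelta

def indices_to_delete(indices, retention_days, today):
    """Return index names whose trailing timestamp is older than
    today minus retention_policy (in days)."""
    cutoff = today - timedelta(days=retention_days)
    doomed = []
    for name in indices:
        # e.g. "incidents__2023.05.01" -> "2023.05.01" (assumed naming format)
        stamp = datetime.strptime(name.rsplit("__", 1)[1], "%Y.%m.%d")
        if stamp < cutoff:
            doomed.append(name)
    return doomed
```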
Elastic indices | Data type | Affected screens and logic
incidents__* and incident_groups__* | Incidents and incident groups | This data is stored both in Mongo and in Elastic. There is currently no automatic retention handling for incidents and incident groups. To remove them from Elastic you can use the Elastic DELETE API (this might leave dead pointers here and there). Deleting from Mongo is not dealt with here.
saved_connection__* and saved_processes__* | Saved maps storage (connections and processes) | Delete the saved maps through the saved maps screen or API. The index names match the map ID, which you can use to determine which maps to delete to clear the space.
gc-mgmtctrl list_aggregators
This command lists all of the Aggregators and Collectors (recall that Aggregators and Collectors
are essentially the same) in the environment. Here is an example of the listing as compared with
the listing in the GUI:
Using the monicore-ctrl command can help you return a component to health, as in the following
case:
Suppose you wanted to check why the status of the ESX Collector listed in the previous example
was PARTIALLY UP. To do this, connect to the Collector via SSH, and issue the following
command:
monicore-ctrl status
The status of all of the Collector’s subcomponents is listed, and we now see the problem:
gc-mitigation has been forced down.
For example, you could restart all of the ESX Collector’s services by issuing the following
command:
monicore-ctrl restart all
You can use the monicore-ctrl status command again to check the status of all of the Collector’s
services.
Alternatively, you can use the Reboot option in the component’s GUI to restart all of a
component’s services:
Upgrading
Scheduled upgrades of Guardicore Centra and/or any of its components are performed regularly
by Guardicore Support. Contact Guardicore Support for details.
Functionality
The Guardicore Agent is designed to track all network connections of a protected server,
coupled with information on the processes involved in the connection. The Agent validates each
connection against a segmentation policy to allow / alert / block the connection. The connection
metadata and the applied action are reported to Guardicore Centra.
What it does
Guardicore’s Backup and Restore feature backs up Management into a tar.gz file that includes
the configuration for Agents, policies, etc. The full list of items that are backed up is included in
the section What is Backed Up below.
Backup Procedure
gc-backup-cli backup
Optional parameters
The backup may take a few minutes. The backup file will be saved to
/storage/disaster_recovery/backup
Logs for the backup and restore process are located in the following:
/var/log/guardicore/backup_control.log
Restore Procedure
Note: The control node must be able to access all infra nodes via SSH prior to running the
restore procedure. All infra nodes must be up and running.
1. cd /storage/disaster_recovery/restore
2. Make sure that the backup tar.gz file to be restored is in the folder:
/storage/disaster_recovery/restore
Optional parameters
Note: After restore, the status of agents/assets/orchestrations will be incorrect until reports
are received from the Aggregators. Restore is similar to the DR process, i.e., data for agents
will not be re-sent and the policy will not be updated. The policy will be the same as it was
when the system was backed up. To update the policy, a restart of the enforcement modules on
the Aggregators is required. See the next step.
4. After the restore, on the Aggregator, restart the enforcement modules via this command:
Notes
● The backup file resides on Management and can be kept indefinitely; it can be exported and
saved wherever the user desires.
Limitations
The following table specifies the items that are backed up by the Backup command.
Agent data Data about the Agents. Includes status, error flags, installation profile, and
current expected configuration.
Assets Data regarding assets in the system. Can be Agent or agentless assets.
Labels All label data, including dynamic criteria, label groups, label suggestions, etc.
User data All of the users in the system, user groups, user directories, user permission
schemes, etc.
Dashboard
A package can be selected during installation, or modified from local or central configuration.
There is also an option for changing individual settings within a configuration package.
The limits set resource usage limits for each of the Agent modules. All modules share the same
limitation values:
Hard limits restrict resource usage to an absolute ceiling that cannot be exceeded.
Soft limits, on the other hand, restrict usage for the current process, but may be exceeded in
situations when the resource is not requested by concurrent processes.
The CPU Rate limit in all three pre-configured resource usage limitation packages is the
same: Soft 2%, Hard 20%.
Memory
When installing Agents for Linux without any explicit configuration settings, Guardicore employs
an auto-detection logic for determining the optimal package as follows:
Low profile: less than 2GB RAM is detected
Medium profile : between 2GB to 32GB RAM is detected
High profile: more than 32GB RAM is detected
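The auto-detection logic above amounts to a simple threshold check. Here is a minimal sketch; the function name is illustrative and not part of the installer:

```python
def detect_resource_package(ram_gb):
    # Thresholds taken from the auto-detection logic described above.
    if ram_gb < 2:
        return "low"
    if ram_gb <= 32:
        return "medium"
    return "high"
```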
The Memory limits in the pre-configured resource usage limitation packages are as follows
(values in MB):
Reveal module | Soft | Hard
Low | 10+cores*20 | 250+cores*32
Medium | (10+cores*20)*2 | (250+cores*32)*2
High | (10+cores*20)*4 | (250+cores*32)*4
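These formulas are easy to evaluate, assuming "cores" means the machine's CPU core count and the three rows correspond to the Low, Medium, and High packages (a sketch, not product code):

```python
def reveal_memory_limits_mb(cores, package):
    """Compute the Reveal module's soft/hard memory limits (MB) from the formulas above."""
    factor = {"low": 1, "medium": 2, "high": 4}[package]
    soft = (10 + cores * 20) * factor
    hard = (250 + cores * 32) * factor
    return soft, hard
```

For example, a 4-core machine on the Medium package would get a soft limit of 180 MB and a hard limit of 756 MB.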
Storage
Module | Min | Medium | Max
Agents service | 10 3 13 | 10 5 15 | 50 8 27
Controller | 1 3 1.3 | 10 5 15 | 50 8 90
Deception | 2 3 2.6 | 10 8 18 | 50 8 27
Enforcement | 30 5 45 | 50 8 90 | 80 8 90
Reveal | 5 3 6.5 | 20 5 26 | 50 8 90
The kernel modules create the device /dev/endr in order to communicate with the user-mode
part of the Agent modules.
If the kernel module is missing, the Agent defaults to “polling mode”, where event collection
is done in user space and enforcement is not supported. An appropriate flag is raised in
Management so the user can identify the problem and solve it with Guardicore support.
[1] The frequency of checking for new KOs is ~1 hour.
The following commands are supported to control the Agent in a Linux environment:
Command Description
gc-agent start Start the Agent service with all the installed modules.
gc-agent stop Stop the Agent service with all the installed modules.
gc-agent system-status Get general info about the Agent deployment including the IP of the Aggregator, uptime, and more useful info for debugging.
gc-agent start-all Start all the Agent modules, assuming the Agent service is up.
gc-agent stop-all Stop all the Agent modules, without shutting down the Agent service.
gc-agent module-start <module name> Start a specific module, assuming the Agent service is up.
gc-agent module-stop <module name> Stop a specific module, assuming the Agent service is up.
gc-agent module-restart <module name> Restart a specific module, assuming the Agent service is up.
gc-agent module-status <module name> Get the status of the module, assuming the Agent service is up.
gc-agent collect-diagnostics Collect local diagnostics on the machine and create a report to be sent to the Guardicore support center.
In addition, the following commands are supported to control the Enforcement module:
gc-agent dump-policy Print the current enforcement policies and revision set for the module.
Agents Uninstall
The following command needs to be run as root:
gc-agent uninstall
Directory Description
It is possible to modify the binaries and configuration paths - see customization options.
KO Cloud
What is KO Cloud?
KO Cloud is a hosted environment that contains the .ko object of the gc_enforcement kernel
module for all supported Linux distributions and all existing, supported kernel versions.
Management polls the KO Cloud for new .ko modules and updates its internal cache.
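The polling-and-cache behavior can be pictured with a small sketch. This is illustrative only; the data shapes and function name are assumptions, not Centra internals:

```python
def refresh_ko_cache(cache, cloud_listing):
    """Merge newly published kernel objects into Management's local cache.
    Keys identify a (distribution, kernel version) pair; values are .ko names."""
    new_kos = {k: v for k, v in cloud_listing.items() if k not in cache}
    cache.update(new_kos)
    return new_kos  # the modules fetched in this polling round
```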
Although the KO Cloud feature was designed to facilitate the functioning of Agents in their
specific OS environment, some customers may require more control over the downloading of
Kernel Objects. To enable this, Centra now allows manually enabling or disabling the automatic
downloading of Kernel Objects from the KO Cloud per Agent.
Users who choose to disable the automatic KO download for an Agent can then manually
determine whether to download a KO for the Agent. Since an Agent requests a KO every 45
seconds, the user can simply select the Agent, click the More button, and under Control
select Enable automatic download of kernel module when available.
To enable or disable automatic download of KO for an Agent perform the following steps:
1. On the Agents screen, select the Agent that you want to configure.
If the matching .ko file is not found on Management or on the KO Cloud (a rare
situation typically associated with custom-built kernels that are not available in public
repositories), the Agent moves to “polling mode” and provides a limited Reveal service. A
flag is raised in Management so the issue can be detected by the administrator and handled
with Guardicore support. When the supported KOs are added locally to Management and/or to
the KO Cloud (and through it to Management and Aggregators), the wrapper automatically
detects and installs the added .ko files, and the Agent returns to normal operation without
any need for operator involvement.
In case the KO Cloud is inaccessible, for instance due to internal customer policy:
• Each system patch done as part of routine maintenance also refreshes the KO repository
stored on Management, replacing it with the complete repository supported by
Guardicore at that time.
• A new KO repository can be acquired from Guardicore support and a short, simple
manual procedure is executed to upload it into Management. This is commonly used by
customers to resolve specific events of missing KOs.
Either cluster can be active or backup, but only one can be active at any given time.
If the primary cluster fails, you can initiate a failover on the standby cluster to continue system
operations there. When the primary cluster becomes available again, it returns to the active
role and the standby cluster goes back to being the backup cluster.
Centra ensures there is an ongoing sync between the two clusters. For example, all segmentation
rules and labels written to the primary cluster are replicated to the backup cluster, and the other
way around.
What's synced
● Configuration (information)
● Inventory (list of assets, aggregators, etc.)
● Segmentation policy
● Reveal data
● Incidents data
Centra DR Scheme (diagram): the configuration DB is auto-replicated into the Standby
Management.
Instructions for Configuring the System for Disaster Recovery
Before you can initiate a failover, you must first configure the system so that it is capable of
switching between a primary management cluster and a secondary management cluster.
Install two different management clusters. These are referred to as Primary Management
Master/Cluster and Standby Management Master/Cluster.
Allow SSH communication between the management masters of the two clusters (e.g., by running
ssh-copy-id <standby-IP>).
Sync the certificates between the primary management master and the standby management
master:
Add the following at the end of the file /etc/guardicore/hosts on the primary:
...
[peer_master]
<standby_master_ip>
Then run the following on the primary to synchronize the certificates:
gc-dr-cli sync-standby-certs
This copies all certificates from the primary management master to the standby one.
Notes:
components-standby-ip and components-primary-ip should be different from standby-ip and
primary-ip when the component-facing subnet differs from the inter-management subnet.
sleep-interval-seconds determines the interval between configuration collections fetched by
the standby in order to keep the clusters synchronized.
Enable the primary management cluster by running the following:
gc-dr-cli enable
Notes:
sleep-interval-seconds determines the interval between each time the standby management
cluster attempts to fetch the new backup configuration.
Initiating Failover
In case the primary cluster fails for any reason, the administrator can initiate the failover. This
will cause the standby management cluster to take over as the primary management cluster. All
management operations will then be available on the new active cluster.
To initiate the failover, perform the following:
1. Run the following on the standby management master: gc-dr-cli failover
The process takes around 10 minutes, including the shifting of the components to the
standby management master which now acts as the primary management master.
To initiate the failback and return the system to the primary cluster:
1. Run the following: gc-dr-cli generate-config
This triggers unscheduled configuration collection and archiving to speed up the failback
process.
2. Initiate fetch and load configuration from the designated standby (current "active"
management). Run the following on the primary management master:
gc-dr-cli pull-and-load-config
The designated primary management master will now pull the file created on the standby
management master and load it. The designated primary management master is now
ready for use.
3. Return the standby management master to its original standby role by running the
following on the standby management master:
gc-dr-cli standby
The designated standby is stopped, and the primary management cluster becomes the
active one. This returns us to the original state: the designated master cluster is the
"active" cluster, while the designated standby cluster is the "backup" one.
Circuit Breaker
A rule may cause Centra to be flooded with many incidents, placing too much load on the DBs
and the system in general. To avoid this, Centra implements a circuit breaker that stops
creating new incidents once a threshold is exceeded: the system drops all new incidents
beyond the defined limit.
Note: the circuit breaker is only relevant for new incidents, not rules. The System log may issue a
warning about a rule, but this is meant to indicate to the administrator which rule might be
causing the incident "flood".
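The throttling described above can be sketched as a sliding-window circuit breaker. This is an illustrative model only; Centra's actual limit definition and implementation are not documented here:

```python
import time
from collections import deque

class IncidentCircuitBreaker:
    """Drop new incidents once a per-window threshold is exceeded (sketch)."""

    def __init__(self, max_incidents, window_seconds):
        self.max_incidents = max_incidents
        self.window = window_seconds
        self.timestamps = deque()  # creation times of recently accepted incidents

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Forget incidents that fell out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_incidents:
            return False  # circuit open: drop the new incident
        self.timestamps.append(now)
        return True
```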
• the standby cluster that acts as backup in case the primary cluster fails.
Either cluster can be active or backup, but only one can be active at any given time. If
the primary cluster fails, you can initiate a failover on the standby cluster to continue system
operations there. When the primary cluster becomes available again, it returns to the active
role and the standby cluster goes back to being the backup cluster.
Centra ensures there is an ongoing sync between the two clusters. For example, all segmentation
rules and labels written to the primary cluster are replicated to the backup cluster, and the other
way around.
What's synced
• Configuration (information)
• Segmentation policy
• Incidents data
2. Allow SSH communication between the management masters of the two clusters (e.g., by
running ssh-copy-id <standby-IP>).
3. Sync the certificates between the primary management master and the standby
management master:
Add the following at the end of the file /etc/guardicore/hosts on the primary:
...
[peer_master]
<standby_master_ip>
4. To synchronize the certificates, run the following on the primary management cluster:
gc-dr-cli sync-standby-certs
This copies all certificates from the primary management master to the standby one.
Notes:
components-standby-ip and components-primary-ip should be different from standby-ip and
primary-ip when the component-facing subnet differs from the inter-management subnet.
sleep-interval-seconds determines the interval between configuration collections fetched by
the standby in order to keep the clusters synchronized.
6. Enable the primary management cluster by running the following on the primary:
gc-dr-cli enable
Note:
sleep-interval-seconds determines the interval between each time the standby management
cluster attempts to fetch the new backup configuration.
8. Enable the standby management cluster by running the following on the standby:
gc-dr-cli enable
Initiating Failover
In case the primary cluster fails for any reason, the administrator can initiate the failover. This
will cause the standby management cluster to take over as the primary management cluster. All
management operations will then be available on the new active cluster.
The process takes around 10 minutes, including the shifting of the components to the standby
management master which now acts as the primary management master.
1. To initiate the failback and return the system to the primary cluster, run the following on
the standby:
gc-dr-cli generate-config
This triggers unscheduled configuration collection and archiving to speed up the failback
process.
2. Initiate fetch and load configuration from the designated standby (current "active"
management). Run the following on the primary management master:
gc-dr-cli pull-and-load-config
The designated primary management master will now pull the file created on the standby
management master and load it. The designated primary management master is now ready for
use.
Return the standby management master to its original standby role by running the following on
the standby management master:
gc-dr-cli standby
The designated standby is stopped, and the primary management cluster becomes the active
one. This returns us to the original state: the designated master cluster is the "active" cluster,
while the designated standby cluster is the "backup" one.
Failback Steps
ssh-copy-id <standby-IP>
ssh-copy-id <primary-IP>
1. On the primary, in the file /etc/guardicore/hosts, at the bottom of the file, add the
following:
...
[peer_master]
<standby_control_node_ip>
Note: You do not need to do this on the standby as it should already be present.
3. Copy all of the subdirs from the standby under /var/lib/guardicore/storage/certs/tls to the
new primary control node (this must be done manually, as there is currently no script for
this):
- "aggregator"
- "disaster_recovery"
- "gcca"
- "mesos_master"
- "mitigation_ca"
- "mongodbclient"
- "mongodbserver"
- "mitigation_cas_chain.pem"
- "rabbitmq"
- "rabbitmqserver"
- "remote_ssl_proxy"
- "remote_ssl_proxy_server"
4. Run: gc-dr-cli propagate-certificates
5. Restart dr-ssl-proxy.
6. On the new primary, run the following to load the new certificates:
gc-dr-cli enable
gc-dr-cli pull-and-load-config
gc-dr-cli standby
• Network logs
• Reveal Explore map
• Saved maps
• Audit/system logs
• Labels log
How it works
Backup and Restore is accomplished by running the following scripts from the command line:
• Backup Script
• Restore Script
The Administrator can edit the script to determine the repository for the backup.
Backup Script
Restore Script
Red Hat
4.*, 5.0-5.1 (64bit): ✔ (Polling mode), ✘, ✘, ✘, ✘
5.2-5.11 (64bit): ✔, ✔ (Network level, L4 only), ✘, ✔, ✘
6-8 (64bit): ✔, ✔, ✔, ✔, ✔
5.2 (32bit): ✔ (Polling mode), ✔ (Network level, L4 only), ✘, ✘, ✘
CentOS
5.2-5.11 (64bit): ✔, ✔ (Network level, L4 only), ✘, ✘, ✘
6-8 (64bit): ✔, ✔, ✔, ✔, ✔
Oracle Linux[1]
5.2-5.11 (64bit): ✔, ✔ (Network level, L4 only), ✘, ✔, ✘
6-8 (64bit): ✔, ✔, ✔, ✔, ✔
Ubuntu
12.04-20.04 LTS (64bit): ✔, ✔, ✔, ✔, ✔
SUSE
11 SP0-SP1 (64bit): ✔ (Polling mode), ✘, ✘, ✔, ✘
11 SP2-SP4 (64bit): ✔, ✔, ✔, ✔, ✔
12, 15 (64bit): ✔, ✔, ✔, ✔, ✔
Microsoft Windows Server
2000 SP4 (32bit): ✔ (Polling mode), ✘, ✘, ✘, ✘
2003 SP2 (32bit, 64bit): ✔, ✔ (Network level, L4 only), ✘, ✔, ✘
2008 (32bit): ✔, ✔, ✘, ✔, ✘
2008R2: ✔, ✔, ✔, ✔, ✘
2019: ✔, ✔, ✔, ✔, ✔
Microsoft Windows
XP SP3 (32bit, 64bit): ✔, ✔ (Network level, L4 only), ✘, ✔, ✘
7, 8, 8.1, 10 (32bit): ✔, ✔, ✔, ✔, ✘
7 SP1 (64bit): ✔, ✔, ✔, ✔, ✘
8, 8.1, 10 (64bit): ✔, ✔, ✔, ✔, ✔
HP-UX[2]
11.23: ✔ (Polling mode), ✘, ✘, ✘, ✘
11.31: ✔ (Polling mode), ✔ (Network level, L4 only), ✘, ✘, ✘
UNIX Solaris[3]
10 U8-U9 (x86_64): ✔ (Polling mode), ✔ (Network level, L4 only), ✘, ✘, ✘
10 U10+ (x86_64, SPARC): ✔ (Polling mode), ✔ (Network level, L4 only), ✘, ✘, ✘
11.0-11.4 (SPARC, x86_64): ✔ (Polling mode), ✔ (Network level, L4 only), ✘, ✘, ✘
AIX
6.1, 7.1, 7.2: ✔ (Polling mode), ✔ (Network level, L4 only), ✘, ✘, ✘