Implementation Guide for SAP Enterprise Threat Detection

Generated on: 2023-06-26 13:58:35 GMT+0000

SAP Enterprise Threat Detection | 2.0 SP06 (Support Package Stack 32)

CONFIDENTIAL

Original content: https://help.sap.com/docs/SAP_ENTERPRISE_THREAT_DETECTION/eb42e48f5e9c4c9ab58a7ad73ff3bc66?locale=en-US&state=PRODUCTION&version=2.6.1.0

Warning

This document has been generated from the SAP Help Portal and is an incomplete version of the official SAP product
documentation. The information included in custom documentation may not reflect the arrangement of topics in the SAP Help
Portal, and may be missing important aspects and/or correlations to other topics. For this reason, it is not for productive use.

For more information, please visit https://help.sap.com/docs/disclaimer.


Installing SAP Enterprise Threat Detection


After planning for the installation, install the SAP Enterprise Threat Detection software component on SAP HANA.

Context
The following is an overview of the installation procedure. For more information, see the sections that follow.

 Note
If you want to upgrade from an older release, please follow the Upgrade Guide for SAP Enterprise Threat Detection.

Procedure
1. Plan your installation.

In this phase of the installation, make sure that your hardware and landscape meet the requirements of the system.

For more information, see Planning Your Installation.

 Note
If you want to upgrade from an older release, please follow the Upgrade Guide for SAP Enterprise Threat Detection at
http://help.sap.com/sapetd.

2. Install SAP HANA database and client.

3. Install Kafka.

For more information, see Installing Kafka.

4. Install the delivery unit for SAP Enterprise Threat Detection on SAP HANA Database.

Download SAP Enterprise Threat Detection from the Software Download Center and install the delivery unit on the host
SAP HANA platform.

For more information, see Installing SAP Enterprise Threat Detection on SAP HANA.

5. Install SAP Enterprise Threat Detection Streaming.

a. Check out the content from the SAP Enterprise Threat Detection delivery unit installed on SAP HANA.

b. Perform the general preparation steps.

c. Perform the application-specific installation steps.

For more information, see Installing SAP Enterprise Threat Detection Streaming.

Planning Your Installation


Carefully review the system requirements for your landscape. Ensure that you have adequate licensing for your installation.


System Requirements
Before installation, familiarize yourself with the requirements and recommendations for installing the software components of
SAP Enterprise Threat Detection.

For the current release note and other SAP Notes about SAP Enterprise Threat Detection, go to https://support.sap.com
and check the entries for the component BC-SEC-ETD.

For more information about compatibility between software components, see SAP Note 2137018 .

For more information about our recommendations for sizing host systems and for an easy-to-use tool for calculating your sizing
requirements, see the Sizing Guide for SAP Enterprise Threat Detection.

To use SAP Enterprise Threat Detection Streaming, the following requirements need to be fulfilled:

You need one of the following operating systems with the mentioned version:

SuSE Linux Enterprise Server 11 or higher

RedHat Enterprise Linux 7.9 or higher

Ubuntu Server 18.04 or higher

You need the following Java version: Java 11 (OpenJDK or sapmachine)

SAP HANA Platform

 Note
SAP is strongly committed to supporting all of its customers by shipping regular corrections and updates for the SAP HANA
platform and all of its components. With the availability of SAP HANA revisions, SAP HANA maintenance revisions, and the
SAP HANA datacenter service points, SAP provides several options to maintain or upgrade to a new release of SAP HANA.

For more information, see SAP Note 2021789.

Multi-tenant Database Support


SAP Enterprise Threat Detection provides support for multi-tenant databases from version 1.0 SP07 onward.

Web Browser Support


SAP Enterprise Threat Detection supports the latest version of the following browsers:

Google Chrome

Mozilla Firefox

Microsoft Edge (Chromium)


Licensing
SAP Enterprise Threat Detection does not require a license key, but you need a license key for SAP HANA where SAP Enterprise
Threat Detection runs. Install a permanent SAP license. When you install your SAP system, a temporary license is automatically
installed.

 Caution
Before the temporary SAP HANA license expires, apply for a permanent license key. We recommend that you apply for a
permanent SAP HANA license key as soon as possible after installing your system.

For more information about SAP license keys and how to obtain them, see Request Keys on the SAP Support Portal at
https://support.sap.com .

For more information, see https://support.sap.com/licensekey and Managing SAP HANA Licenses in the SAP HANA
Administration Guide for SAP HANA Platform.

License Measurement
All non-technical users found in logs within the last 90 days are considered as monitored users and counted for licensing. The
user measurement takes place on SAP HANA.

This number of users is stored in metric H082. The metric is filled with results when the SAP HANA
job sap.secmon.framework.usagemeasurement::usageMeasurement is activated. This metric is evaluated by Solution
Manager for License Measurement.

For more information about the metric details, see the engine measurement information for SAP Enterprise Threat Detection
under Engine Measurement On-Premise on SAP Support Portal.

Installing SAP HANA


Installing SAP HANA for SAP Enterprise Threat Detection.

Context

Procedure
1. Install a multi-tenant SAP HANA platform edition with SAP HANA Database.

2. For more information, see the documentation of SAP HANA on SAP Help Portal, for example the Master Guide for SAP
HANA.

Installing Kafka

Context

You can install a Kafka cluster in two different ways:

As a non-high-availability, non-secured cluster consisting of one Kafka broker and one ZooKeeper node only

As a high-availability, secured cluster using TLS with basic authentication and consisting of two Kafka brokers and three
ZooKeeper nodes

You can also combine both configurations and install a non-high-availability, secured cluster as well as a high-availability, non-
secured one.

For information about the supported Kafka versions, see SAP Note 2137018 .

Related Information
Non-high-availability, Non-secured Kafka Installation
High-availability, Secured Kafka Installation

Non-high-availability, Non-secured Kafka Installation

Prerequisites
Java Runtime Environment 8 or 11 installed

Context
In this setup, you install one Kafka broker and one ZooKeeper node on the same host.

Procedure
1. Choose the directory where you want to install Kafka.

There should be enough disk space to store logs. For some proof-of-concept installations, as little as 10 GB of disk space
might be enough. The space required depends on the volume of logs and the retention time. As a general rule, it's best
to have 100 GB or more. Please refer to the SAP Enterprise Threat Detection Sizing Guide to determine the required
disk size for your installation.

Let's suppose you have enough disk space on the “root” volume, so we'll install Kafka there.

2. Download the latest Kafka version which is compatible with your SAP Enterprise Threat Detection release from the
official Apache Kafka website at https://kafka.apache.org/downloads (for compatibility information refer to SAP Note
2137018 ).

The examples mentioned below are valid for version 2.8.1.

a. Log on to Secure Shell (SSH).

b. Go to the root directory:

cd /

c. Download the archive:

wget http://ftp.fau.de/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz

3. Extract the archive and rename the Kafka directory:

a. Extract the archive:

tar -zxf kafka_2.12-2.8.1.tgz

b. Rename the extracted directory:

mv kafka_2.12-2.8.1 kafka

Kafka is now installed in the /kafka directory.

4. Change the ZooKeeper and Kafka configuration.

a. In /kafka/config/zookeeper.properties, change the dataDir parameter:

dataDir=/kafka/zookeeper

b. In /kafka/config/server.properties, change the listeners, advertised.listeners, log.dirs,
and log.retention.hours parameters. For advertised.listeners, use your server's hostname instead
of kafka.example.com:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka.example.com:9092
log.dirs=/kafka/kafka-logs
log.retention.hours=24

 Note
Adjust the log.retention.hours value according to your sizing.

Kafka and ZooKeeper have now been installed and configured, and you can start them.

5. (Optional) Make the system more secure and use a dedicated user to run ZooKeeper and Kafka. Also configure systemd
services to automate ZooKeeper and Kafka startup and make managing services simpler.

a. Add the system user “kafka” and the group “kafka”, and set permissions for the user and group to the /kafka
directory:

groupadd -r kafka
useradd -r kafka -g kafka
chown -R kafka:kafka /kafka

b. Create the file /etc/systemd/system/zookeeper.service with the following content:

[Unit]
Description=zookeeper
After=syslog.target network.target

[Service]
Type=simple
SyslogIdentifier = zookeeper

User=kafka
Group=kafka
Restart=always
ExecStart=/kafka/bin/zookeeper-server-start.sh /kafka/config/zookeeper.properties
ExecStop=/kafka/bin/zookeeper-server-stop.sh

[Install]
WantedBy=multi-user.target

c. Create the file /etc/systemd/system/kafka.service with the following content:

[Unit]
Description=Apache Kafka
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
SyslogIdentifier = kafka

User=kafka
Group=kafka
LimitNOFILE=100000
Environment="KAFKA_HEAP_OPTS=-Xmx4G -Xms1G"
Restart=always
ExecStart=/kafka/bin/kafka-server-start.sh /kafka/config/server.properties
ExecStop=/kafka/bin/kafka-server-stop.sh

[Install]
WantedBy=multi-user.target

d. Reload systemd services:

systemctl daemon-reload

e. Enable autostart of the ZooKeeper service with system boot up and start this service immediately:

systemctl start zookeeper && systemctl enable zookeeper

f. Enable autostart of the Kafka service with system boot up and start this service immediately:

systemctl start kafka && systemctl enable kafka

g. Now you can start, stop, and restart Kafka and ZooKeeper using systemctl:

systemctl start zookeeper


systemctl start kafka
systemctl stop kafka
systemctl stop zookeeper
systemctl restart kafka
systemctl restart zookeeper
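
You can now run a quick smoke test with the Kafka command-line tools to confirm that the broker and ZooKeeper work together. This is a minimal sketch: the topic name smoketest is just an example, and it assumes the default PLAINTEXT listener on port 9092 configured above.

# Create a test topic, list all topics, then clean up again
/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic smoketest --partitions 1 --replication-factor 1
/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic smoketest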

High-availability, Secured Kafka Installation

Prerequisites
Java Runtime Environment 8 or 11 installed (for all hosts).

Context
For this type of installation, you need five hosts (servers or virtual machines) to achieve high availability – two of the hosts are
for Kafka brokers, and the remaining three are for ZooKeeper.

Installations of this type support username-based and password-based authentication between consumers/producers and
Kafka brokers and also include configured TLS for secured data transfer between them.

Example configuration (The following are merely examples. In your case, hostnames and IP addresses may differ.)

Three ZooKeeper hosts with the following hostnames and IP addresses:


zk1.example.com 192.168.0.1

zk2.example.com 192.168.0.2

zk3.example.com 192.168.0.3

Two Kafka hosts with the following hostnames and IP addresses:

kafka1.example.com 192.168.0.4

kafka2.example.com 192.168.0.5

 Note
Kafka consumers with the same consumer group share the same data. Kafka consumers with different consumer groups
read the entire data set, that is, the data is copied. For more information about Kafka consumers, see the introduction at
https://kafka.apache.org/intro#intro_consumers

Procedure
Configure all ZooKeeper hosts the same way.

1. Choose the directory where you want to install Kafka.

2. Download the latest Kafka version which is compatible with your SAP Enterprise Threat Detection release from the
official Apache Kafka website at https://kafka.apache.org/downloads (for compatibility information refer to SAP Note
2137018 ).

The examples mentioned below are valid for version 2.8.1.

a. Log on to Secure Shell (SSH).

b. Go to the root directory:

cd /

c. Download the archive:

wget http://ftp.fau.de/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz

3. Extract the archive and rename the Kafka directory:

a. Extract the archive:

tar -zxf kafka_2.12-2.8.1.tgz

b. Rename the extracted directory:

mv kafka_2.12-2.8.1 kafka

Kafka is now installed in the /kafka directory.

4. Change the ZooKeeper configuration in /kafka/config/zookeeper.properties:

dataDir=/kafka/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=5
syncLimit=2
server.0=zk1.example.com:2888:3888
server.1=zk2.example.com:2888:3888
server.2=zk3.example.com:2888:3888


5. Create a /kafka/zookeeper/myid file for each ZooKeeper host.

The myid file identifies the server that corresponds to the given data directory.

Execute the command below to create a file with the id for zk1.example.com:

echo "1" > /kafka/zookeeper/myid

Command for zk2.example.com:

echo "2" > /kafka/zookeeper/myid

Command for zk3.example.com:

echo "3" > /kafka/zookeeper/myid

6. Make the system more secure and use a dedicated user to run ZooKeeper. Configure systemd services to automate
ZooKeeper startup and make managing ZooKeeper services simpler.

a. Add the system user “kafka” and the group “kafka”, and set permissions for user and group to the /kafka
directory:

groupadd -r kafka
useradd -r kafka -g kafka
chown -R kafka:kafka /kafka

b. Create the file /etc/systemd/system/zookeeper.service with the following content:

[Unit]
Description=zookeeper
After=syslog.target network.target

[Service]
Type=simple
SyslogIdentifier = zookeeper

User=kafka
Group=kafka
Restart=always
ExecStart=/kafka/bin/zookeeper-server-start.sh /kafka/config/zookeeper.properties
ExecStop=/kafka/bin/zookeeper-server-stop.sh

[Install]
WantedBy=multi-user.target

c. Reload systemd services:

systemctl daemon-reload

d. Enable ZooKeeper service autostart with system boot up and start this service immediately:

systemctl start zookeeper && systemctl enable zookeeper
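
To verify that the ensemble has formed, you can query each node. The sketch below assumes the netcat (nc) utility is available; the four-letter command srvr is whitelisted by default in the ZooKeeper versions bundled with recent Kafka releases and reports each node's role. Expect one leader and two followers.

# Print the role (Mode) of every ZooKeeper node
for zk in zk1.example.com zk2.example.com zk3.example.com; do
  echo -n "$zk: "
  echo srvr | nc "$zk" 2181 | grep Mode
done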

Configure all Kafka hosts.

7. Choose the directory where you want to install Kafka. There should be enough disk space to store logs.

Let's suppose you have enough disk space on the “root” directory, so we’ll install Kafka there.

8. Download the archive with the latest Kafka version from the official Apache Kafka website:

a. Log on to SSH.

b. Go to the root directory:

cd /

c. Download the archive:

wget http://ftp.fau.de/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz

9. Extract the archive and rename the Kafka directory:

a. Extract the archive tar -zxf kafka_2.12-2.8.1.tgz

b. Rename the extracted directory mv kafka_2.12-2.8.1 kafka

Kafka is now installed in the /kafka directory.

10. Configure SASL authentication.

a. Create the file /kafka/config/kafka_server_jaas.conf with the following content:

KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafkaadmin"
password="kafkaadmin_password"
user_kafkaadmin="kafkaadmin_password"
user_kafkadata="kafkadata_password";
};

11. Create SSL keys and certificates.

For a production environment, it is usually best to use a certificate that is signed by your internal certificate authority
(CA). Please contact your security team for details.

For a non-production environment, you create a self-signed certificate as follows:

a. Generate the SSL key and certificate for each Kafka broker (ensure that the common name (CN) matches the
Kafka broker’s hostname):

keytool -keystore keystore -alias localhost -validity 3650 -genkey -keyalg RSA -keysize 2048

b. Create your own CA:

openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650

c. Add the generated CA to the clients' truststore so that the clients can trust this CA:

keytool -keystore truststore -alias CARoot -import -file ca-cert

d. Export the certificate from the keystore:

keytool -keystore keystore -alias localhost -certreq -file cert-file

e. Sign it with the CA:

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 3650 -CAcreateserial

f. Import both the certificate of the CA and the signed certificate into the keystore:

keytool -keystore keystore -alias CARoot -import -file ca-cert
keytool -keystore keystore -alias localhost -import -file cert-signed

g. Copy the keystore to the /kafka/config/ directory on both Kafka servers.

h. Copy the truststore to the ETD Streaming host.

For more information, see the respective section under Application-Specific Installation Steps.
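
As an optional sanity check (assuming the keystore has been copied to /kafka/config/ as described above), you can list the keystore entries. You should see a trustedCertEntry for the CARoot alias and a PrivateKeyEntry for the localhost alias:

keytool -list -keystore /kafka/config/keystore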

12. Configure Kafka.

a. Edit /kafka/config/server.properties. To do so, refer to the example below. You should change the
values of some existing parameters to the values from this example. If values are passwords, you should change
them to the ones set earlier when keys and certificates were created and authorization was configured.

 Note
Adjust the log.retention.hours value according to your sizing.

broker.id=1
listeners=SASL_SSL://0.0.0.0:9092
advertised.listeners=SASL_SSL://kafka1.example.com:9092
log.dirs=/kafka/kafka-logs
num.partitions=4
default.replication.factor=2
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1

log.retention.hours=24

zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=3

ssl.keystore.location=/kafka/config/keystore
ssl.keystore.password=keystore_password
ssl.key.password=key_password
ssl.truststore.location=/kafka/config/truststore
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.client.auth=none
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
ssl.endpoint.identification.algorithm=HTTPS

 Note
For the second Kafka server you should change the following parameters:

broker.id=2
advertised.listeners=SASL_SSL://kafka2.example.com:9092

13. Make the system more secure and use a dedicated user to run Kafka. Configure systemd services to automate Kafka
startup and make managing Kafka services simpler.

a. Add the system user “kafka” and group “kafka”, and set permissions for the user and group to the /kafka
directory:

groupadd -r kafka
useradd -r kafka -g kafka
chown -R kafka:kafka /kafka

b. Create the file /etc/systemd/system/kafka.service with the following content:

[Unit]
Description=Apache Kafka
After=syslog.target network.target

[Service]
Type=simple
SyslogIdentifier = kafka

User=kafka
Group=kafka
LimitNOFILE=100000
Environment="KAFKA_HEAP_OPTS=-Xmx4G -Xms1G"
Environment="EXTRA_ARGS=-Djava.security.auth.login.config=/kafka/config/kafka_server_jaa
Restart=always
ExecStart=/kafka/bin/kafka-server-start.sh /kafka/config/server.properties
ExecStop=/kafka/bin/kafka-server-stop.sh

[Install]
WantedBy=multi-user.target

c. Reload systemd services:

systemctl daemon-reload

d. Enable Kafka service autostart with system boot up and start this service immediately:

systemctl start kafka && systemctl enable kafka
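
At this point you can verify the secured setup from one of the Kafka hosts. The following is only a sketch: it writes a temporary client properties file using the kafkadata credentials from step 10 and the truststore from step 11 (paths and passwords are examples and must match your setup), then lists the topics via the SASL_SSL listener.

# Temporary client configuration for the Kafka console tools
cat > /tmp/etd-client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="kafkadata" \
  password="kafkadata_password";
ssl.truststore.location=/kafka/config/truststore
ssl.truststore.password=truststore_password
EOF

# List topics through the secured listener
/kafka/bin/kafka-topics.sh --bootstrap-server kafka1.example.com:9092 --command-config /tmp/etd-client.properties --list

Delete /tmp/etd-client.properties afterwards, since it contains credentials in plain text.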

Installing SAP Enterprise Threat Detection on SAP HANA


Installing SAP Enterprise Threat Detection on SAP HANA primarily involves importing delivery units.

Prerequisites
You have installed the SAP HANA platform on a host server according to the system requirements.

You have logged on with a user on the SAP HANA platform with the role sap.hana.xs.lm.roles::Administrator.

Context


Procedure
1. Grant the following additional privilege to the _SYS_REPO user in SAP HANA using an SQL statement:

GRANT EXECUTE ON SYS.STORE_LICENSE_MEASUREMENT_DEV TO "_SYS_REPO" WITH GRANT OPTION

This statement must be executed with the SYSTEM user.

2. Download the product SAP Enterprise Threat Detection from the SAP Software Download Center at
https://support.sap.com/swdc .

SAP Enterprise Threat Detection consists of the core delivery unit “ENTERPRISE THREAT DETECT”.

3. Use SAP HANA application lifecycle management to deploy SAP Enterprise Threat Detection.

 Note
Make sure to install SAP Enterprise Threat Detection on the tenant database and not on the system database.

For more information, see Installing and Updating SAP HANA Products in the documentation for the SAP HANA platform
on SAP Help Portal.

4. Create users and assign authorizations.

For more information, see Creating Users and Assigning Authorizations.

5. Activate the SQL connection for the technical user.

For more information, see Activating the SQL Connection for the Technical User.

6. Finish the installation.

For more information, see Finishing the Installation.

7. Schedule the required background jobs.

 Note
Make sure that the mandatory background jobs are scheduled and run successfully before you load any log data into
SAP Enterprise Threat Detection.

For more information, see Starting Background Jobs for SAP Enterprise Threat Detection.

Related Information
Upgrading SAP Enterprise Threat Detection

Creating Users and Assigning Authorizations


After installing the software you are ready to assign authorizations to users on SAP HANA.


Prerequisites
You have logged on with a user on the SAP HANA platform with sufficient authorizations to perform user and role management.
For more information, see Recommendations for Database Users, Roles and Privileges in the SAP HANA Platform
documentation.

Procedure
1. Create the following users with the relevant authorizations:

User: A <communication> user for the HANA Writer. This user writes data from the HANA Writer into the SAP HANA
database. The user should be used in etd-kafka_2_hana_config.xml and etd-coldstorage_config.xml. We support the use
of X.509 authorization for the HANA Writer and Cold Storage Writer. For more information, see Configuring HANA Writer
and Cold Storage Writer for Secure JDBC Connections in the Security Guide for SAP Enterprise Threat Detection.
Authorizations: We provide an example role sap.secmon.db::EtdDataCommitter to base this role on.
Recommended user name: ETD_DATA_COMMITTER

 Note
It is important that this user has exactly this name. The workload management of SAP Enterprise Threat Detection
using SAP HANA workload classes relies on this user name and will not work properly if it is incorrect.

User: <ETD batch> user to run background jobs.
Authorizations: We provide the example role sap.secmon.db::EtdBatch for the <ETD batch> user.
Recommended user name: ETD_BATCH

User: A <normalizer_communication> user for REST API services for the normalizer application that reads data from
HANA DB tables. The user should be used in the configuration of the etd_normalizer_config.xml file. We support the use
of X.509 authorization for the normalizer. For more information, see Configuring Normalizer and Log Learner for Secure
HTTPS Connections in the Security Guide for SAP Enterprise Threat Detection.
Authorizations: We provide an example role sap.secmon.db::ETDStreamingNormalizer to base this role on.
Recommended user name: ETD_STREAMING_NORMALIZER

User: A <log_learner_communication> user for REST API services for the log learner application that reads data from
HANA tables like the <normalizer_communication> user above, but also writes data to SAP HANA DB tables. The user
should be used in the configuration of the etd_loglearner_config.xml file. We support the use of X.509 authorization for
the log learner. For more information, see Configuring Normalizer and Log Learner for Secure HTTPS Connections in the
Security Guide for SAP Enterprise Threat Detection.
Authorizations: We provide an example role sap.secmon.db::ETDStreamingLogLearner to base this role on.
Recommended user name: ETD_STREAMING_LOGLEARNER

2. Assign business users of SAP Enterprise Threat Detection privileges appropriate to their business role.

SAP Enterprise Threat Detection identifies the roles listed in the table below. The table also lists the example roles
delivered with the software.

Business Roles of SAP Enterprise Threat Detection

Role: Monitoring Agent
Example role: sap.secmon.db::EtdUser
Tasks: The monitoring agents view events, alerts, and incidents and manage their status. The monitoring agents monitor
the system landscape in a security monitoring center at all times. When an alert is shown, the monitoring agent must
immediately react according to the process defined in the organization. If they consider an alert suspicious enough to
require further analysis, they might have to hand it over to a security expert. If they find a lot of false positives, they can
also send this information to the security expert.

Role: Security Expert
Example role: sap.secmon.db::EtdSecExpert
Tasks: The security expert is an administrator who configures attack detection patterns and maintains any other
configurations of SAP Enterprise Threat Detection. They can also perform all operator tasks. A security expert handles
possible incidents and performs forensic research in order to find the root cause. They check the attack detection
patterns and charts in the forensic lab of SAP Enterprise Threat Detection and possibly modify them or create new ones
for better alert detection in the future. If they learn about many false positive alerts from the monitoring agent, they will
also modify the patterns accordingly.

Role: Special role for resolving user identity, for example from the HR department
Example role: sap.secmon.db::EtdResolveUser
Tasks: By default, all user information is replaced by a pseudonym in the user interface. This role enables the identity of
the person behind the pseudonym to be revealed. Who can resolve pseudonyms is governed by local regulations and by
the data privacy policy of your organization.

For more information about the authorizations delivered with SAP Enterprise Threat Detection, see Authorizations of
SAP Enterprise Threat Detection in SAP HANA in the Security Guide for SAP Enterprise Threat Detection.

Activating the SQL Connection for the Technical User


Configure this connection for the technical user to access the SAP HANA database.

Prerequisites
You have an administrator user for SAP HANA with at least the following roles:

sap.hana.xs.admin.roles::JobAdministrator

sap.hana.xs.admin.roles::SQLCCAdministrator

Procedure
1. Start the SAP HANA XS Administration Tool.

Enter the following URL in a browser:

<protocol>://<host>:<port>/sap/hana/xs/admin and search for etd_connection.

You can start this application directly at
<protocol>://<host>:<port>/sap/hana/xs/admin/#/package/sap.secmon/sqlcc/etd_connection

2. Select the etd_connection.xssqlcc and choose Activate.

The technical user is created with the role sap.secmon.db::ETDTechnicalUser.


Finishing the Installation


Finish the installation by calling a URL that will initialize your version of SAP Enterprise Threat Detection.

Prerequisites
You have a user with role sap.secmon.db::EtdAdmin.

Procedure
Open the following URL in order to finish the installation:
https://<host>:<port>/sap/secmon/services/install/finish.xsjs
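
Alternatively, the call can be scripted, for example with curl. This is just an illustration; host, port, and credentials are placeholders, and the user needs the sap.secmon.db::EtdAdmin role:

curl -u <user>:<password> "https://<host>:<port>/sap/secmon/services/install/finish.xsjs"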

Related Information
Creating Users and Assigning Authorizations

Starting Background Jobs for SAP Enterprise Threat Detection


SAP Enterprise Threat Detection has a number of background jobs that must run on SAP HANA, for example the job that
executes the attack detection patterns. Other jobs are optional. For performance reasons, we recommend that you only
activate those optional jobs that you actually need.

Prerequisites
You have logged on with a user that has the following roles:

sap.hana.xs.admin.roles::JobAdministrator

sap.secmon.db::EtdAdmin

sap.hana.xs.admin.roles::JobSchedulerAdministrator

You have created the ETD batch users in SAP HANA to run the jobs.

For more information, see Creating Users and Assigning Authorizations.

You have enabled the job scheduler for SAP HANA XS. For example, you can do so in SAP HANA studio's Administration
perspective by setting the configuration parameter scheduler -> enabled in the xsengine.ini file. Alternatively, you can open the
XS Job dashboard by using the link http(s)://<HANA-Host>:<Port>/sap/hana/xs/admin/jobs and set the
Scheduler enabled switch to YES.

For more information, see The XS Job Dashboard in the documentation for SAP HANA platform on SAP Help Portal.
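
Enabling the scheduler can also be scripted. The following is a sketch using hdbsql; host, port, and credentials are placeholders, and the user needs system-level configuration privileges:

hdbsql -n <hana-host>:<port> -u SYSTEM -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM') SET ('scheduler', 'enabled') = 'true' WITH RECONFIGURE"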

Procedure
1. Start the XS Job Dashboard in the SAP HANA XS Administration Tool.

Enter the following URL in a browser:

http(s)://<hana-host>:<port>/sap/hana/xs/admin/jobs

2. Activate all mandatory and optional jobs relevant for your case.

For more information, see Background Jobs of SAP Enterprise Threat Detection.

a. For each job, navigate to the job configuration tab. Enter the data as required.

Required Job Parameters

User: Enter the user ID of the <ETD_BATCH> user as described under Creating Users and Assigning Authorizations.

Locale: Enter English (en).

Active: Select the checkbox.

Password: Enter the password of the created ETD_BATCH user.

 Note
Do not enter a start time or end time.

b. Save your entries.

Repeat these steps until you have configured all the jobs.

Related Information
Background Jobs of SAP Enterprise Threat Detection

Background Jobs of SAP Enterprise Threat Detection


SAP Enterprise Threat Detection runs the following jobs in the background. The frequency is either hard coded or the job is
started on demand. You can find more information about each job in the table below.

Background Jobs of SAP Enterprise Threat Detection

Job Name | Frequency

sap.secmon.framework.anomalydetection.jobs::statisticsJob | Once per hour

sap.secmon.framework.pattern.jobs::patternExecutionResultJob | Once per day

sap.secmon.framework.pattern.jobs::patternjob | Once per minute

sap.secmon.services.healthcheck::healthcheck | Once per minute

sap.secmon.services.ui.m.alerts.job::investigation | On demand; no schedules need to be provided, created automatically via UI request

sap.secmon.services.partitioning::clearData | Every 2 hours

sap.secmon.services.partitioning::partitioning | Every 6 hours (one hour before the ...)

sap.secmon.framework.user.pseudonymization.jobs::regeneratePseudonyms | Every 10 minutes

sap.secmon.trigger.jobs::dispatcher | Every 5 seconds

sap.secmon.trigger.jobs::thread | On demand

sap.secmon.ui.browse.services2.jobs::rawdata | Once per day

sap.secmon.framework.pattern.publishalerts.jobs::alertPublishingJob | Once per minute

sap.secmon.services.cleanjoblog::cleanjoblog | Once per day

sap.secmon.services.healthcheck::cleanhealthchecklog | Once per day

sap.secmon.services.performance.jobs::perf | Every 10 seconds

sap.secmon.services.performance.jobs::perf_stat | Every 5 minutes

sap.secmon.framework.user::UserContext | Once per minute

sap.secmon.framework.user.migration::userContextMigration | No schedules need to be provided

sap.secmon.services.util::masterDataInterface | Once per minute

sap.secmon.ssm::PatternExecutionSSM | Once per minute

sap.secmon.services.replication::exportImport | Once per minute

sap.secmon.trigger.jobs::thread | Will be scheduled by sap.secmon.trigger.jobs::dispatcher

sap.secmon.services.util::systemInterface | Every 5 minutes

sap.secmon.framework.user.pseudonymization.history::cleanPseudonymHistory | Once per day

sap.secmon.framework.usagemeasurement::usageMeasurement | Once per day

sap.secmon.ssm.cache::SMCache | Every 15 minutes

sap.secmon.ssm.cache::NoteCache | Every 15 minutes

Installing SAP Enterprise Threat Detection Streaming


Log preprocessing is an essential part of SAP Enterprise Threat Detection and requires a streaming solution. With SAP
Enterprise Threat Detection 2.0 SP03, the usage of SAP HANA Streaming Analytics as streaming solution for SAP Enterprise
Threat Detection is deprecated. To install SAP Enterprise Threat Detection Streaming, you have to perform a number of
installation steps.

Context

Procedure
1. Decide which streaming applications you want to install according to your needs.

For more information, see SAP Enterprise Threat Detection Streaming: Application Overview.

2. Check out the content from the delivery unit.

For more information, see Checking Out Content from Delivery Unit.

3. Decide if you want to execute the installation script (recommended) or do a manual installation.

The installation script allows you to select the applications that you want to install, and to configure security-related
parameters and all placeholders that are needed for these applications. It's the recommended way to install in a semi-
automated way which still allows you to adapt the installation to your specific environment.

For more information, see Using the Installation Script.

The manual installation allows you to change all aspects of the installed applications and is recommended if you have
special requirements, want to run on an OS that is not supported by the installer or want to integrate the installation
into infrastructure automation.

For more information, see Installing SAP Enterprise Threat Detection Streaming Manually.

SAP Enterprise Threat Detection Streaming: Application Overview

SAP Enterprise Threat Detection Streaming is a streaming solution for SAP Enterprise Threat Detection that receives logs
from log providers, pre-processes the logs and stores them in the SAP HANA database and other storage locations.

SAP Enterprise Threat Detection Streaming consists of four mandatory and three optional Java applications. The applications
are Java Archives that can easily be integrated into the operating system as background services. These services can be
monitored and restarted automatically if a process has crashed.

Applications of SAP Enterprise Threat Detection Streaming

 Note
The Kafka cluster for the log collector and for the log preprocessor is usually the same Kafka cluster, but it is also possible to
use two separate Kafka clusters to meet special requirements like network segmentation.

Communication with SAP HANA Platform


SAP HANA Platform is a critical component within the SAP Enterprise Threat Detection solution as most SAP Enterprise
Threat Detection applications require a stable connection to SAP HANA, either via JDBC or HTTP(S).

If the connection to SAP HANA is down or unstable, the applications notice this and temporarily interrupt their interaction with
SAP HANA (and other related processes if necessary). All applications will resume work automatically when the connection is
stable again. However, there are some situations in which the application cannot proceed further (for example in case of an
authentication error). In that case the application will be stopped. That's why we recommend that you regularly monitor the log
files written by each component to make sure that everything is working correctly.

Streaming Applications and their Interaction

Application: Log Collector (mandatory)
Role in the interaction: The Log Collector is the entry point for all logs and master data sent from the log providers. Its main
purpose is to buffer the received data and write it into the first Kafka cluster, the Log Collector Kafka cluster. The Log Collector
can store logs in a backlog on the file system. In case the Kafka broker isn't reachable or cannot process new logs, these logs
can be stored in a backlog, so that they can be sent later when the Kafka brokers are available again.

Application: Normalizer (mandatory)
Role in the interaction: The Normalizer reads the logs from the Log Collector Kafka cluster to normalize logs, that is, the
process of converting raw (unstructured) log data to normalized (structured) events assigned to semantic events.

Application: Transporter (mandatory)
Role in the interaction: The Transporter reads data from the Log Collector Kafka cluster and stores it in the Log Pre-Processor
Kafka cluster. Its job is to process data such as ABAP master data or pings.

Application: HANA Writer (mandatory)
Role in the interaction: The HANA Writer reads all relevant data from the Log Pre-Processor Kafka cluster and writes it into
SAP HANA database tables to make the logs and master data available for SAP Enterprise Threat Detection.

 Note
Please be aware that the technical name of the HANA Writer application is kafka_2_hana.

Application: Log Learner (optional)
Role in the interaction: The Log Learner works together with the Log Learning application. It is responsible for analyzing the
sample data uploaded in new Log Learning runs in order to create log entry types and markups. Furthermore, it is needed to
test the Log Learning runs. It connects to HANA via a REST API in order to interact with the Log Learning application. The
application is optional; it is only required when the Log Learning application is used.

Application: Cold Storage Writer (optional)
Role in the interaction: You can use the Cold Storage Writer to archive log data by writing it to the file system. The data can
then be used to restore logs if needed.
We recommend regularly monitoring the log files written by each component to guarantee that everything is working correctly.

Checking Out Content from Delivery Unit


Check out the SAP Enterprise Threat Detection Streaming installation and configuration files using the statements described in
the procedure.

Prerequisites
You are logged on as the <sid>adm user on the operating system of your SAP HANA system where the SAP Enterprise Threat
Detection delivery unit is installed.

Procedure
1. Go to the home directory.

cd ~

2. Create a userstore entry to connect to the HANA repository.

The command will interactively ask for the password of the given user.

hdbuserstore -i SET ETD <HANAHostname:SQLPort>@<TenantDBName> <USERNAME>

# e.g hdbuserstore -i SET ETD localhost:30015@ETD SYSTEM

3. Check that the user store entry is saved correctly.

hdbuserstore list

4. Check that the user store entry works by connecting via SQL cli.

This command should return the configured username as result.

hdbsql -U ETD SELECT SESSION_USER FROM DUMMY

5. Create a workspace to check out content from HANA repository (if it doesn't already exist).

regi create workspace ETD --key=ETD --force

6. Go to the created workspace folder.

cd ETD/

7. Check out the streaming applications.

regi checkout package sap.secmon.streaming --active --force --key=ETD

To ensure that all files were checked out correctly, you can execute the command below. It should show you 9 .tar.gz files and
the etd_streaming_install.sh script.

ls -l sap/secmon/streaming

8. Copy the files to the host where you install SAP Enterprise Threat Detection Streaming. We refer to this host in the
following chapters as the ETD Streaming host.
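
For example, from the workspace folder created above (the target host and directory are placeholders):

scp sap/secmon/streaming/*.tar.gz sap/secmon/streaming/etd_streaming_install.sh root@<etd-streaming-host>:<target-directory>/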


Using the Installation Script


We recommend using the installation script to install SAP Enterprise Threat Detection Streaming. The script will guide you
through the installation and ask you for all relevant information. If you have special requirements, you might prefer manual
installation to using the installation script.

Procedure
1. Add execute authorizations to the installation script:

chmod +x etd_streaming_install.sh

2. Execute the etd_streaming_install.sh script as root user.

3. The script asks for all relevant information. Please note the following hints:

For a fresh installation the installation directory that you provide must be empty or non-existent.

If the users that you provide for the different applications do not yet exist, they will be automatically created. You
can also create them manually; the script will detect if they already exist and skip the creation of the user in this
case.

The script asks you if you want to use SSL and/or SASL for your connections to the HANA database and Kafka. If
you disable them, the respective configuration sections will be commented out from the configuration, but can
later be enabled manually when you are ready for going into production.

Only the selected applications will be installed and configured.

4. After this initial selection, the system shows an overview page and you can start the actual installation.

5. The system requests the necessary placeholders (depending on your selection of applications) and transforms the
configuration templates into the final configuration.

For more information, see Placeholders.

6. The systemd units are added to the system and the installation is finalized.

7. After verifying the installation you can start the applications using

systemctl start <application>

8. Add all users which should be able to administer the streaming applications to the etdadmins group.

 Sample Code

usermod -a -G etdadmins <name of the user>

If you want to encrypt passwords, then you need to add all authorized users to the etdsecadmins group. For more
information, see Encrypting Sensitive Configuration Data in the Streaming Applications in the Security Guide for SAP
Enterprise Threat Detection.

9. Continue with the application-specific installation steps.

In case you need to reconfigure the system (add applications, remove applications, and so on), you should back up your old
installation directory, wipe the directory, and create a fresh installation. Any changes that you have made manually after the
installation are lost and need to be reimplemented.


Using the Installation Script for Updating an Existing Installation


Before doing an update, create a backup of your existing installation.

Procedure
1. As root user execute the etd_streaming_install.sh script.

2. Enter the path to the existing installation when asked for the installation directory.

The script will ask for a confirmation if this is an update and will automatically execute the following steps:

a. Back up the old installation.

b. Update the existing installation to the new version.

c. Apply the old con guration to the installation.

d. Ask for additional placeholders, in case they have been added in the new version.

e. Apply the new placeholders to the installation.

f. Create new configuration files based on the placeholders. Their filename will get a “.new” suffix.

Please note that these files will not contain any manual changes that you made to your configuration after the
initial installation. However, your existing configuration files from the old version will not be changed. In most
versions, there won't be incompatible adaptions to the configuration file layout, so that you can simply continue
using your existing configuration files without any changes.

g. Reload systemd units.

3. After you have checked the installation you need to restart each application using

systemctl restart <application>
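
Before restarting, you may want to compare each generated ".new" file with the active configuration to decide which changes to merge. A sketch for the normalizer, assuming the /opt/etd example layout (the exact file name and path depend on your installation):

diff /opt/etd/normalizer/config/etd_normalizer_config.xml /opt/etd/normalizer/config/etd_normalizer_config.xml.new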

Installing SAP Enterprise Threat Detection Streaming Manually


If you have special requirements, you might prefer manual installation to using the installation script. The manual installation
allows you to change all aspects of the installed applications and is recommended if you, for example, want to run on an OS that
is not supported by the installer or want to integrate the installation into infrastructure automation. Before you can start with
the application-specific installation, you have to extract and apply some general configuration for all streaming applications
from the delivery unit.

Prerequisites
You have checked out the streaming folder from the delivery unit. For more information, see Checking Out Content from
Delivery Unit.

You have copied the Streaming tar.gz files you have checked out from the delivery unit to the ETD Streaming host.

You are logged on as root user on the ETD Streaming host.

You need to have Java installed on all systems where you plan to run at least one of the SAP Enterprise Threat Detection
Streaming applications. It's required to use OpenJDK Version 11.

Procedure
1. Create a directory for each SAP Enterprise Threat Detection Streaming application that you want to install:

 Note
Please note that /opt/etd/ is just an example used in the documentation. You can also install the applications in a
different location.

mkdir -p /opt/etd/logcollector/libs/private
mkdir -p /opt/etd/normalizer/libs/private

mkdir -p /opt/etd/transporter/libs/private

mkdir -p /opt/etd/kafka_2_hana/libs/private

mkdir -p /opt/etd/kafka_2_warm/libs/private

mkdir -p /opt/etd/coldstorage/libs/private

mkdir -p /opt/etd/loglearner/libs/private

2. Unarchive common-<version>.tar.gz and <application>-<version>.tar.gz to the newly created folders and replace <SID> with
your SID value.

#logcollector:
tar zxf common-*.tar.gz -C /opt/etd/logcollector/libs
tar zxf logcollector-*.tar.gz -C /opt/etd/logcollector
mv /opt/etd/logcollector/etd_logcollector-*.jar /opt/etd/logcollector/libs

#normalizer:
tar zxf common-*.tar.gz -C /opt/etd/normalizer/libs
tar zxf normalizer-*.tar.gz -C /opt/etd/normalizer
mv /opt/etd/normalizer/etd_normalizer-*.jar /opt/etd/normalizer/libs

#transporter:
tar zxf common-*.tar.gz -C /opt/etd/transporter/libs
tar zxf transporter-*.tar.gz -C /opt/etd/transporter
mv /opt/etd/transporter/etd_transporter-*.jar /opt/etd/transporter/libs

#kafka_2_hana:
tar zxf common-*.tar.gz -C /opt/etd/kafka_2_hana/libs
tar zxf kafka_2_hana-*.tar.gz -C /opt/etd/kafka_2_hana
mv /opt/etd/kafka_2_hana/etd_kafka_2_hana-*.jar /opt/etd/kafka_2_hana/libs

#coldstorage:
tar zxf common-*.tar.gz -C /opt/etd/coldstorage/libs
tar zxf coldstorage-*.tar.gz -C /opt/etd/coldstorage
mv /opt/etd/coldstorage/etd_coldstorage-*.jar /opt/etd/coldstorage/libs

#loglearner:
tar zxf common-*.tar.gz -C /opt/etd/loglearner/libs
tar zxf loglearner-*.tar.gz -C /opt/etd/loglearner
mv /opt/etd/loglearner/etd_loglearner-*.jar /opt/etd/loglearner/libs

3. Unarchive install-<version>.tar.gz to /opt/etd/.

tar zxf install-*.tar.gz -C /opt/etd/


4. (Optional) You can verify the integrity of the jar files.

The jar files are signed by SAP. Verify them by running the following command:

jarsigner -verify <name of jar file>

The output must be "jar verified".

If you also want to see the signing certificate, use the command:

jarsigner -verify -verbose <name of jar file>

Example to verify the log collector jar:

 Sample Code

jarsigner -verify -verbose /opt/etd/logcollector/libs/etd_logcollector-2.8.0.jar

5. Create operating system users:

groupadd -r etdadmins
groupadd -r etdsecadmins
useradd -r etdlogcollector -g etdadmins
useradd -r etdnormalizer -g etdadmins
useradd -r etdtransporter -g etdadmins
useradd -r etdkafka2hana -g etdadmins
useradd -r etdloglearner -g etdadmins
useradd -r etdcoldstorage -g etdadmins

6. Add all users which should be able to administer the streaming applications to the etdadmins group.

 Sample Code

usermod -a -G etdadmins <name of the user>

If you want to encrypt passwords, then you need to add all authorized users to the etdsecadmins group. For more
information, see Encrypting Sensitive Configuration Data in the Streaming Applications in the Security Guide for SAP
Enterprise Threat Detection.

7. Make the scripts executable:

chmod ugo+x /opt/etd/replaceplaceholders.sh

chmod ugo+x /opt/etd/replacer.sh

8. Overwrite default values in defaultplaceholders.txt with your values in placeholders.txt.

You need to at least overwrite the properties marked as mandatory. For more information about which properties are
mandatory, see Placeholders.

9. Execute the replaceplaceholders.sh script.

 Note
After you have made changes to XML configuration files, you shouldn't run replaceplaceholders.sh again. A
rerun of the script will overwrite the changes you have made to the XML files.

10. Change permissions:

chown -R etdlogcollector.etdadmins /opt/etd/logcollector


chmod 550 $(find /opt/etd/logcollector/ -type d)
chown -R etdnormalizer.etdadmins /opt/etd/normalizer
chmod 550 $(find /opt/etd/normalizer/ -type d)
chown -R etdtransporter.etdadmins /opt/etd/transporter
chmod 550 $(find /opt/etd/transporter/ -type d)
chown -R etdkafka2hana.etdadmins /opt/etd/kafka_2_hana
chmod 550 $(find /opt/etd/kafka_2_hana/ -type d)
chown -R etdloglearner.etdadmins /opt/etd/loglearner
chmod 550 $(find /opt/etd/loglearner/ -type d)
chown -R etdcoldstorage.etdadmins /opt/etd/coldstorage
chmod 550 $(find /opt/etd/coldstorage/ -type d)

Next Steps
1. Make sure that everything is complete by verifying the following:

Under /opt/etd, you have one subfolder for each application.

Values in defaultplaceholders.txt are replaced with your values in placeholders.txt.

You have unarchived files in each subfolder.

For every *.tpl file within the subfolders, you have a file with the same name without the .tpl extension (see the
sketch after this list).

2. Continue with the application-specific installation steps for the applications that you need.
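
A quick way to check the last point of the list above (a sketch, assuming the /opt/etd example location): report every *.tpl file that does not yet have a generated counterpart.

for t in $(find /opt/etd -name '*.tpl'); do
  [ -f "${t%.tpl}" ] || echo "missing: ${t%.tpl}"
done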

Application-Specific Installation Steps

Log Collector
SAP Enterprise Threat Detection log collector is an on-premise component that you install in your system landscape to collect
log data and master data from your log provider systems and forward them to SAP Enterprise Threat Detection.

SAP Enterprise Threat Detection log collector can receive data via different protocols, such as UDP, TCP, TLS, and HTTP/S. It
can also pull data from various sources, such as file, database, SAP Business Technology Platform, OData, and Splunk.

The log collector supports two working modes and is able to work in them simultaneously:

As a processor. In this case, the log collector writes logs into Kafka using various configurable topics. In that mode, logs
will reach the normalizer and will be recognized. For more information, see Kafka Ingestor Settings for the Log Collector.

As a proxy. In this case, the log collector will forward logs to another log collector situated in the on-premise landscape.
For more information, see HTTP Sender Settings for the Log Collector.

Finalizing Installation for the Log Collector

Prerequisites
Checking Out Content from Delivery Unit

If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually.

Procedure
1. Log in to the operating system as the root user.

2. Adapt the Kafka configuration.

To do so, go to /opt/etd/logcollector/config and make the necessary configuration in the following files:

lc.properties (this file contains both consumer and producer properties for the log collector)

lpp.properties (this file contains both consumer and producer properties for the log pre-processor)

a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.

b. If you don't use passwords for the truststore, comment out the ssl.truststore.password property.
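
As an illustration, the security-related entries in lc.properties and lpp.properties are standard Kafka client settings and could look like the following sketch (user name, passwords, and the truststore path are examples and must match your Kafka setup):

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="kafkadata" \
  password="kafkadata_password";
ssl.truststore.location=/opt/etd/logcollector/config/truststore
ssl.truststore.password=truststore_password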

3. If you have installed SAP Enterprise Threat Detection Streaming manually, create etd-logcollector systemd unit:

cp /opt/etd/logcollector/systemd/etd-logcollector.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-logcollector

If you have used the installation script, this has already been done by the system.

4. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:

chmod +x /opt/etd/logcollector/etd-logcollector.sh

If you have used the installation script, this has already been done by the system.

5. Start the logcollector application.

systemctl start etd-logcollector.service

6. Verify the installation:

a. Check the status of systemd unit. The correct status is Running.

systemctl status etd-logcollector.service

b. Check the logs for etd-logcollector.service. The correct response is "-- No entries --".

journalctl -u etd-logcollector.service

c. Check the application logs (default location is /opt/etd/logcollector/logs). The correct result is that you
don’t get any entries.

grep ERROR /opt/etd/logcollector/logs/etd-logcollector.log

7. Adapt the log collector configuration to open ports (for example, an HTTPS port for connecting SAP ABAP systems).

For more information, see HTTP Settings for the Log Collector.

8. Restart the log collector application.

Configuring SAP Enterprise Threat Detection Log Collector

You can use SAP Enterprise Threat Detection log collector with the default configuration provided by the installation script. If
you want to adapt the default configuration to the specific needs of your landscape, you can configure the various input and
output channels of the log collector via an XML file.

Most importantly, you need to specify all credentials that you want to use in your log provider systems for log collector
authentication. When providing logs from SAP NetWeaver AS for ABAP or SAP NetWeaver AS for Java, this needs to be
configured in HTTP Settings for the Log Collector.

The configuration must be adapted if you want to use UDP or you want to use multiple ports for the HTTP Listener or TCP
Listener.

In addition, you might for example want to adjust the configuration in the following cases:

If you want to use SSL, you need to install certificates. This is only relevant for the TLS Listener and HTTP Listener.

You want to configure subscribers such as the Kafka subscriber or OData subscriber.

If you want to adapt the size limits used to slow down clients that send more data than expected, adapt the
configuration of the rate limiter. For more information, see Rate Limiter Settings for the Log Collector.

If you want to adapt the maximum disk space volume used for the backup of logs in the file system, adapt the
configuration of the backlog queue settings. For more information, see BacklogQueue Settings for the Log Collector.

If you want to use Prometheus for monitoring, configure the monitoring settings.

You want to add additional users for the HTTP Listener.

Related Information
Monitoring Settings
Placeholders

UDP Settings for the Log Collector


Listen to incoming logs on a UDP port according to https://tools.ietf.org/html/rfc5426. You can define multiple ports to
listen to by repeating the XML section.

UDP has a lot of limitations (see the RFC) and is therefore only recommended if no other transport method is available.

The UDP listener is integrated into the rate limiter, so that the incoming data is counted and excessive data is not processed.
However, there is no method to notify the sender to slow down.
Setting | Type | Value | Mandatory | Default
Enabled | Boolean | true or false | x |
Port | Integer | Port number | | 5514
ThreadCount | Integer | Number of parallel threads listening to the port | | 3

Reference Configuration for the UDP Settings

The following example shows a possible configuration for the UDP settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the log collector.


<LogCollectorConfiguration>

<UDPPorts>
<UDPPort>
<Enabled>true</Enabled>
<Port>5514</Port>
<ThreadCount>10</ThreadCount>
</UDPPort>
</UDPPorts>

</LogCollectorConfiguration>
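
If you want to verify that a UDP port is receiving data, you can send a test message with the util-linux logger command; a minimal sketch, assuming the log collector runs on the hypothetical host logcollector.example.com with the default port:

logger --udp --server logcollector.example.com --port 5514 "ETD UDP listener test message"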

TCP Settings for the Log Collector


Listen to incoming logs on a TCP port according to https://tools.ietf.org/html/rfc6587 . You can define multiple ports to listen
to by repeating the XML section.

The TCP setting doesn't allow you to configure encryption; please see the TLS settings in this case. It allows sending logs using
non-transparent framing with an ASCII LF as separator (TcpFraming=LineBreak). We recommend using octet counting
(TcpFraming=OctetCounted), if possible. A connection is kept open and consumes a thread as long as data is sent. It is
automatically closed after ConnectionTimeoutInSeconds of idle time. Therefore you need to consider the ThreadCount
accordingly.

Settings | Type | Value | Mandatory | Default
Enabled | Boolean | true or false | x |
Port | Integer | Port number | | 10514
ThreadCount | Integer | Maximum number of parallel connections for the port, before new clients are rejected | | 100
ThreadCountPerClient | Integer | Maximum number of parallel connections for the port from the same client (identified by its IP address), before new connections are rejected. If set to 0, there is no client-specific limit applied. | | ThreadCount / 10 (min 1)
TcpFraming | String | LineBreak or OctetCounted. LineBreak assumes each LogEvent in a single line (separated by \n). | | LineBreak
ConnectionTimeoutInSeconds | Integer | Maximum amount of time keeping the connection open without receiving any log. To deactivate the timeout, set the value to 0 and the log collector will never close open connections. | | 90

Reference Configuration for the TCP Settings

The following example shows a possible configuration for the TCP settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<TCPPorts>
<TCPPort>
<Enabled>false</Enabled>
<Port>10514</Port>
<ThreadCount>100</ThreadCount>
<ThreadCountPerClient>8</ThreadCountPerClient>
<TcpFraming>OctetCounted</TcpFraming>
<ConnectionTimeoutInSeconds>90</ConnectionTimeoutInSeconds>
</TCPPort>
</TCPPorts>

</LogCollectorConfiguration>
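
Once a TCP port is enabled, you can exercise both framing methods with netcat; a minimal sketch, assuming the hypothetical host logcollector.example.com and the default port:

# LineBreak framing: one log event per line, terminated by \n
printf '<14>Jun 26 13:58:35 host1 app: line-break framed test\n' | nc logcollector.example.com 10514

# OctetCounted framing (RFC 6587): each frame is prefixed with its byte count and a space
MSG='<14>Jun 26 13:58:35 host1 app: octet-counted test'
printf '%d %s' "${#MSG}" "$MSG" | nc logcollector.example.com 10514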

TLS Settings for the Log Collector


Listen to incoming logs on a TCP port according to https://tools.ietf.org/html/rfc5425 . You can define multiple ports to listen
to by repeating the XML section.

All the information from the TCP settings applies here as well. In addition, this listener encrypts the data in transit and is
therefore the recommended way to transfer data.

Settings | Type | Value | Mandatory | Default
Enabled | Boolean | true or false | x |
Port | Integer | Port number | | 10443
ThreadCount | Integer | Maximum number of parallel connections for the port, before new clients are rejected | | 100
ThreadCountPerClient | Integer | Maximum number of parallel connections for the port from the same client (identified by its IP address), before new connections are rejected. If set to 0, there is no client-specific limit applied. | | ThreadCount / 10 (min 1)
TcpFraming | String | LineBreak or OctetCounted. LineBreak assumes each LogEvent in a single line. | | LineBreak
Keystore | String | Path to the Java keystore containing the private key. The keystore must be readable by the application user. | x |
KeystorePass | String | Password for the Java keystore | x |
KeystoreAlias | String | Alias of the private key entry in the Java keystore | x |
Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user. | (x) only with client authorization |
TruststorePass | String | Password of the truststore | |
ClientAuth | Boolean | Request SSL clients to send an X.509 client certificate for validation | | false
AllowedClientCertificates.Certificate.DN | | Distinguished names for allowed client certificates. The DN tag can be repeated in order to specify multiple distinguished names. Commas in the DN must be escaped using the backslash character. | |
ConnectionTimeoutInSeconds | Integer | Maximum amount of time keeping the connection open without receiving any log. To deactivate the timeout, set the value to 0 and the log collector will never close open connections. | | 90

Reference Configuration for the TLS Settings

The following example shows a possible configuration for the TLS settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<TLSPorts>
<TLSPort>
<Enabled>false</Enabled>
<Port>10443</Port>
<ThreadCountPerClient>8</ThreadCountPerClient>
<ThreadCount>100</ThreadCount>
<TcpFraming>LineBreak</TcpFraming>
<Keystore>keystore.p12</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<ClientAuth>true</ClientAuth>
<Truststore>truststore.p12</Truststore>
<AllowedClientCertificates>
<Certificate>
<DN>CN=client1.test.de\,OU=ETD\,O=SAP\,C=DE</DN>
</Certificate>
<Certificate>
<DN>CN=client2.test.de\,OU=ETD\,O=SAP\,C=DE</DN>
</Certificate>
</AllowedClientCertificates>
<ConnectionTimeoutInSeconds>90</ConnectionTimeoutInSeconds>
</TLSPort>
</TLSPorts>

</LogCollectorConfiguration>
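
The Keystore and Truststore referenced above are standard Java keystores. A minimal sketch for creating them with the JDK keytool, matching the placeholder file names, alias, and password used in the example (all values are placeholders):

# create a PKCS12 keystore with a key pair for the TLS listener
keytool -genkeypair -alias alias -keyalg RSA -keysize 2048 -validity 365 \
    -dname "CN=logcollector.example.com,OU=ETD,O=SAP,C=DE" \
    -keystore keystore.p12 -storetype PKCS12 -storepass changeit

# import the certificate of an allowed client into the truststore
keytool -importcert -alias client1 -file client1.cer \
    -keystore truststore.p12 -storetype PKCS12 -storepass changeit -noprompt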

HTTP Settings for the Log Collector


Listen on an HTTP port for incoming logs. You can define multiple ports to listen to by repeating the XML section.

The HTTP endpoint is used to receive data from the log providers (such as SAP NetWeaver AS for ABAP or SAP NetWeaver AS
for Java) or from another log collector. It can also be used as a general endpoint to send arbitrary data. Connections should be
encrypted using SSL, but unencrypted connections are also possible if necessary.

If you use multiple log collectors within your landscape together with a load balancer that randomly distributes incoming HTTP
requests to them, you need to enable shared JSON Web Tokens between them. For more information, see Enabling JSON Web
Token Sharing Between Separate Log Collector Instances in the Security Guide for SAP Enterprise Threat Detection.

The HTTP service provides the following endpoints:

Endpoint | Requires Authentication | Description
/ | Yes | Returns a JSON file with the current status of the log collector when called without any subpath.
/1 | Yes | This URI contains version 1 of the endpoint. Returns a JSON file with the current status of the log collector when called without any subpath.
/1/version | No | Returns the version of the log collector as a JSON reply. Can be used to check connectivity.
/1/authorization | Yes | Returns a token that needs to be used to access /1/workspaces.
/1/workspaces | Yes (using the token that has been acquired from /1/authorization) | Endpoint that accepts actual log data and master data as payload. The actual workspace needs to be specified as a sub-path in the form projects/<projectname>/streams/<streamname>.
/2 | Yes | This URI contains version 2 of the endpoint, which is used by the new ABAP Log Extractor. Returns a JSON file with the current status of the log collector when called without any subpath.
/2/health | No | Returns a JSON file with the current status of the log collector.
/2/info | Yes | Returns the currently running version of the log collector and the maximum allowed request size in bytes.
/2/ping | Yes | Endpoint that accepts ping data in JSON format which is used for system health checks.
/2/JSONLogEvents | Yes | Endpoint that accepts log events in JSON format.
/2/MasterData | Yes | Endpoint that accepts master data in JSON format.

Settings | Type | Value
Enabled | Boolean | true or false
Port | Integer | Port number
ThreadCount | Integer | Number of parallel threads listening to the port
MaximumRequestSizeInMegabyte | Integer | Maximum request size for an individual HTTP POST
RetryAfterInSeconds | Integer | Waiting time in seconds to be returned in the response
RequestHandlerTimeoutInSeconds | Integer | Waiting time in seconds until the HTTP request handler stops consumption
TokenValidity | Integer | Authorization token issued is valid for x seconds
Authenticator | String | Authentication method. Currently basic authentication ("basic") and client certificate authentication ("X.509") are supported. Client certificate authentication ("X.509") is only supported when using SSL and needs additional configuration.
UseSSL | Boolean | Use HTTPS or plain HTTP. If UseSSL is true, the Keystore settings are required.
Keystore | String | Path to the Java keystore containing the private key. The keystore must be readable by the application user.
KeystorePass | String | Password for the Java keystore
KeystoreAlias | String | Alias of the private key entry in the Java keystore
Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user.
TruststorePass | String | Password of the truststore
Credentials.Credential | | The allowed credentials for basic authentication. At least one credential is required; multiple combinations of user name and password can be specified.
Credentials.Credential.Username | String | User name of the sender
Credentials.Credential.PasswordHash | String | SHA-256 hashed password of the sender. To generate the SHA-256 hash for the password, you can use the delivered tool: java -jar /opt/etd/logcollector/libs/e…
AllowedClientCertificates.Certificate | | The allowed client certificates for X.509 authentication. Multiple certificates can be defined by repeating the Certificate XML section.
AllowedClientCertificates.Certificate.DN | String | Distinguished name of the certificate. Commas in the DN must be escaped using the backslash character.
BruteForceSlowDown.Enabled | Boolean | Activates the function of blocking an IP if there is an unusual number of unsuccessful sign-ins.
BruteForceSlowDown.MaxFailedAuthenticationsPerClient | Integer | Sets the amount of unsuccessful sign-ins for each IP before blocking starts.
BruteForceSlowDown.AdditionalBlockingTimeForClients | Integer | For each unsuccessful sign-in, the respective IP is blocked for this additional amount of time if the number of unsuccessful sign-ins is already above the threshold.
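
Independently of the delivered hashing tool, a SHA-256 hex digest can also be computed with standard Linux tools; a minimal sketch, assuming the log collector expects the plain lowercase hex digest of the password:

# print the SHA-256 hex digest of the password "myPassword" (no trailing newline in the input)
printf '%s' 'myPassword' | sha256sum | cut -d' ' -f1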

Reference Configuration for the HTTP Settings

The following example shows a possible configuration for the HTTP settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the log collector.

Example Using Basic Authentication:

<LogCollectorConfiguration>

<HTTPPorts>
<HTTPPort>
<Enabled>true</Enabled>
<Port>9093</Port>
<ThreadCount>25</ThreadCount>
<TokenValidity>250</TokenValidity>
<MaximumRequestSizeInMegabyte>10</MaximumRequestSizeInMegabyte>
<RetryAfterInSeconds>10</RetryAfterInSeconds>
<RequestHandlerTimeoutInSeconds>30</RequestHandlerTimeoutInSeconds>
<UseSSL>true</UseSSL>
<Keystore>keystore.jks</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Credentials>
<Credential>
<Username>user</Username>
<PasswordHash>7d0e… </PasswordHash>
</Credential>
<Credential>
<Username>ADMIN</Username>
<PasswordHash>72fe… </PasswordHash>
</Credential>
</Credentials>
</HTTPPort>
</HTTPPorts>

</LogCollectorConfiguration>

Example Using X.509 Authentication:

<LogCollectorConfiguration>

<HTTPPorts>
<HTTPPort>
<Enabled>true</Enabled>
<Port>9093</Port>
<Authenticator>X.509</Authenticator>
<ThreadCount>25</ThreadCount>
<TokenValidity>250</TokenValidity>
<MaximumRequestSizeInMegabyte>10</MaximumRequestSizeInMegabyte>
<RetryAfterInSeconds>10</RetryAfterInSeconds>
<RequestHandlerTimeoutInSeconds>30</RequestHandlerTimeoutInSeconds>
<UseSSL>true</UseSSL>
<Keystore>keystore.jks</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<AllowedClientCertificates>
<Certificate>
<DN>CN=client1.test.de\,OU=SEC\,O=TEST\,C=DE</DN>
</Certificate>
</AllowedClientCertificates>
</HTTPPort>
</HTTPPorts>

</LogCollectorConfiguration>
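
To verify that a configured HTTP port is reachable, you can call the unauthenticated endpoints directly; a minimal sketch, assuming the hypothetical host logcollector.example.com, the port from the examples above, and a CA file ca.pem for the server certificate:

# version endpoint (no authentication required), can be used to check connectivity
curl --cacert ca.pem https://logcollector.example.com:9093/1/version

# current status of the log collector (no authentication required)
curl --cacert ca.pem https://logcollector.example.com:9093/2/health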

Related Information
Enabling JSON Web Token Sharing Between Separate Log Collector Instances

Kafka Subscriber Settings for the Log Collector


A Kafka consumer which polls a list of topics for log messages. You can define multiple Kafka subscribers by repeating the
XML section.

The Kafka subscriber expects one log entry per Kafka message. All Kafka-related options have to be configured in a
consumer.properties file, especially the bootstrap servers, topic names, and consumer group.

Detailed information about the consumer.properties file can be found at
https://kafka.apache.org/documentation/#consumerconfigs .

Settings | Type | Value | Mandatory | Default
Enabled | Boolean | true or false | | false
ConfigFile | String | Absolute path to Kafka consumer properties file where you need to specify at least the bootstrap.servers and the topic.pattern or topics properties. topic.pattern is a regex and topics is a comma-separated list of topics to consume messages from. The file must be readable by the application user. | x |
LogCollectorName | String | Name of the log collector as it will appear in the semantic attribute TechnicalLogCollectorName. This indicates that the data was read from this Kafka and can be useful if you want to distinguish the different log sources. | x |

Reference Configuration for the Kafka Subscriber Settings

The following example shows a possible configuration for the Kafka Subscriber settings with the associated values. You can
adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<KafkaSubscribers>
<Kafka>
<Enabled>true</Enabled>
<ConfigFile>./kafkaSubscriber/config.properties</ConfigFile>
<LogCollectorName>ETD_logCollector</LogCollectorName>
</Kafka>
</KafkaSubscribers>

</LogCollectorConfiguration>
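
A minimal sketch of the consumer properties file referenced in ConfigFile; the host and topic names are examples, and group.id is a standard Kafka consumer property:

# ./kafkaSubscriber/config.properties
bootstrap.servers=kafka1.example.com:9092,kafka2.example.com:9092
group.id=etd-logcollector
# either a fixed list of topics ...
topics=syslog,applog
# ... or a regular expression (configure only one of the two)
#topic.pattern=log-.*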

Database Subscriber Settings for the Log Collector


A connector that connects to a database management system via JDBC in order to regularly poll log messages from a database
table. You can define multiple database subscribers by repeating the internal XML section 'DatabaseSubscriber' with all its
properties.

As of SAP Enterprise Threat Detection 2.0 SP04, the Java class path no longer includes the libs/* folder but only the
delivered HANA JDBC driver. If you want to connect other database management systems, you therefore don't have to put the
relevant JAR files into the libs folder, but place them somewhere else and specify this location as an absolute path in the
database subscriber configuration using the new setting JDBCDriverJARPath.

The database subscriber expects a table with at least two columns. One column contains the timestamp of the log message,
the other column contains the actual log message. The table may contain additional fields that are ignored. The timestamp field
is used to detect the log lines that have been added or changed since the last execution. Therefore the following query is
executed:

 Sample Code

{SELECTStatement} WHERE {TimestampColumn} > [lastTimeStamp] ORDER BY {TimestampColumn} asc

Therefore the SELECTStatement must not include a WHERE clause. The lastTimeStamp is automatically stored and contains the
latest timestamp from the previous query with nanosecond precision. When the query is executed for the first time, the
system selects the data that has been written in the last five minutes.

The Id is used to store the timestamp of the last read log record. Therefore, you must not reuse an Id for another database
connection.
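
For illustration, with a SELECTStatement of SELECT message, timestamp FROM logtable (hypothetical names) and a stored lastTimeStamp, the effective query would be:

SELECT message, timestamp FROM logtable WHERE timestamp > '2023-06-26 13:58:35.000000000' ORDER BY timestamp asc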

Settings | Type | Value | Mandatory | Default
DatabaseSubscriber.Id | Integer | Unique number to distinguish between the different JDBC instances. | x |
WorkingDirectory | String | Path to the directory that is used by the database subscriber to store the LastTimestamp.txt file. This file contains the timestamp up to which the data has been read from the database. The directory must be readable, writeable and executable by the application user. | x |
DatabaseSubscriber.Enabled | Boolean | true or false | x |
DatabaseSubscriber.JDBCConnectionString | String | JDBC connection string needed to connect to the database. Example: jdbc:sap://myhanahost:30015 | x |
DatabaseSubscriber.JDBCDriverClassName | String | Full name of the Java JDBC driver class. For example: com.sap.db.jdbc.Driver | x |
DatabaseSubscriber.JDBCDriverJARPath | String | Absolute or relative path to external jar file. If the property is not specified or empty, the JVM tries to load the driver from jar files available in the class path. The file must be readable by the application user. Example: ./private/mcsql.jar | |
DatabaseSubscriber.DatabasePropertiesFile | String | Path to file where username, password and other properties of the JDBC connection are stored. If username and password are provided in the configuration and in the file, username and password are taken from the configuration. The file must be readable by the application user. For more information about available properties, see the JDBC driver documentation of your database management system. Example: For HANA database, see JDBC Connection Properties in the documentation for SAP HANA Client Interface Programming Reference. | |
DatabaseSubscriber.Username | String | Username is optional. If you provide it, make sure to also provide password. | |
DatabaseSubscriber.Password | String | Password is optional. If you provide it, make sure to also provide username. | |
DatabaseSubscriber.SELECTStatement | String | SELECT statement to query the needed logs from a certain column of a certain table. Commas should be escaped by "\". The SELECT statement must return at least two columns. The first column of the result set of the SQL statement must contain the log message and must be of datatype STRING, and the second column must be a timestamp column which contains the creation date and time of the particular log message. All other columns of the result set are ignored. It must be ensured that the name of the timestamp column matches the name configured in the TimestampColumn property (see below). | x |
DatabaseSubscriber.TimestampColumn | String | Name of the column that contains the creation time of the record. It is appended as a WHERE condition to the above defined SQL SELECT statement. | x |
DatabaseSubscriber.DelayBetweenQueries | Integer | The polling interval in ms. Note: This parameter is deprecated. Please use PollingIntervalInSeconds instead. | | 10000
DatabaseSubscriber.PollingIntervalInSeconds | Integer | The polling interval in seconds | | 60
DatabaseSubscriber.LogCollectorName | String | Name of the log collector as it will appear in the semantic attribute TechnicalLogCollectorName. This indicates that the data was read from this database and can be useful if you want to distinguish the different log sources. | x |

Reference Configuration for the Database Subscriber Settings

The following example shows a possible configuration for the database subscriber fields with the associated values. You can
adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<DatabaseSubscribers>
<WorkingDirectory>./dbWorkingDirectory</WorkingDirectory>
<DatabaseSubscriber>
<Id>1</Id>
<Enabled>true</Enabled>
        <JDBCConnectionString>jdbc:sqlserver://dbServerName:4711;databaseName=db</JDBCConnectionString>
        <JDBCDriverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</JDBCDriverClassName>
<JDBCDriverJARPath>./private/mcsql.jar</JDBCDriverJARPath>
<DatabasePropertiesFile>./jdbc.properties</DatabasePropertiesFile>
<SELECTStatement>SELECT message\, timestamp FROM table</SELECTStatement>
<TimestampColumn>timestamp</TimestampColumn>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
<LogCollectorName>ETD_logCollector</LogCollectorName>

</DatabaseSubscriber>
</DatabaseSubscribers>

</LogCollectorConfiguration>

Reference Database Properties File for the Database Subscriber Settings

The following example shows a possible database properties file which can be used in the database subscriber configuration:

 Sample Code

user=admin
password=password

Splunk Subscriber Settings for the Log Collector


An adapter that regularly queries log messages from Splunk.

For more information about alert exchange between Splunk and SAP Enterprise Threat Detection, see SAP Enterprise Threat
Detection Integration with Splunk.

Settings | Type | Value | Mandatory
Enabled | Boolean | true or false |
LogCollectorName | String | Name of the log collector as it will appear in the semantic attribute TechnicalLogCollectorName. This indicates that the data was read using the Splunk subscriber and can be useful if you want to distinguish the different log sources. |
InstanceID | Integer | Unique ID (number) of the subscriber instance. If you have configured multiple Splunk subscribers, the instances need to keep track of the point in time up to which they have already read the data from Splunk. For this reason, each instance creates files to store its state. To make sure that the subscribers do not interfere with each other, they need this unique number, which will be part of the file names. | x
SplunkHost | String | Host name of the Splunk search head | x
SplunkPort | Integer | Port of the REST endpoint of the Splunk search head | x
SplunkQuery | String | Splunk query to fetch the required logs | x
SplunkUserName | String | User name | x
SplunkPassword | String | Password | x
WorkingDirectory | String | Directory where the subscriber instance stores its files to keep track of the point in time up to which it has already read the data from Splunk. The directory must be readable, writeable and executable by the application user. |
Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user. |
TruststorePass | String | Password of the truststore |
PollingIntervalInMilliseconds | Integer | The polling interval in ms defines how often the Splunk search head is asked for the status of a created search job in order to find out whether the search job has finished. Note: This parameter is deprecated. Please use SearchJobPollingIntervalInSeconds instead. |
SearchJobPollingIntervalInSeconds | Integer | The polling interval in seconds defines how often the Splunk search head is asked for the status of a created search job in order to find out whether the search job has finished. |
MaximumNumberOfSimultaneousRequests | Integer | Number of concurrent threads to query data from Splunk faster |
RequestDelayInMilliseconds | Integer | The interval in ms between two queries in order to define the schedule for how often Splunk should be queried (for example, for fetching data every 10 minutes, specify 600000). Note: This parameter is deprecated. Please use PollingIntervalInSeconds instead. | x
PollingIntervalInSeconds | Integer | The interval in seconds between two queries in order to define the schedule for how often Splunk should be queried (for example, for fetching data every 10 minutes, specify 600). |
RetroactiveIntervalWhenNoJobsFoundInSeconds | Integer | Defines what time frame to read from Splunk when the subscriber starts for the very first time (for example, the last one hour) |
MinimumSlowdownBetweenErrorsInMilliseconds | Integer | When an error occurs, the minimum number of milliseconds to wait until retry |
MaximumSlowdownBetweenErrorsInMinutes | Integer | When an error occurs, the maximum number of minutes to wait until retry |
RefreshSessionAfterXConsecutiveErrors | Integer | After this number of consecutive errors, try to log in to the Splunk search head again before the next retry. |
JobRequestTimeoutInMinutes | Integer | Timeout for the Splunk search job |
OnlyProcessJobsFoundInWorkingDirectory | Boolean | If you need to fetch a certain amount of data from Splunk again (for example, the last two days), you can manually create a job file in the working directory to define the time range to be read and set this property to true. Then the subscriber will only read the data from Splunk defined in that job file and will stop processing after it has finished. It will not create new job files for the current data like it does normally. |
MaximumResultsPerRequestPage | Integer | Batch size when fetching the search job results from Splunk |
ConnectTimeoutInMilliseconds | Integer | TCP connection timeout in milliseconds |

Reference Configuration for the Splunk Subscriber Settings

The following example shows a possible configuration for the Splunk Subscriber settings with the associated values. You can
adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<SplunkSubscribers>
<SplunkSubscriber>
<InstanceID>234</InstanceID>
<SplunkHost>splunkServer</SplunkHost>
<SplunkPort>123</SplunkPort>
<SplunkQuery>search x > 5</SplunkQuery>
<SplunkUserName>admin</SplunkUserName>
<SplunkPassword>password</SplunkPassword>
<WorkingDirectory>/opt/etd/lc/ConfigurationFiles</WorkingDirectory>
<Truststore>/opt/etd/lc/ConfigurationFiles/truststore</Truststore>
<SearchJobPollingIntervalInSeconds>5</SearchJobPollingIntervalInSeconds>
<MaximumNumberOfSimultaneousRequests>5</MaximumNumberOfSimultaneousRequests>
<PollingIntervalInSeconds>5</PollingIntervalInSeconds>
        <RetroactiveIntervalWhenNoJobsFoundInSeconds>10</RetroactiveIntervalWhenNoJobsFoundInSeconds>
        <MinimumSlowdownBetweenErrorsInMilliseconds>1000</MinimumSlowdownBetweenErrorsInMilliseconds>
        <MaximumSlowdownBetweenErrorsInMinutes>6</MaximumSlowdownBetweenErrorsInMinutes>
        <RefreshSessionAfterXConsecutiveErrors>5</RefreshSessionAfterXConsecutiveErrors>
        <JobRequestTimeoutInMinutes>3</JobRequestTimeoutInMinutes>
        <OnlyProcessJobsFoundInWorkingDirectory>false</OnlyProcessJobsFoundInWorkingDirectory>
<MaximumResultsPerRequestPage>500</MaximumResultsPerRequestPage>
<ConnectTimeoutInMilliseconds>600</ConnectTimeoutInMilliseconds>
</SplunkSubscriber>
</SplunkSubscribers>

</LogCollectorConfiguration>

SCP Audit Log Subscriber Settings for the Log Collector


A subscriber that connects to an SAP Business Technology Platform subaccount and regularly queries audit log messages. You
can define multiple SCP Audit Log subscribers for different subaccounts by repeating the XML section.

The subscriber regularly polls logs from the configured source. It gets all logs since the last polling. For the very first
execution, the logs from the last five minutes are read.

The configuration depends on whether you connect to SAP BTP, Cloud Foundry environment or SAP BTP, Neo environment.

For SAP BTP, Cloud Foundry environment, the parameters need to be configured as follows:

Settings | Type | Value | Mandatory | Default
WorkingDirectory | String | Path to directory that is used by the SCP Audit Log subscriber to store the LastTimestamp.txt file which contains the timestamp up to which the data has been read from the EntitySet. The directory must be readable, writeable and executable by the application user. Example: etdlogcollector. | x |
SCPSubAccount.Type | String | CF | x |
SCPSubAccount.Enabled | Boolean | True or false | x |
SCPSubAccount.UaaUrl | String | Value uaa.url from service key created for auditlog-management service | x |
SCPSubAccount.ClientId | String | Value uaa.clientid from service key created for auditlog-management service | x |
SCPSubAccount.ClientSecret | String | Value uaa.clientsecret from service key created for auditlog-management service | x |
SCPSubAccount.AuditLogUrl | String | Value url from service key created for auditlog-management service | x |
SCPSubAccount.Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user. | | System Truststore
SCPSubAccount.TruststorePass | String | Password of the truststore | |
SCPSubAccount.DelayBetweenRequests | Integer | The polling interval in minutes. Note: This parameter is deprecated. Please use PollingIntervalInSeconds instead. | | 5
SCPSubAccount.PollingIntervalInSeconds | Integer | The polling interval in seconds | | 60
SCPSubAccount.Proxy | | Proxy settings. The XML snippet for the proxy settings is optional. For more information, see Proxy Settings. | |
SCPSubAccount.Proxy.Enabled | String | True or false | |
SCPSubAccount.Proxy.Host | String | Proxy host | x |
SCPSubAccount.Proxy.Port | Integer | Proxy port | x |
SCPSubAccount.Proxy.Username | String | Username | |
SCPSubAccount.Proxy.Password | String | Password | |
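
The service key values referenced above can be created and displayed with the Cloud Foundry CLI; a minimal sketch, assuming an auditlog-management service instance named my-auditlog (instance and key names are examples):

cf create-service-key my-auditlog etd-log-collector-key
cf service-key my-auditlog etd-log-collector-key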

For SAP BTP, Neo environment, the parameters need to be configured as follows:

Settings | Type | Value | Mandatory | Default
WorkingDirectory | String | Path to directory that is used by the SCP Audit Log subscriber to store the LastTimestamp.txt file which contains the timestamp up to which the data has been read from the EntitySet. The directory must be readable, writeable and executable by the application user. Example: etdlogcollector. | x |
SCPSubAccount.Type | String | NEO | x |
SCPSubAccount.Enabled | Boolean | True or false | x |
SCPSubAccount.UaaUrl | String | https://api.<region_host>. For more information about the list of regions, see Regions and Hosts Available for the Neo Environment. Example: https://api.eu2.hana.ondemand.com | x |
SCPSubAccount.ClientId | String | Client ID from Audit Log Service API Client | x |
SCPSubAccount.ClientSecret | String | Client Secret from Audit Log Service API Client | x |
SCPSubAccount.AuditLogUrl | String | https://api.<region_host>. For more information about the list of regions, see Regions and Hosts Available for the Neo Environment. Example: https://api.eu2.hana.ondemand.com | x |
SCPSubAccount.Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user. | | System Truststore
SCPSubAccount.TruststorePass | String | Password of the truststore | |
SCPSubAccount.DelayBetweenRequests | Integer | The polling interval in minutes. Note: This parameter is deprecated. Please use PollingIntervalInSeconds instead. | | 5
SCPSubAccount.PollingIntervalInSeconds | Integer | The polling interval in seconds | | 60
SCPSubAccount.AccountId | String | The Account ID of the NEO Account | x |
SCPSubAccount.Proxy | | Proxy settings. The XML snippet for the proxy settings is optional. For more information, see Proxy Settings. | |
SCPSubAccount.Proxy.Enabled | String | True or false | |
SCPSubAccount.Proxy.Host | String | Proxy host | x |
SCPSubAccount.Proxy.Port | Integer | Proxy port | x |
SCPSubAccount.Proxy.Username | String | Username | |
SCPSubAccount.Proxy.Password | String | Password | |


Reference Configuration for the SCP Audit Log Subscriber Settings

The following example shows a possible configuration for the SCP Audit Log Subscriber settings with the associated values. You
can adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<SCPAuditLogs>
<WorkingDirectory>./scpAuditLogWorkingDirectory</WorkingDirectory>
<SCPSubAccount>
<Enabled>false</Enabled>
<Type>CF</Type>
<UaaUrl>https://p2354.authentication….</UaaUrl>
<ClientId>sb-622124a!b16|auditlog-manament!b66</ClientId>
<ClientSecret>VgnYOXAUPlm1f4urss=</ClientSecret>
<AuditLogUrl>https://auditlog…</AuditLogUrl>
<Truststore>truststore</Truststore>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
                        <!-- optional proxy if you want to selectively use a dedicated proxy -->
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</SCPSubAccount>
</SCPAuditLogs>

</LogCollectorConfiguration>

OData Subscriber Settings for the Log Collector


A connector that connects to an OData service in order to regularly poll log messages from an EntitySet. You can define
multiple OData subscribers by repeating the internal XML section 'ODataSubscriber' with all its properties.

Currently, the SAP Enterprise Threat Detection log collector supports OData versions 2 and 4.

The connector fetches logs created after the previous run or, in the case of a very first execution, from five minutes ago.
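
Conceptually, each polling cycle then issues a request of the following shape (a sketch with hypothetical service and property names, using the OData V2 datetime literal syntax; the exact URL produced by the subscriber may differ):

GET https://odata.server/relative/path/Logs?$filter=CreatedTimestamp gt datetime'2023-06-26T13:58:35'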

Settings | Type | Value | Mandatory
WorkingDirectory | String | Path to directory that is used by the OData subscriber to store the LastTimestamp.txt file which contains the timestamp up to which the data has been read from the EntitySet. The directory must be readable, writeable and executable by the application user. Example: etdlogcollector. | x
ODataSubscriber.Id | String | Any unique identifier of the OData subscriber instance to distinguish between the different ODataSubscriber instances. | x
ODataSubscriber.Enabled | Boolean | True or False. If false, the subscriber will not fetch data from this instance. |
ODataSubscriber.ODataVersion | String | Version of the OData service. Allowed values: V2, V4 |
ODataSubscriber.ServiceUrl | String | Fully qualified URL to the OData service. Example: https://odata.server/relative/path | x
ODataSubscriber.EntitySet | String | Name of the EntitySet from which data will be read | x
ODataSubscriber.DatetimeProperty | String | Name of the property in the EntitySet that contains the creation date and time of the record. It will be used in a $filter expression to fetch only new records. If the time is represented separately in another property, then use this field for the date value and TimeProperty for the time value. | x
ODataSubscriber.DatetimeFormat | String | Type of the DatetimeProperty defined in the EntityType of the EntitySet. Needed to construct the $filter expression. Edm.Int64 should be used when the date is presented in Epoch format. Supported values: see DateTime Reference Table | x
ODataSubscriber.TimeProperty | String | Name of the property in the EntitySet that represents the time of the record in Edm.Time type. This is only needed if your date and time are in separate columns. If your DatetimeProperty represents the whole timestamp, then the TimeProperty parameter should not be filled. |
ODataSubscriber.TimeFormat | String | Type of the TimeProperty defined in the EntityType of the EntitySet. Needed to construct the $filter expression. Supported values: see DateTime Reference Table | Required when TimeProperty was defined and ODataVersion is V4
ODataSubscriber.Selects.Select | String | Name of the property which will be used in $select in the request to the OData service. Should be used if you want to reduce the response size. DatetimeProperty and TimeProperty (if provided) must be present in the Select properties as well. For more information, see Select System Query Option OData V2 and Select System Query Option OData V4. |
ODataSubscriber.Expands.Expand | String | Name of the NavigationProperty which will be used in $expand in the request to the OData service. Should be used if you want to expand your EntitySet with another EntitySet via association. For more information, see Expand System Query Option OData V2 and Expand System Query Option OData V4. |
ODataSubscriber.Filter | String | Your custom filter expression to be used in $filter in the request to the OData service. For more information, see Filter System Query Option OData V2 and Filter System Query Option OData V4. |
ODataSubscriber.Authenticator | String | Authentication mechanism to be used by the subscriber. Supported values: X.509 (client certificate), Basic (username/password), OAuth (Bearer token). In case of OAuth usage, the Bearer token response must conform to the RFC 6750 standard. | x
ODataSubscriber.UaaUrl | String | Fully qualified URL of the UAA server | Required in case of OAuth authenticator
ODataSubscriber.Username | String | Username or ClientId | Required in case of Basic or OAuth authenticators
ODataSubscriber.Password | String | Password or ClientSecret | Required in case of Basic or OAuth authenticators
ODataSubscriber.Keystore | String | Path to the Java keystore containing the private key. The keystore must be readable by the application user. | Required in case of X.509 authenticator
ODataSubscriber.KeystorePass | String | Password for the Java keystore | Required in case of X.509 authenticator
ODataSubscriber.KeystoreAlias | String | Alias of the private key entry in the Java keystore | Required in case of X.509 authenticator
ODataSubscriber.Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user. | Required in case of X.509 authenticator
ODataSubscriber.TruststorePass | String | Password of the truststore |
ODataSubscriber.LogCollectorName | String | Name of the log collector as it will appear in the semantic attribute TechnicalLogCollectorName. This indicates that the data was read using the OData subscriber and can be useful if you want to distinguish the different log sources. | x
ODataSubscriber.DelayInMinutes | Integer | The polling interval in minutes. Note: This parameter is deprecated. Please use PollingIntervalInSeconds instead. |
ODataSubscriber.PollingIntervalInSeconds | Integer | The polling interval in seconds |
ODataSubscriber.MaxTimerangeInMinutes | Integer | Maximum time interval in minutes to be fetched per HTTP request to the OData service. This serves to reduce load when the last reading from the OData service was a long time ago. Example: The log collector did not work for 1 hour, and this parameter was configured with the value 5. In this case the log collector makes 12 requests at the next start, retrieving the logs 5 minutes at a time instead of issuing a single large request to retrieve all logs at once. This prevents possible problems with a huge load. |
ODataSubscriber.Proxy | | Proxy settings. The XML snippet for the proxy settings is optional. For more information, see Proxy Settings. |
ODataSubscriber.Proxy.Enabled | String | True or false |
ODataSubscriber.Proxy.Host | String | Proxy host | x
ODataSubscriber.Proxy.Port | Integer | Proxy port | x
ODataSubscriber.Proxy.Username | String | Username |
ODataSubscriber.Proxy.Password | String | Password |

DateTime Reference Table

OData Version | Supported Values for DatetimeFormat | Supported Values for TimeFormat
OData V2 | Edm.DateTime, Edm.DateTimeOffset, Edm.Int64 | Edm.Time
OData V4 | Edm.DateTimeOffset, Edm.Int64, Edm.Date | Edm.TimeOfDay, Edm.Duration

Reference Configuration for the OData Subscriber Settings

The following example shows a possible configuration for the OData Subscriber settings with the associated values. You can
adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<ODataSubscribers>
<WorkingDirectory></WorkingDirectory>
<ODataSubscriber>
<Id>ProductsODataLogServiceWithClientCertificate</Id>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<ODataVersion>V2</ODataVersion>
<ServiceUrl>https://OdataServerV2:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<Keystore>keystore.p12</Keystore>
<KeystorePass>password</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<Selects>
<Select>Name</Select>
<Select>Description</Select>
<Select>CreatedTimestamp</Select>
<Select>toCategories/Name</Select>
</Selects>
<Expands>
<Expand>toCategories</Expand>
</Expands>
<Filter>Price le 500 and Rating gt 4</Filter>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
                        <MaxTimerangeInMinutes>5</MaxTimerangeInMinutes>
                        <!-- optional proxy if you want to selectively use a dedicated proxy -->
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>

<ODataSubscriber>
<Id>ProductsODataLogServiceWithOAuth</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServerV2:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<UaaUrl>https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
                        <MaxTimerangeInMinutes>15</MaxTimerangeInMinutes>
</ODataSubscriber>

<ODataSubscriber>
<Id>DateAndTimeSeparated</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServerV2:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<TimeProperty>CreatedTime</TimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<TimeFormat>Edm.Time</TimeFormat>
<UaaUrl>https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
</ODataSubscriber>

<ODataSubscriber>
<Id>Version4</Id>
<Enabled>true</Enabled>
<Authenticator>Basic</Authenticator>
<ODataVersion>V4</ODataVersion>
<ServiceUrl>https://OdataServerV4:8090/services/Logs.svc/</ServiceUrl>
<EntitySet>Logs</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<DatetimeFormat>Edm.DateTimeOffset</DatetimeFormat>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
</ODataSubscriber>

<ODataSubscriber>
<Id>SomeODataV4Features</Id>
<Enabled>true</Enabled>
<Authenticator>Basic</Authenticator>
<ODataVersion>V4</ODataVersion>
<ServiceUrl>https://OdataServerV4:8090/services/Logs.svc/</ServiceUrl>
<EntitySet>Logs</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<DatetimeFormat>Edm.DateTimeOffset</DatetimeFormat>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>2</PollingIntervalInSeconds>
<Expands>
<Expand>Categories($select=Id,Name)</Expand>
</Expands>
<Selects>
                        <Select>Addresses($filter=startswith(City,'H');$orderby=City,Street)</Select>
<Select>Description</Select>
</Selects>
</ODataSubscriber>
</ODataSubscribers>

</LogCollectorConfiguration>

File Reader Settings for the Log Collector


A service that reads all files in a specified directory and considers each line as a log message. You can define multiple file
readers by repeating the XML section. After a file has been read completely, it is deleted.

 Note
With SAP Enterprise Threat Detection 2.0 SP06, the file reader settings for the log collector are deprecated. You can use the
directory reader settings instead. For more information, see Directory Reader Settings for the Log Collector.

Setting | Type | Value | Mandatory | Default
Enabled | Boolean | true or false | x |
DirectoryPath | String | Path of the observed directory. All files within this directory are read and then deleted. The directory must be readable, writeable and executable by the application user. | x |
LogCollectorName | String | Name of the log collector as it will appear in the semantic attribute TechnicalLogCollectorName. This indicates that the data was read with this file reader and can be useful if you want to distinguish the different log sources. | x |
PollingIntervalInSeconds | Integer | The polling interval in seconds | | 60

Reference Configuration for the File Reader Settings

The following example shows a possible configuration for the file reader settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<FileReaders>
<FileReader>
<Enabled>true</Enabled>
<DirectoryPath>./logFilesToRead</DirectoryPath>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
</FileReader>
</FileReaders>

</LogCollectorConfiguration>

Directory Reader Settings for the Log Collector


A service that reads files in the specified directory according to the defined file name pattern. Each line of the file is considered
as one log.

The directory reader expects a timestamp of the log in the line. By default, it expects it to be the first element in the log (default
position 0). If in your case the timestamp is not in the first position of the line, make sure to specify the position in the
configuration using the parameter DirectoryReader.TimestampPosition.

The service supports file rotation and reads only new lines of the file. You need to provide the location where the directory
reader stores the LastTimeStamp.txt file.

After a restart of the log collector, the configuration is checked, and if it was changed, the directory reader starts to read the files
from the beginning.

The following attributes are checked:

DirectoryPath

FilePattern

TimestampPattern

As soon as one of these attributes is changed, the information for the DirectoryReader stored in the LastTimestamp.txt
file is considered invalid and reset.

You can set up as many directory readers as you want, but one directory reader can only read one unique directory. This means
you cannot configure two or more directory readers to read from the same directory.

Setting | Type | Value | Mandatory | Default
WorkingDirectory | String | Path to the directory that is used by the directory reader to store the LastTimestamp.txt file. The directory must be readable, writeable and executable by the application user. | x |
DirectoryReader.Enabled | Boolean | true or false | | false
DirectoryReader.Id | String | Any unique identifier of the directory reader to distinguish between the different directory readers. | x |
DirectoryReader.DirectoryPath | String | Path of the observed directory. All files within this directory are read and then deleted. The directory must be readable, writeable and executable by the application user. | x |
DirectoryReader.LogCollectorName | String | Name of the log collector as it will appear in the semantic attribute TechnicalLogCollectorName. This indicates that the data was read using the directory reader and can be useful if you want to distinguish the different log sources. | x |
DirectoryReader.FilePattern | String | File name regex pattern to identify files to be read. It should be defined to handle only a limited scope of files instead of going through all of the files in the directory. | x |
DirectoryReader.TimestampPattern | String | Format of the timestamp that is present in the lines of the file. The timestamp pattern is specified in SimpleDateFormat. If set, the timestamps in the log are tried with this timestamp format. If not set, all timestamp formats supported by log learning are tried. For more information, see List of Custom Timestamp Formats. | |
DirectoryReader.TimestampPosition | Integer | Expected timestamp position in the log line. The directory reader checks this position and reads the timestamp from this position. Position starts at 0. | | 0
DirectoryReader.TimestampOverwriteTimezone | String | Time zone used in timestamp operations to parse or compare timestamps. This time zone is used if the timestamp does not contain time zone information. If you don't set a time zone in the configuration, the fallback default value is "UTC". | | UTC
DirectoryReader.MaxInitialTimestampInSeconds | Integer | Specifies how many seconds the directory reader goes back in time when it runs for the very first time to determine the logs to be considered. | | 300
DirectoryReader.MaxTimestampsInSeconds | Integer | Specifies how many seconds the directory reader goes back in time after a restart to determine the logs to be considered. | | 604800 (7 days)

Reference Configuration for the Directory Reader Settings

The following example shows a possible configuration for the directory reader settings with the associated values. You can
adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<DirectoryReaders>
<WorkingDirectory>/tmp</WorkingDirectory>
<DirectoryReader>
<Id>directory-reader-1</Id>
<Enabled>true</Enabled>
<DirectoryPath>/tmp/directory/to/read</DirectoryPath>
<LogCollectorName>DirectoryReaderLogCollector</LogCollectorName>
<FilePattern>.*\.log</FilePattern>
<TimestampPattern>yyyy.MM.dd HH:mm:ss\,SSSZ</TimestampPattern>
<TimestampPosition>0</TimestampPosition>
<TimestampOverwriteTimezone>UTC</TimestampOverwriteTimezone>
<MaxInitialTimestampInSeconds>300</MaxInitialTimestampInSeconds>
<MaxTimestampsInSeconds>604800</MaxTimestampsInSeconds>
</DirectoryReader>
</DirectoryReaders>

</LogCollectorConfiguration>
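
For example, with the TimestampPattern and TimestampPosition from the configuration above, a line that the directory reader can parse would look like this (sample content):

2023.06.26 13:58:35,123+0000 INFO Application started successfully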

Processing Settings for the Log Collector


The processing settings describe how the log collector processes and forwards logs. Logs can be forwarded to another log
collector using the HTTP Sender and, in parallel, they can be written directly into Kafka using the Kafka Ingestor.

The Processing section contains generic information about the processing of logs, regardless of the source of the log events.

The log collector name is added as information to each log event that is processed. It is available as an attribute in the forensic
lab. You can configure the log collector name as _default_, which means that it will automatically be detected based on the
hostname of the system where it is running. The log collector name can be overwritten in certain subscribers. Details can be
found in the specific sections.

The MaxLogLength is a hard upper limit on the length of a single log. Logs that are larger than this limit are discarded and not
processed any further. A warning message is logged in that case.

If there are any unprocessed logs within the internal queue of the log collector on shutdown, these logs are written
to a directory specified using the PersistentDirectory setting. Upon restart they are read again and processed first.

Settings | Type | Value | Mandatory | Default
LogCollectorName | String | Name of the log collector. Can be seen in the forensic lab in the semantic attribute TechnicalLogCollectorName and in the unrecognized logs UI in the Log Collector Name column. | x |
MaxLogLength | Integer | Maximum length/size of an incoming log. If beyond this threshold, the log will be discarded. If a log is discarded, a log entry is written with log level WARN. | | 32567
PersistentDirectory | String | Directory used to store unprocessed data when shutting down the log collector. The directory must be readable, writeable and executable by the application user. | | ./queue

Reference Configuration for the Processing Settings

The following example shows a possible configuration for the Processing settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<Processing>
<LogCollectorName>ETD_logCollector_generalName</LogCollectorName>
<MaxLogLength>32767</MaxLogLength>
<PersistentDirectory>./queue</PersistentDirectory>

<!-- KafkaIngestor config -->


<Kafka>
...
</Kafka>

    <!-- HTTP(S) Sender config ... -->


<HTTPSender>
...
</HTTPSender>
</Processing>

</LogCollectorConfiguration>

HTTP Sender Settings for the Log Collector


The HTTP Sender is used to forward logs to another log collector installed on-premise. The log collector can send logs only to
one destination via HTTP Sender.

Settings | Type | Value | Mandatory | Default
Enabled | Boolean | true or false | x |
Authenticator | String | Authentication method. Currently basic authentication ("basic") and client certificate authentication ("X.509") are supported. Client certificate authentication ("X.509") is only supported when using SSL and needs additional configuration. | x |
DestinationType | String | Type of target log collector. Must always have the value "OnPremise" | x |
BaseURL | String | Target server address, for example: http(s)://host:port | x |
Compressed | Boolean | Send the payload gzip encoded or as plain text. | | true
Batchsize | Integer | Number of logs that should be sent in a single batch. | | 1000
MaxLingerMs | Integer | Maximum time interval to wait until the next batch is sent, in milliseconds. | | 10000
Username | String | User name used to authenticate on the destination | Mandatory for basic authentication |
Password | String | Password used to authenticate on the destination | Mandatory for basic authentication |
Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user. | Mandatory if BaseURL is an HTTPS URL |
TruststorePass | String | Password of the truststore | Mandatory if BaseURL is an HTTPS URL |
Keystore | String | Path to the Java keystore containing the private key and trusted certificates. The keystore must be readable by the application user. | Mandatory for X.509 authentication |
KeystorePass | String | Password for the Java keystore | Mandatory for X.509 authentication |
KeystoreAlias | String | Alias of the private key entry in the Java keystore | Mandatory for X.509 authentication |
Proxy | | Proxy settings. The XML snippet for the proxy settings is optional. For more information, see Proxy Settings. | |
Proxy.Enabled | String | True or false | |
Proxy.Host | String | Proxy host | x |
Proxy.Port | Integer | Proxy port | x |
Proxy.Username | String | Username | |
Proxy.Password | String | Password | |

Reference Configuration for the HTTP Sender Settings

The following examples show possible configurations for the HTTP Sender settings with the associated values. You can adapt
this configuration in line with your specific needs when configuring the log collector.

HTTP Sender Configuration to Forward Logs to Another On-Premise Log Collector Using a Certificate-Based
Authentication Mechanism

<LogCollectorConfiguration>

<Processing>
<HTTPSender>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<DestinationType>OnPremise</DestinationType>
<BaseURL>https://local.logcollector.url/</BaseURL>
<Compressed>true</Compressed>
<Batchsize>1000</Batchsize>
<MaxLingerMs>5000</MaxLingerMs>
<Keystore>keystorePath</Keystore>
<KeystorePass>keystorePassword</KeystorePass>
<KeystoreAlias>keystoreAlias</KeystoreAlias>
<Truststore>truststorePath</Truststore>
<TruststorePass>truststorePassword</TruststorePass>
            <!-- optional proxy if you want to selectively use a dedicated proxy -->
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>

</Proxy>
</HTTPSender>
</Processing>

</LogCollectorConfiguration>

HTTP Sender Configuration to Forward Logs to Another On-Premise Log Collector Without SSL Using Basic
Authentication (Not Recommended)

<LogCollectorConfiguration>

<Processing>
<HTTPSender>
<Enabled>true</Enabled>
<Authenticator>basic</Authenticator>
<DestinationType>OnPremise</DestinationType>
<BaseURL>http://local.logcollector.url</BaseURL>
<Compressed>true</Compressed>
<Batchsize>1000</Batchsize>
<MaxLingerMs>5000</MaxLingerMs>
<Username>admin</Username>
<Password>password</Password>
</HTTPSender>
</Processing>

</LogCollectorConfiguration>

Related Information
Encrypting Sensitive Configuration Data in the Streaming Applications

Kafka Ingestor Settings for the Log Collector


A service that ingests data to a Kafka cluster. Only one can be configured.

| Settings | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Enabled | Boolean | true or false | x | |
| PropertiesFile | String | Path to the Kafka properties file. The file must be readable by the application user. | x | |
| Topics.Topic | XML | Properties for the topic. Can be repeated multiple times. | x | |
| Topic.Id | String | ID of the topic | x | |
| Topic.TargetTopicName | String | Name of a topic | x | |
| Topic.ThreadCount | Integer | Number of threads to be used per Kafka topic when ingesting data into Kafka. Ideally the number of threads corresponds to the number of partitions in the topic. The maximum number of threads that can be used is 100. | | 1 |

All Kafka topics the KafkaIngestor writes to have default names. Nevertheless, every topic name can be configured individually for special use cases. For more information about the Kafka topics, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.

Reference Configuration for the Kafka Ingestor Settings

The following example shows a possible configuration for the Kafka Ingestor settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>

<Processing>
<Kafka>
<LogCollector>
<Enabled>true</Enabled>
<PropertiesFile>config/lc.properties</PropertiesFile>
<Topics>
<Topic>
<Id>RTLogEventIn</Id>
<TargetTopicName>RTLogEventIn</TargetTopicName>
<ThreadCount>2</ThreadCount>
</Topic>
<Topic>
<Id>UnrecognizedLogsOutForReplication</Id>
<TargetTopicName>UnrecognizedLogsOutForReplication</TargetTopicName>
<ThreadCount>2</ThreadCount>
</Topic>
</Topics>
</LogCollector>
</Kafka>
</Processing>

</LogCollectorConfiguration>

BacklogQueue Settings for the Log Collector

If the Kafka broker or the HTTP sender endpoint isn't reachable or cannot process new logs, these logs can be stored in a backlog on the file system, so that they can be sent later, when the destination is available again.

Using the parameters in the table, you can configure how much disk space should be used. If the configured disk space is exhausted, the oldest logs are deleted automatically. For a brief period, more than the configured disk space might be used because only complete files are deleted.

| Settings | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Enabled | Boolean | true or false | | true |
| Directory | String | Directory name for the backlog queue. The directory must be readable, writeable, and executable by the application user. | | backlog |
| InMemoryElements | Integer | Number of elements to keep in memory | | 10000 |
| MaxFileSizeMB | Integer | Maximum size of an uncompressed file | | 10 |
| MaxFiles | Integer | Number of files to keep | | 10 |

Reference Configuration for the BacklogQueue Settings

The following example shows a possible configuration for the BacklogQueue settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the log collector.

<LogCollectorConfiguration>
<Processing>
<BacklogQueue>
<Enabled>true</Enabled>
<Directory>backlog</Directory>
<InMemoryElements>10000</InMemoryElements>
<MaxFileSizeMB>10</MaxFileSizeMB>
<MaxFiles>10</MaxFiles>
</BacklogQueue>
</Processing>
</LogCollectorConfiguration>
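As a rough sizing check for this example: the backlog can grow to about MaxFiles x MaxFileSizeMB = 10 x 10 MB = 100 MB of uncompressed log data on disk (briefly more, since only complete files are deleted), plus up to 10,000 elements held in memory.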

Rate Limiter Settings for the Log Collector


You can use the rate limiter to slow down clients that send more data than expected. This reduces the attack surface for attacks such as DoS attempts.

The rate limiter measures the total size of all requests that are sent by a single client, regardless of the connection type used. The client is identified by the source IP address that is used to connect to the log collector. Depending on your network configuration (load balancers, NAT devices, and so on), this may not be the actual IP address of the client. Therefore, you need to consider whether this feature is useful for you.

| Settings | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| SizeLimit.Enabled | Boolean | true or false | | true |
| SizeLimit.LimitForPeriod | Integer | The limit for the total size of all requests from a client within one period | x | 10000000 |
| SizeLimit.RefreshPeriod | Long | The duration of a period in milliseconds | x | 1000 |
| SizeLimit.TimeoutDuration | Long | Waiting time in milliseconds until the request is rejected | x | 5000 |

If you want to set up the rate limiter, you add the following section to the log collector configuration:

<LogCollectorConfiguration>

<RateLimiter>
<SizeLimit>
<Enabled>true</Enabled>
<LimitForPeriod>10000000</LimitForPeriod>
<RefreshPeriod>1000</RefreshPeriod>
<TimeoutDuration>5000</TimeoutDuration>
</SizeLimit>
</RateLimiter>

</LogCollectorConfiguration>

The rate limiter splits time into slices whose length is specified by the RefreshPeriod parameter (in milliseconds). For each period, a certain number of permissions (bytes) is available per client (the LimitForPeriod parameter). If a client wants to send data, the system checks whether the limit for the current period is already exhausted. If it is, the client must wait until the next period with enough free capacity. If waiting takes longer than the configured TimeoutDuration, the request is rejected and, depending on the protocol, an error message is returned to the client.
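As a worked example with the reference values above: each client may send at most 10,000,000 bytes (roughly 10 MB) per 1,000 ms period, that is, about 10 MB per second. A client that has already exhausted the current period must wait for a following period with free capacity; if that wait would exceed 5,000 ms, the request is rejected.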

Reference Configuration for the Log Collector

The following example of a log collector configuration includes all possible configuration fields with associated values. You can adapt this configuration in line with your specific needs when configuring the log collector.

<?xml version="1.0" encoding="utf-8"?>


<LogCollectorConfiguration>

<!-- Monitoring configs -->


<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Log Collector</Name>

<!-- If prometheus monitoring is used (for Grafana dashboard integration), which http port should be used to export the metrics -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7000</ExporterPort>
</Prometheus>
</Monitoring>

<!-- UDPListener configs ... -->


<UDPPorts>
<UDPPort>
<Enabled>true</Enabled>
<Port>5514</Port>
<ThreadCount>10</ThreadCount>
</UDPPort>
</UDPPorts>

<!-- TCPListener configs ... -->


<TCPPorts>
<TCPPort>
<Enabled>false</Enabled>

<Port>10514</Port>
<ThreadCount>100</ThreadCount>
<ThreadCountPerClient>8</ThreadCountPerClient>
<TcpFraming>OctetCounted</TcpFraming>
<ConnectionTimeoutInSeconds>90</ConnectionTimeoutInSeconds>
</TCPPort>
</TCPPorts>

<!-- TLSListener configs ... -->


<TLSPorts>
<TLSPort>
<Enabled>false</Enabled>
<Port>10443</Port>
<ThreadCountPerClient>8</ThreadCountPerClient>
<ThreadCount>100</ThreadCount>
<TcpFraming>LineBreak</TcpFraming>
<Keystore>keystore.p12</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<ClientAuth>true</ClientAuth>
<AllowedClientCertificates>
<Certificate>
<DN>CN=client1.test.de\,OU=ETD\,O=SAP\,C=DE</DN>
</Certificate>
<Certificate>
<DN>CN=client2.test.de\,OU=ETD\,O=SAP\,C=DE</DN>
</Certificate>
</AllowedClientCertificates>
<ConnectionTimeoutInSeconds>90</ConnectionTimeoutInSeconds>
</TLSPort>
</TLSPorts>

<!-- HTTPListener configs ... -->


<HTTPPorts>
<HTTPPort>
<Enabled>true</Enabled>
<Port>9093</Port>
<ThreadCount>25</ThreadCount>
<TokenValidity>250</TokenValidity>
<MaximumRequestSizeInMegabyte>10</MaximumRequestSizeInMegabyte>
<RetryAfterInSeconds>10</RetryAfterInSeconds>
<UseSSL>true</UseSSL>
<Keystore>keystore.jks</Keystore>
<KeystorePass>changeit</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Credentials>
<Credential>
<Username>user</Username>
<PasswordHash>7d0e… </PasswordHash>
</Credential>
<Credential>
<Username>ADMIN</Username>
<PasswordHash>72fe… </PasswordHash>
</Credential>
</Credentials>
<BruteForceSlowDown>
<Enabled>true</Enabled>
<MaxFailedAuthenticationsPerClient>3</MaxFailedAuthenticationsPerClient>
<AdditionalBlockingTimeForClients>30</AdditionalBlockingTimeForClients>
</BruteForceSlowDown>
</HTTPPort>
</HTTPPorts>

<Processing>
<LogCollectorName>ETD_logCollector_generalName</LogCollectorName>
<MaxLogLength>32767</MaxLogLength>
<BacklogQueue>
<Enabled>true</Enabled>
<Directory>backlog</Directory>
<InMemoryElements>10</InMemoryElements>
<MaxFileSizeMB>10</MaxFileSizeMB>
<MaxFiles>10</MaxFiles>
</BacklogQueue>

<!-- KafkaIngestor config -->
<Kafka>
<LogCollector>
<Enabled>true</Enabled>
<ConfigFileDirectory>/opt/lc/config/lc.properties</ConfigFileDirectory>
</LogCollector>
</Kafka>

<!-- HTTP(S) Sender config ... -->


<HTTPSender>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<DestinationType>Cloud</DestinationType>
<TenantID>ibsoetdcfsubacc</TenantID>
<BaseURL>https://my.cloud.logcollector.url</BaseURL>
<Compressed>true</Compressed>
<Batchsize>1000</Batchsize>
<MaxLingerMs>5000</MaxLingerMs>
<Username>admin</Username>
<Password>password</Password>
<Truststore>truststore</Truststore>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</HTTPSender>

<HTTPSender>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<DestinationType>OnPremise</DestinationType>
<BaseURL>https://local.logcollector.url</BaseURL>
<Compressed>true</Compressed>
<Batchsize>1000</Batchsize>
<MaxLingerMs>5000</MaxLingerMs>
<Keystore>keystore</Keystore>
<KeystorePass>password</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</HTTPSender>

</Processing>

<!-- KafkaSubscriber configs ... -->


<KafkaSubscribers>
<Kafka>
<Enabled>true</Enabled>
<ConfigFile>/opt/etd/logcollector/kafkaSubscriber/config.properties</ConfigFile>
<LogCollectorName>ETD_logCollector</LogCollectorName>
</Kafka>
</KafkaSubscribers>

<!-- FileReader configs ... -->


<FileReaders>
<FileReader>
<Enabled>true</Enabled>
<DirectoryPath>/opt/etd/logcollector/logFilesToRead</DirectoryPath>
<LogCollectorName>ETD_logCollector</LogCollectorName>
</FileReader>
</FileReaders>

<!-- DatabaseSubscriber configs ... -->


<DatabaseSubscribers>
<WorkingDirectory>/opt/etd/logcollector/dbWorkingDirectory</WorkingDirectory>
<DatabaseSubscriber>
<Id>1</Id>
<Enabled>true</Enabled>
<JDBCConnectionString>jdbc:sqlserver://dbServerName:4711;databaseName=db</JDBCConnectionString>

<JDBCDriverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</JDBCDriverClassName>
<Username>admin</Username>
<Password>password</Password>
<SELECTStatement>SELECT * FROM db</SELECTStatement>
<TimestampColumn>timestamp</TimestampColumn>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
<LogCollectorName>ETD_logCollector</LogCollectorName>
</DatabaseSubscriber>
</DatabaseSubscribers>

<!-- SCPAuditLog configs... -->


<SCPAuditLogs>
<WorkingDirectory>/opt/etd/logcollector/scpAuditLogWorkingDirectory</WorkingDirectory>
<SCPSubAccount>
<Enabled>false</Enabled>
<Type>CF</Type>
<UaaUrl>https://p2354.authentication….</UaaUrl>
<ClientId>sb-622124a!b16|auditlog-manament!b66</ClientId>
<ClientSecret>VgnYOXAUPlm1f4urss=</ClientSecret>
<AuditLogUrl>https://auditlog…</AuditLogUrl>
<PollingIntervalInSeconds>30</PollingIntervalInSeconds>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</SCPSubAccount>
</SCPAuditLogs>

<!-- SplunkSubscriber configs ... -->


<SplunkSubscribers>
<SplunkSubscriber>
<Enabled>true</Enabled>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<InstanceID>234</InstanceID>
<SplunkHost>splunkServer</SplunkHost>
<SplunkPort>123</SplunkPort>
<SplunkQuery>search x > 5</SplunkQuery>
<SplunkUserName>admin</SplunkUserName>
<SplunkPassword>password</SplunkPassword>
<WorkingDirectory>/opt/etd/logcollector/ConfigurationFiles</WorkingDirectory>
<PollingIntervalInMilliseconds>5000</PollingIntervalInMilliseconds>
<MaximumNumberOfSimultaneousRequests>5</MaximumNumberOfSimultaneousRequests>
<RequestDelayInMilliseconds>250</RequestDelayInMilliseconds>
<RetroactiveIntervalWhenNoJobsFoundInSeconds>10</RetroactiveIntervalWhenNoJobsFoundInSeconds>
<MinimumSlowdownBetweenErrorsInMilliseconds>1000</MinimumSlowdownBetweenErrorsInMilliseconds>
<MaximumSlowdownBetweenErrorsInMinutes>6</MaximumSlowdownBetweenErrorsInMinutes>
<RefreshSessionAfterXConsecutiveErrors>5</RefreshSessionAfterXConsecutiveErrors>
<JobRequestTimeoutInMinutes>3</JobRequestTimeoutInMinutes>
<OnlyProcessJobsFoundInWorkingDirectory>true</OnlyProcessJobsFoundInWorkingDirectory>
<MaximumResultsPerRequestPage>500</MaximumResultsPerRequestPage>
<ConnectTimeoutInMilliseconds>600</ConnectTimeoutInMilliseconds>
</SplunkSubscriber>
</SplunkSubscribers>

<ODataSubscribers>
<WorkingDirectory></WorkingDirectory>
<ODataSubscriber>
<Id>ProductsODataLogServiceWithClientCertificate</Id>
<Enabled>true</Enabled>
<Authenticator>X.509</Authenticator>
<ServiceUrl>https://OdataServer:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<Keystore>keystore.p12</Keystore>
<KeystorePass>password</KeystorePass>
<KeystoreAlias>alias</KeystoreAlias>
<Truststore>truststore</Truststore>
<Selects>
<Select>Name</Select>
<Select>Description</Select>
<Select>CreatedTimestamp</Select>
<Select>toCategories/Name</Select>

</Selects>
<Expands>
<Expand>toCategories</Expand>
</Expands>
<Filter>Price le 500 and Rating gt 4</Filter>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<DelayInMinutes>5</DelayInMinutes>
<MaxTimerangeInMinutes>5</MaxTimerangeInMinutes>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>
<ODataSubscriber>
<Id>ProductsODataLogServiceWithOAuth</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServer:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedTimestamp</DatetimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<UaaUrl> https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<DelayInMinutes>5</DelayInMinutes>
<MaxTimerangeInMinutes>15</MaxTimerangeInMinutes>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>
<ODataSubscriber>
<Id>DateAndTimeSeparated</Id>
<Enabled>true</Enabled>
<Authenticator>OAuth</Authenticator>
<ServiceUrl>https://OdataServer:8090/services/Products.svc/</ServiceUrl>
<EntitySet>Products</EntitySet>
<DatetimeProperty>CreatedDate</DatetimeProperty>
<TimeProperty>CreatedTime</TimeProperty>
<DatetimeFormat>Edm.DateTime</DatetimeFormat>
<UaaUrl> https://uaaServer:8010/auth</UaaUrl>
<Username>user</Username>
<Password>password</Password>
<LogCollectorName>ETD_logCollector</LogCollectorName>
<DelayInMinutes>5</DelayInMinutes>
<Proxy>
<Enabled>true</Enabled>
<Host>proxy.localdomain</Host>
<Port>3128</Port>
</Proxy>
</ODataSubscriber>
</ODataSubscribers>

</LogCollectorConfiguration>

Normalizer
The Normalizer reads logs from the Log Collector Kafka cluster and normalizes them, that is, it converts raw (unstructured) log data into structured, normalized events assigned to semantic events by applying log learning rules to unrecognized logs, and it enriches already normalized logs with additional information.

The output data is then stored in a second Kafka cluster, the Log Pre-Processor Kafka cluster. The Normalizer connects to HANA via a REST API in order to read the needed log learning rules.


Finalizing Installation for the Normalizer

Prerequisites
Checking Out Content from Delivery Unit

If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming Manually.

Procedure
1. Log in to the operating system as the root user.

2. Adapt the Kafka configuration.

To do so, go to /opt/etd/normalizer/config and make the necessary configuration in the following files:

lc.properties - file contains both consumer and producer properties for the log collector

lpp.properties - file contains both consumer and producer properties for the log pre-processor

a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.

b. If you don't use a password for the truststore, comment out the ssl.truststore.password property.
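For orientation, a minimal lc.properties for an SSL-enabled cluster might look like the following sketch; bootstrap.servers, security.protocol, and the ssl.truststore.* keys are standard Kafka client properties, and the host names and paths are placeholders:

# Kafka brokers of the log collector cluster (placeholder host names)
bootstrap.servers=kafka1.example.com:9092,kafka2.example.com:9092
# Encrypt the connection to the brokers
security.protocol=SSL
# Truststore with the CA certificate of your Kafka brokers
ssl.truststore.location=/opt/etd/normalizer/config/truststore.jks
ssl.truststore.password=changeit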

3. If you have installed SAP Enterprise Threat Detection Streaming manually, create the etd-normalizer systemd unit:

cp /opt/etd/normalizer/systemd/etd-normalizer.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-normalizer

If you have used the installation script, this has already been done by the system.

4. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:

chmod +x /opt/etd/normalizer/etd-normalizer.sh

If you have used the installation script, this has already been done by the system.

5. Start the normalizer application.

systemctl start etd-normalizer.service

6. Verify the installation:

a. Check the status of the systemd unit. The correct status is Running.

systemctl status etd-normalizer.service

b. Check the logs for etd-normalizer.service. The correct response is "-- No entries --".

journalctl -u etd-normalizer.service

c. Check the application logs (default location is /opt/etd/normalizer/logs). The correct result is that you don't get any entries.

grep ERROR /opt/etd/normalizer/logs/etd-normalizer.log


Configuration Description for the Normalizer

You can configure the various parameters for the Normalizer application via an XML file, enabling you to tailor the Normalizer to the specific needs of your landscape.

HANA REST Settings for the Normalizer

The normalizer needs a connection to the HANA database on which SAP Enterprise Threat Detection runs so that it can retrieve the necessary data.

The data is cached locally, so that it is available even if the connection to HANA fails.

| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Host | String | Host URL of the HANA database. Must be an HTTPS URL if Authenticator=X.509 | x | n/a |
| Authenticator | String | Authentication method, either "basic" or "X.509" | x | X.509 |
| AuthPropertiesFile | String | Path to the auth.properties file which contains user and password | If Authenticator=basic | config/auth.properties |
| Truststore | String | Path to the Java truststore file containing trusted certificates. The truststore must be readable by the current user. | If Host is an HTTPS URL | n/a |
| TruststorePass | String | Password of the truststore | | |
| Keystore | String | Path to the Java keystore file containing the private key. The keystore must be readable by the current user. | If Authenticator=X.509 | n/a |
| KeystorePass | String | Password for the Java keystore | If Authenticator=X.509 | n/a |
| KeystoreAlias | String | Alias of the private key entry in the Java keystore | If Authenticator=X.509 | n/a |

Reference Configuration for the HANA REST Settings

The following example shows a possible configuration for the HANA REST settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the normalizer.

<?xml version="1.0" encoding="utf-8"?>

<NormalizerConfiguration>
<!-- Reading entries from REST API on HANA -->

<HANA>
<REST>
<Host>https://host:port </Host>
<Authenticator>X.509</Authenticator>
<AuthPropertiesFile>config/auth.properties</AuthPropertiesFile>
<Truststore>config/truststore</Truststore>
<Keystore>config/keystore.p12</Keystore>
<KeystorePass>VgnYOXAUPlm1f4urss=</KeystorePass>
<KeystoreAlias>normalizer</KeystoreAlias>
</REST>
</HANA>

</NormalizerConfiguration>

Kafka Settings for the Normalizer


The normalizer reads data from the log collector Kafka and writes it to the log preprocessor Kafka.

| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| LogCollector.PropertiesFile | String | Path to Kafka consumer and producer properties file. The file must be readable by the application user. | x | config/lc.properties |
| LogPreProcessor.PropertiesFile | String | Path to Kafka producer and consumer properties file. The file must be readable by the application user. | x | config/lpp.properties |

Reference Configuration for the Kafka Settings

The following example shows a possible configuration for the Kafka settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the normalizer.
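<NormalizerConfiguration>

<Kafka>
<LogCollector>
<!-- File name of consumer and producer properties file for
connecting to log collector Kafka -->
<PropertiesFile>config/lc.properties</PropertiesFile>
</LogCollector>

<LogPreProcessor>
<!-- File name of consumer and producer properties file for
connecting to log preprocessor Kafka -->
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>

</NormalizerConfiguration>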

Threading Settings for the Normalizer

Performance and resource consumption can be adjusted by configuring the number of parallel threads. The default values are sensible for systems that run the normalizer exclusively; you might adjust them if you run multiple applications with a high workload on the same host.

| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Parsers | Integer | Number of threads that will be used to parse logs simultaneously. If not provided or "-1", the number of threads is calculated based on the number of CPU cores. | | -1 |
| Enrichers | Integer | Number of threads that will be used to enrich logs. If not provided or "-1", the number of threads is calculated based on the number of CPU cores. | | -1 |

Reference Configuration for the Threading Settings

The following example shows a possible configuration for the Threading settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the normalizer.

<NormalizerConfiguration>

<!-- Threads count configs. "-1" will be based on available processors -->

<Threading>
<Parsers>-1</Parsers>
<Enrichers>-1</Enrichers>
</Threading>

</NormalizerConfiguration>

Processing Settings for the Normalizer


| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| TimestampFormatSupport | String | Available values are: ALL, WITH_TIMEZONE_AND_YEAR_ONLY. If ALL is selected, all timestamp formats are tried, even formats that are not fully specified; extraction may be ambiguous if year or timezone are missing. If WITH_TIMEZONE_AND_YEAR_ONLY is selected, logs without timezone or year will not be recognized. | x | ALL |
| MaxLogLength | Integer | Maximum length/size of an incoming log. If beyond this threshold, the log is discarded. If a log is discarded, a log entry is written with log level WARN. | | 32267 |
| DHCPEnrichmentEnabled | Boolean | true or false. Set to true if you want to enrich the normalized logs with MAC addresses based on IP-to-MAC address assignments learned from a connected DHCP server. For more information about DHCP enrichment, see Enabling DHCP Enrichment to Show MAC Addresses in Forensic Lab. | | false |
| UsernameMasking.Enabled | Boolean | true or false. Set to true if you want to mask all usernames that occur in attributes other than the username attribute and that match the configurable regex defined in the UsernameMasking.Regex setting. This masking is done in addition to the pseudonymization of the usernames in the semantic attributes specific to user accounts. For more information, see Username Masking. | | false |
| UsernameMasking.Regex | String | The regular expression a username must match in order to be masked. | | [a-zA-Z0-9]{3} |
| LocalStorageDirectory | String | Directory in which the normalizer saves and reads the received runtime rules, DHCP assignments, subnets, and gateway log struct lines. The directory must be readable, writeable, and executable by the application user. | | cache |

Reference Configuration for the Processing Settings

The following example shows a possible configuration for the Processing settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the normalizer.

<NormalizerConfiguration>

<Processing>
<!--ALL, WITH_TIMEZONE_AND_YEAR_ONLY -->
<TimestampFormatSupport>ALL</TimestampFormatSupport>
<MaxLogLength>32267</MaxLogLength>
<DHCPEnrichmentEnabled>false</DHCPEnrichmentEnabled>
<UsernameMasking>
<Enabled>true</Enabled>
<Regex>[a-zA-Z0-9]{3}</Regex>
</UsernameMasking>
<LocalStorageDirectory>/opt/etd/normalizer/cache/</LocalStorageDirectory>
</Processing>

</NormalizerConfiguration>

Formatting Settings for the Normalizer

You can add formatters to the normalizer, which can preprocess logs. For more information, see Formatters.

Each unrecognized log is checked against the specified regular expression. If the expression matches, the specified formatter class is called, which can reformat the contents of the log message into a different format. Some formatters are available within our delivery; additional formatters can be added manually.

| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Formatter.Enabled | Boolean | true or false – enable or disable this formatter | x | |
| Formatter.Regex | String | Regex that should match the log line | x | |
| Formatter.FormatterClassname | String | Class name of the formatter that should be executed | x | |

Reference Configuration for the Formatting Settings

The following example shows a possible configuration for the Formatting settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the normalizer.

<NormalizerConfiguration>

<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* CEF: ?0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.CEFFormatter</FormatterClassName>
</Formatter>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* LEEF: ?[1-2]\.0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.LEEFFormatter</FormatterClassName>
</Formatter>
</Formatting>

</NormalizerConfiguration>

Kafka Topics for the Normalizer


Configure the topics from which the normalizer should read and to which it should write.

For more information, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.

| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Topic.Id | String | ID of the topic | x | |
| Topic.TopicName | String | Kafka topic name for the logs | x | |
| Topic.ThreadCount | Integer | Number of threads to start to consume data from this topic. Note that you cannot use more threads than you have partitions configured for this Kafka topic. We recommend using as many threads as there are partitions for the topic. | | 1 |
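To find out how many partitions a topic currently has (and therefore a sensible ThreadCount), you can use the standard kafka-topics tool shipped with Kafka; the topic name and broker address below are placeholders:

kafka-topics.sh --describe --topic SID-RTLogEventIn --bootstrap-server kafka1.example.com:9092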

Reference Configuration for the Kafka Topics

The following example shows a possible configuration for the Kafka Topics with the associated values. You can adapt this configuration in line with your specific needs when configuring the normalizer.

<NormalizerConfiguration>

<Topics>
<!-- Log Collector - Input -->
<Topic>
<Id>LogCollectorNormalized</Id>
<TopicName>SID-RTLogEventIn</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
<Topic>
<Id>LogCollectorUnrecognized</Id>
<TopicName>SID-UnrecognizedLogsOutForReplication</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>

<!-- Log PreProcessor - Output -->


<Topic>
<Id>LogPreProcessorNormalized</Id>
<TopicName>SID-NormalizedDataOut</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
<Topic>
<Id>LogPreProcessorUnrecognized</Id>
<TopicName>SID-unrecognized</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
<Topic>
<Id>LogPreProcessorNewUserSystemData</Id>
<TopicName>SID-NewUserContextSystemData</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
<Topic>
<Id>LogPreProcessorDHCPIPAssignHANADBOut</Id>
<TopicName>SID-DHCPIPAssignHANADBOut</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
<Topic>
<Id>LogPreProcessorDHCPIPAssignDBHistory</Id>
<TopicName>SID-DHCPIPAssignDBHistory</TopicName>
<ThreadCount>1</ThreadCount>
</Topic>
</Topics>

</NormalizerConfiguration>


Reference Configuration for the Normalizer

The following example of a normalizer configuration includes all possible configuration fields with associated values. You can adapt this configuration in line with your specific needs when configuring the normalizer.

The example assumes that you used "SID" as the value for "SIDPlaceholder" in placeholders.txt.

<?xml version="1.0" encoding="utf-8"?>

<NormalizerConfiguration>

<!-- Monitoring configs -->


<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Normalizer</Name>

<!-- If prometheus monitoring is used (for Grafana dashboard integration),


which http port should be used to export the metrics -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7000</ExporterPort>
</Prometheus>
</Monitoring>

<!-- Reading entries from REST API on HANA -->


<HANA>
<REST>
<Host>https://host:port </Host>
<Authenticator>X.509</Authenticator>
<AuthPropertiesFile>config/auth.properties</AuthPropertiesFile>
<UseSSL>true</UseSSL>
<Truststore>config/truststore</Truststore>
<Keystore>config/keystore.p12</Keystore>
<KeystorePass>VgnYOXAUPlm1f4urss=</KeystorePass>
<KeystoreAlias>normalizer</KeystoreAlias>
</REST>
</HANA>

<!-- Kafka configs -->


<Kafka>
<LogCollector>
<!-- File name of consumer and producer properties file for
connecting to log collector Kafka -->
<PropertiesFile>lc.properties</PropertiesFile>
</LogCollector>

<LogPreProcessor>
<!-- File name of consumer and producer properties file for
connecting to log preprocessor Kafka -->
<PropertiesFile>lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>


<!-- Threads count configs. "-1" will be based on available processors -->
<Threading>
<Parsers>-1</Parsers>
<Enrichers>-1</Enrichers>
</Threading>

<!-- Processing configs -->


<Processing>
<!--ALL, WITH_TIMEZONE_AND_YEAR_ONLY -->
<TimestampFormatSupport>ALL</TimestampFormatSupport>
<MaxLogLength>32267</MaxLogLength>
<DHCPEnrichmentEnabled>false</DHCPEnrichmentEnabled>
</Processing>

<!-- Custom formatters for logs configs -->


<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* CEF: ?0\|.*</Regex>
<FormatterClassName>com.sap.etd.normalizer.processing.formatting.CEFFormatter</Formatte
</Formatter>

<Formatter>
<Enabled>true</Enabled>
<Regex>.* LEEF: ?[1-2]\.0\|.*</Regex>
<FormatterClassName>com.sap.etd.normalizer.processing.formatting.LEEFFormatter</Formatt
</Formatter>
</Formatting>

<!-- Topics configs -->


<Topics>
<!-- Log Collector - Input -->
<Topic>
<Id>LogCollectorNormalized</Id>
<TopicName>SID-RTLogEventIn</TopicName>
</Topic>
<Topic>
<Id>LogCollectorUnrecognized</Id>
<TopicName>SID-UnrecognizedLogsOutForReplication</TopicName>
</Topic>

<!-- Log PreProcessor - Output -->


<Topic>
<Id>LogPreProcessorNormalized</Id>
<TopicName>SID-NormalizedDataOut</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorUnrecognized</Id>
<TopicName>SID-unrecognized</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorLog4jHANAOut</Id>
<TopicName>SID-Log4jHANAOut</TopicName>

</Topic>
<Topic>
<Id>LogPreProcessorNewUserSystemData</Id>
<TopicName>SID-NewUserContextSystemData</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorPingFromESPDerivedStream</Id>
<TopicName>SID-PingFromESPDerivedStream</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorDHCPIPAssignHANADBOut</Id>
<TopicName>SID-DHCPIPAssignHANADBOut</TopicName>
</Topic>
<Topic>
<Id>LogPreProcessorDHCPIPAssignDBHistory</Id>
<TopicName>SID-DHCPIPAssignDBHistory</TopicName>
</Topic>
</Topics>

</NormalizerConfiguration>

Transporter
Similar to the Normalizer, the Transporter reads data from the Log Collector Kafka cluster and stores it in the Log Pre-
Processor Kafka cluster. Its job is to process any data that does not require further normalization or enrichment, such as ABAP
master data or pings.

Finalizing Installation for the Transporter

Prerequisites
Checking Out Content from Delivery Unit

If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming Manually.

Procedure
1. Log in to the operating system as the root user.

2. Adapt the Kafka configuration.

To do so, go to /opt/etd/transporter/config and make the necessary configuration in the following files:

lpp.properties - file contains both consumer and producer properties for the log pre-processor

lc.properties - file contains both consumer and producer properties for the log collector

a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.

b. If you don't use a password for the truststore, comment out the ssl.truststore.password property.

3. If you have installed SAP Enterprise Threat Detection Streaming manually, create the etd-transporter systemd unit:


cp /opt/etd/transporter/systemd/etd-transporter.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-transporter

If you have used the installation script, this has already been done by the system.

4. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:

chmod +x /opt/etd/transporter/etd-transporter.sh

If you have used the installation script, this has already been done by the system.

5. Start the transporter application.

systemctl start etd-transporter.service

6. Verify the installation:

a. Check the status of the systemd unit. The correct status is Running.

systemctl status etd-transporter.service

b. Check the logs for etd-transporter.service. The correct response is "-- No entries --".

journalctl -u etd-transporter.service

c. Check the application logs (default location is /opt/etd/transporter/logs). The correct result is that you
don’t get any entries.

grep ERROR /opt/etd/transporter/logs/etd-transporter.log

Configuration Description for the Transporter

You can configure the various parameters for the transporter application via an XML file, enabling you to tailor the transporter to the specific needs of your landscape.

Kafka Settings for the Transporter


| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| LogCollector.PropertiesFile | String | Path to Kafka consumer and producer properties file. The file must be readable by the application user. | x | config/lc.properties |
| LogPreProcessor.PropertiesFile | String | Path to Kafka consumer and producer properties file. The file must be readable by the application user. | x | config/lpp.properties |

Reference Configuration for the Kafka Settings

The following example shows a possible configuration for the Kafka settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the transporter.

<TransporterConfiguration>

<Kafka>
<LogCollector>
<!-- File name of consumer and producer properties file for
connecting to log collector Kafka -->
<PropertiesFile>config/lc.properties</PropertiesFile>
</LogCollector>

<LogPreProcessor>
<!-- File name of consumer and producer properties file for
connecting to log preprocessor Kafka -->
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>

</TransporterConfiguration>

Kafka Topics for the Transporter


| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Topic.Enabled | Boolean | true or false – enable or disable this route | x | true |
| Topic.SourceKafka | String | Source Kafka from which to read logs. Use the constants LogCollector or LogPreProcessor | x | LogCollector |
| Topic.TargetKafka | String | Target Kafka to which to transport logs. Use the constants LogCollector or LogPreProcessor | x | LogPreProcessor |
| Topic.SourceTopicName | String | Source topic name | x | |
| Topic.TargetTopics | XML Array | List of XML objects. | | |
| TargetTopic.TargetTopicName | String | Name of a topic | x | |
| TargetTopic.ConverterClassName | String | Name of the class used to convert the data before publishing to the target topic | | |
For more information, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.

Reference Configuration for the Kafka Topics

The following example shows a possible configuration for the Kafka Topics with the associated values. You can adapt this configuration in line with your specific needs when configuring the transporter.

<Topics>
<Topic>
<!-- Route can be enabled or disabled -->
<Enabled>true</Enabled>

<!-- Source Kafka, use constants LogCollector or LogPreProcessor -->


<SourceKafka>LogCollector</SourceKafka>

<!-- Target Kafka, use constants LogCollector or LogPreProcessor -->


<TargetKafka>LogPreProcessor</TargetKafka>

<!-- Source Topic Name -->


<SourceTopicName>{LandscapeID}-HRData</SourceTopicName>

<!-- List of target topics, at least one -->


<TargetTopics>
<TargetTopic>
<!-- Target Topic Name -->
<TargetTopicName>{LandscapeID}-UserDataConverted</TargetTopicName>
<!-- Name of class to convert the data before publishing to target topic (optional) -->
<ConverterClassName>com.sap.etd.transporter.converter.UserHRDataConverter</ConverterClassName>
</TargetTopic>
</TargetTopics>
</Topic>
</Topics>

Reference Configuration for the Transporter

The following example of a Transporter configuration includes all possible configuration fields with associated values. You can adapt this configuration in line with your specific needs when configuring the Transporter.

<?xml version="1.0" encoding="utf-8"?>

<TransporterConfiguration>

<!-- Monitoring configs -->


<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Transporter</Name>

<!-- If prometheus monitoring is used (for Grafana dashboard integration), which http port should be used to export the metrics -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7002</ExporterPort>
</Prometheus>
</Monitoring>

<!-- Kafka configs -->


<Kafka>
<LogCollector>
<!-- File name of consumer and producer properties file for
connecting to log collector Kafka -->

<PropertiesFile>lc.properties</PropertiesFile>
</LogCollector>

<LogPreProcessor>
<!-- File name of consumer and producer properties file for
connecting to log preprocessor Kafka -->
<PropertiesFile>lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>

<!-- Topics to be transported from source to target. The Topic tag can be repeated as many times as needed -->
<Topics>
<Topic>
<!-- Route can be enabled or disabled -->
<Enabled>true</Enabled>

<!-- Source Kafka, use constants LogCollector or LogPreProcessor -->


<SourceKafka>LogCollector</SourceKafka>

<!-- Target Kafka, use constants LogCollector or LogPreProcessor -->


<TargetKafka>LogPreProcessor</TargetKafka>

<!-- Source Topic Name -->


<SourceTopicName>SID-PingFromSystemIn</SourceTopicName>

<!-- List of target topics, at least one -->


<TargetTopics>
<TargetTopic>
<!-- Target Topic Name -->
<TargetTopicName>SID-PingFromSystemIn</TargetTopicName>
</TargetTopic>
</TargetTopics>
</Topic>
...
</Topics>

</TransporterConfiguration>

HANA Writer
The HANA Writer reads all relevant data from the Log Pre-Processor Kafka cluster and writes it into SAP HANA database tables to make the logs and master data available for SAP Enterprise Threat Detection. It also performs content replication, which allows you to replicate content between different instances of SAP Enterprise Threat Detection (such as development, testing, and production).

 Note
Please be aware that the technical name of the HANA Writer application is kafka_2_hana.

Finalizing Installation for the HANA Writer



Prerequisites
Checking Out Content from Delivery Unit

If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming Manually.

Procedure
1. Log in to the operating system as the root user.

2. Adapt the Kafka configuration.

To do so, go to /opt/etd/kafka_2_hana/config and make the necessary configuration in the following file:

lpp.properties

a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.

b. If you don't use a password for the truststore, comment out the ssl.truststore.password property.

3. Create the ETD_DATA_COMMITTER user according to the information in the following topic:

Creating Users and Assigning Authorizations

4. Check the parameters in jdbc.properties, located in the /opt/etd/kafka_2_hana/config folder.

This file contains the parameters needed to connect and write data to SAP HANA.

In this configuration file you use the user and password created as described under Creating Users and Assigning Authorizations. The other parameters are described in the SAP HANA Security Guide under Client-Side TLS/SSL Connection Properties (JDBC).

By default, TLS/SSL encryption is enabled. In this case you also need to make additional configuration on the SAP HANA server side; this is described in the SAP HANA Security Guide under TLS/SSL Configuration on the SAP HANA Server.

If you don't use SSL, set the properties encrypt and validateCertificate in the jdbc.properties file to false.
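As an illustration, a jdbc.properties along these lines would match the description above; the credentials are those of the ETD_DATA_COMMITTER user, encrypt and validateCertificate are the TLS/SSL-related JDBC connection properties mentioned above, and all values are placeholders:

# Credentials of the ETD_DATA_COMMITTER user
user=ETD_DATA_COMMITTER
password=<password>
# TLS/SSL settings; set both to false if you don't use SSL
encrypt=true
validateCertificate=true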

5. If you have installed SAP Enterprise Threat Detection Streaming manually, create the etd-kafka_2_hana systemd unit:

cp /opt/etd/kafka_2_hana/systemd/etd-kafka_2_hana.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-kafka_2_hana

If you have used the installation script, this has already been done by the system.

6. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:

chmod +x /opt/etd/kafka_2_hana/etd-kafka_2_hana.sh

If you have used the installation script, this has already been done by the system.

7. Start the kafka_2_hana application.

systemctl start etd-kafka_2_hana.service

8. Verify the installation:

a. Check the status of the systemd unit. The correct status is Running.

systemctl status etd-kafka_2_hana.service

b. Check the logs for etd-kafka_2_hana.service. The correct response is "-- No entries --".

journalctl -u etd-kafka_2_hana.service

c. Check the application logs (default location is /opt/etd/kafka_2_hana/logs). The correct result is that you
don’t get any entries.

grep ERROR /opt/etd/kafka_2_hana/logs/etd-kafka_2_hana.log

Configuration Description for the HANA Writer

You can configure the various input and output channels of the HANA writer via an XML file, enabling you to tailor the HANA writer to the specific needs of your landscape. You need to configure which Kafka topics should be read and written to the database and which Kafka topic is used for the content replication.

Shutdown Settings for the HANA Writer

| Settings | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| TimeOutInMinutes | Integer | Time limit in minutes for how long the application waits, from the time it receives the stop signal, until all processing is finished. After that time the application is killed. | | 10 |

Reference Configuration for the Shutdown Settings

The following example shows a possible configuration for the Shutdown settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the HANA Writer.

<?xml version="1.0" encoding="utf-8"?>

<Kafka2HanaConfiguration>

<Shutdown>
<TimeOutInMinutes>10</TimeOutInMinutes>
</Shutdown>

</Kafka2HanaConfiguration>

HANA Settings for the HANA Writer


| Settings | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| JDBCUrl | String | HANA JDBC string | x | |
| JDBCPropertiesFile | String | Path to the jdbc.properties file, where user and password should be defined, and optionally SSL-related configuration. The file must be readable by the application user. | x | config/jdbc.properties |
| MaxCommitInterval | Integer | How long a batch waits until a commit is executed, in milliseconds. | | 1000 |

Reference Configuration for the HANA Settings

The following example shows a possible configuration for the HANA settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the HANA Writer.

<?xml version="1.0" encoding="utf-8"?>

<Kafka2HanaConfiguration>

<HANA>
<JDBCUrl>jdbc:sap://host:port</JDBCUrl>
<JDBCPropertiesFile>config/jdbc.properties</JDBCPropertiesFile>
<MaxCommitInterval>1000</MaxCommitInterval>
</HANA>

</Kafka2HanaConfiguration>

MaxInternalQueueSize for the HANA Writer

| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| MaxInternalQueueSize | Integer | The maximum size of the queues within the application that store logs. When a queue is full, the application stops fetching new logs. | | 32768 |

Reference Configuration for the MaxInternalQueueSize Setting

The following example shows a possible configuration for the MaxInternalQueueSize setting with the associated values. You can adapt this configuration in line with your specific needs when configuring the HANA Writer.

<?xml version="1.0" encoding="utf-8"?>

<Kafka2HanaConfiguration>

<MaxInternalQueueSize>32768</MaxInternalQueueSize>

</Kafka2HanaConfiguration>


Kafka Settings for the HANA Writer


| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| LogPreProcessor.PropertiesFile | String | Path to the Kafka properties file. The file must be readable by the application user. | x | config/lpp.properties |

Reference Configuration for the Kafka Settings

The following example shows a possible configuration for the Kafka settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the HANA Writer.

<Kafka2HanaConfiguration>
<Kafka>
<LogPreProcessor>
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>
</Kafka2HanaConfiguration>

Kafka Topics for the HANA Writer


| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| LogEvents | XML Array | Contains the XML object. For a description, see LogEvents Settings for the HANA Writer. | x | |
| ContentReplication | XML Object | XML properties for content replication. For a description, see ContentReplication Settings for the HANA Writer. | | |
| Topic | XML Object | XML properties for a topic. Can be repeated multiple times. For a description, see Topic Settings for the HANA Writer. | x | |

LogEvents Settings for the HANA Writer


| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Normalized.EnableNormalized | Boolean | true or false – enable or disable reading normalized logs | x | true |
| Normalized.EnableOriginal | Boolean | true or false – enable or disable reading original logs | x | true |
| Normalized.SourceTopicName | String | Name of Kafka topic | x | SID-NormalizedDataOut |
| Normalized.BatchSize | Integer | Number of logs per batch | x | 1000 |
| Normalized.ThreadCount | Integer | Number of threads which will process normalized logs | x | 2 |
| Unrecognized.Enable | Boolean | true or false – enable or disable reading unrecognized logs | x | true |
| Unrecognized.SourceTopicName | String | Name of Kafka topic | x | SID-unrecognized |
| Unrecognized.BatchSize | Integer | Number of logs per batch | x | 1000 |
| Unrecognized.ThreadCount | Integer | Number of threads which will process unrecognized logs | x | 2 |

Reference Configuration for the Log Events Setting

The following example shows a possible configuration for the Log Events setting with the associated values. You can adapt this configuration in line with your specific needs when configuring the HANA Writer.

<?xml version="1.0" encoding="utf-8"?>

<Kafka2HanaConfiguration>

<Topics>
<LogEvents>
<Normalized>
<EnabledNormalized>true</EnabledNormalized>
<EnabledOriginal>true</EnabledOriginal>
<SourceTopicName>SID-NormalizedDataOut</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Normalized>

<Unrecognized>
<Enabled>true</Enabled>
<SourceTopicName>SID-unrecognized</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Unrecognized>
</LogEvents>
</Topics>

</Kafka2HanaConfiguration>

ContentReplication Settings for the HANA Writer

The content replication is used to replicate data between different instances of SAP Enterprise Threat Detection (such as development, testing, and production).

One of the HANA Writer instances, configured to write data into the source SAP Enterprise Threat Detection database, reads the data that is supplied in the UI and publishes it on the configured Source Topic.

Each consumer group that is subscribed to the Source Topic receives this data once and checks whether it is the specified target system. If so, it processes the data and writes it to the configured HANA database. Data that is not addressed to this system is ignored.

Therefore, it is important to configure the HANA Writers correctly:

SAP Enterprise Threat Detection systems that should be able to exchange data must have the same Source Topic and the same Kafka servers configured.

All HANA Writers that write to the same HANA database must have the same consumer group configured (group.id in the lpp.properties file).

HANA Writers that write to a different HANA database must have different consumer groups configured (see the sketch after this list).
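For example, a minimal consumer-group entry in lpp.properties might look like this; group.id is the standard Kafka consumer group property, and the value is a placeholder that must be shared by all HANA Writers writing to the same HANA database:

# Same value for all HANA Writers writing to this HANA database
group.id=etd-hana-writer-prod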

| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Enable | Boolean | true or false – enable or disable content replication | x | true |
| SourceTopicName | String | Name of Kafka topic | x | ContentReplication |

Reference Configuration for the Content Replication Settings

The following example shows a possible configuration for the Content Replication settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the HANA Writer.

<?xml version="1.0" encoding="utf-8"?>

<Kafka2HanaConfiguration>

<ContentReplication>
<Enabled>true</Enabled>
<SourceTopicName>ContentReplication</SourceTopicName>
</ContentReplication>

</Kafka2HanaConfiguration>

Topic Settings for the HANA Writer


| Setting | Type | Value | Mandatory | Default |
| --- | --- | --- | --- | --- |
| Id | String | ID of the topic | x | |
| Enabled | Boolean | true or false – enable or disable processing this topic | x | |
| SourceTopicName | String | Name of Kafka topic | x | |
| DBWriterClassName | String | Fully qualified name of the Java class that processes logs from this topic | x | |

For more information, see Kafka Topics Used By SAP Enterprise Threat Detection Streaming.


Reference Configuration for the Topic Settings

The following example shows a possible configuration for the Topic settings with the associated values. You can adapt this configuration in line with your specific needs when configuring the HANA Writer.

<?xml version="1.0" encoding="utf-8"?>

<Kafka2HanaConfiguration>

<Topic>
<Id>DHCPIPAssignDBHistory</Id>
<Enabled>true</Enabled>
<SourceTopicName>SID-DHCPIPAssignDBHistory</SourceTopicName>
<DBWriterClassName>com.sap.etd.kafka2hana.db.IPAssignHistoryWriter</DBWriterClassName>
</Topic>

</Kafka2HanaConfiguration>

Reference Configuration for the HANA Writer

The following example of a kafka_2_hana configuration includes all possible configuration fields with associated values. You can adapt this configuration in line with your specific needs when configuring kafka_2_hana.

<?xml version="1.0" encoding="utf-8"?>

<Kafka2HanaConfiguration>
<Shutdown>
<TimeOutInMinutes>10</TimeOutInMinutes>
</Shutdown>

<!-- Monitoring configs -->

<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Kafka_2_hana</Name>

<!-- If prometheus monitoring is used (for Grafana dashboard integration),


which http port should be used to export the metrics -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7003</ExporterPort>
</Prometheus>

</Monitoring>

<!-- HANA configs -->


<HANA>
<JDBCUrl>jdbc:sap://host:port</JDBCUrl>
<JDBCPropertiesFile>config/jdbc.properties</JDBCPropertiesFile>
<MaxCommitInterval>1000</MaxCommitInterval>
</HANA>

<MaxInternalQueueSize>32768</MaxInternalQueueSize>

<Kafka>
<LogPreProcessor>
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>

<Topics>
<LogEvents>
<Normalized>
<EnabledNormalized>true</EnabledNormalized>
<EnabledOriginal>true</EnabledOriginal>
<SourceTopicName>SID-NormalizedDataOut</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Normalized>

<Unrecognized>
<Enabled>true</Enabled>
<SourceTopicName>SID-unrecognized</SourceTopicName>
<BatchSize>1000</BatchSize>
<ThreadCount>2</ThreadCount>
</Unrecognized>
</LogEvents>

<ContentReplication>
<Enabled>true</Enabled>
<SourceTopicName>SID-ContentReplication</SourceTopicName>
</ContentReplication>

<Topic>
<Id>DHCPIPAssignDBHistory</Id>
<Enabled>true</Enabled>
<SourceTopicName>SID-DHCPIPAssignDBHistory</SourceTopicName>
<DBWriterClassName>com.sap.etd.kafka2hana.db.IPAssignHistoryWriter</DBWriterClassName>
</Topic>
...
</Topics>

</Kafka2HanaConfiguration>

Log Learner
The Log Learner works together with the Log Learning application. It is responsible for analyzing the sample data uploaded in new Log Learning runs in order to create log entry types and markups. Furthermore, it is needed to test the Log Learning runs.

It connects to HANA via a REST API in order to interact with the Log Learning application. The application is optional; it is only needed when the Log Learning application is used.

Finalizing Installation for the Log Learner


Prerequisites
Checking Out Content from Delivery Unit

If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming Manually.

Procedure
1. Log in to the operating system as the root user.

2. If you have installed SAP Enterprise Threat Detection Streaming manually, create the etd-loglearner systemd unit:

cp /opt/etd/loglearner/systemd/etd-loglearner.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-loglearner

3. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:

chmod +x /opt/etd/loglearner/etd-loglearner.sh

If you have used the installation script, this has already been done by the system.

4. Start the loglearner application.

systemctl start etd-loglearner.service

5. Verify the installation:

a. Check the status of the systemd unit. The correct status is Running.

systemctl status etd-loglearner.service

b. Check the logs for etd-loglearner.service.

journalctl -u etd-loglearner.service

c. Check the application logs (default location is /opt/etd/loglearner/logs). The correct result is that no errors occur.

grep ERROR /opt/etd/loglearner/logs/etd-loglearner.log

Configuration Description for the Log Learner

You can configure the various input and output channels of the log learner via an XML file, enabling you to tailor the log learner to the specific needs of your landscape.

HANA REST Settings for the Log Learner


Setting | Type | Value | Mandatory | Default
Host | String | Host URL of the HANA database (please use the FQDN of the HANA database). Must be an HTTPS URL if Authenticator=X.509 | x | n/a
Authenticator | String | Authentication method, either "basic" or "X.509" | x | X.509
AuthPropertiesFile | String | Path to the auth.properties file which contains user and password. The file must be readable by the application user. | If Authenticator=basic | config/auth.properties
Truststore | String | Path to the Java truststore containing trusted certificates. The truststore must be readable by the application user. | | n/a
TruststorePass | String | Password for the Java truststore | | n/a
Keystore | String | Path to the Java keystore containing the private key. The keystore must be readable by the application user. | If Authenticator=X.509 | n/a
KeystorePass | String | Password for the Java keystore | If Authenticator=X.509 | n/a
KeystoreAlias | String | Alias of the private key entry in the Java keystore | If Authenticator=X.509 | n/a
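
The truststore and keystore referenced above are standard Java keystores and can be created with the JDK keytool. A minimal shell sketch, reusing the file names and passwords from the reference configuration below; the hana_ca.pem file and the alias hana-ca are assumptions, not fixed names:

# Import the CA certificate of the HANA server into a new truststore
keytool -importcert -noprompt -alias hana-ca -file hana_ca.pem \
  -keystore config/truststore -storepass Tv6TAazNTpXz95Ak

# Inspect the PKCS12 keystore that holds the client's private key
keytool -list -keystore loglearnerKeystore.p12 -storetype PKCS12 \
  -storepass VgnYOXAUPlm1f4urss=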

Reference Configuration for the HANA REST Settings


The following example shows a possible configuration for the HANA REST settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the Log Learner.

<LogLearnerConfiguration>

<!-- Reading entries from REST API on HANA -->


<HANA>
<REST>
<Host>https://host:port</Host>
<Authenticator>X.509</Authenticator>
<AuthPropertiesFile>config/auth.properties</AuthPropertiesFile>
<Truststore>config/truststore</Truststore>
<TruststorePass>Tv6TAazNTpXz95Ak</TruststorePass>
<Keystore>loglearnerKeystore.p12</Keystore>
<KeystorePass>VgnYOXAUPlm1f4urss=</KeystorePass>
<KeystoreAlias>loglearner</KeystoreAlias>
</REST>
</HANA>

</LogLearnerConfiguration>

Processing Settings for the Log Learner


Settings | Type | Value | Mandatory | Default
TimestampFormatSupport | String | Available values are: ALL, WITH_TIMEZONE_AND_YEAR_ONLY. If the value WITH_TIMEZONE_AND_YEAR_ONLY is selected, then logs without timezone or year will not be recognized. | x | ALL

Reference Configuration for the Processing Settings


The following example shows a possible configuration for the Processing settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the Log Learner.

<LogLearnerConfiguration>

<!-- Processing configs -->


<Processing>
<!--ALL, WITH_TIMEZONE_AND_YEAR_ONLY -->
<TimestampFormatSupport>ALL</TimestampFormatSupport>
</Processing>

</LogLearnerConfiguration>

Reference Configuration for the Log Learner


The following example of a Log Learner configuration includes all possible configuration fields with associated values. You can
adapt this configuration in line with your specific needs when configuring the Log Learner.

<?xml version="1.0" encoding="utf-8"?>

<LogLearnerConfiguration>

<!-- Monitoring configs -->

<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->
<Name>SAP Enterprise Threat Detection Log Learner</Name>
<!-- If prometheus monitoring is used (for Grafana dashboard integration), which http port should be used to export the metrics -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7004</ExporterPort>
</Prometheus>
</Monitoring>

<!-- Reading entries from REST API on HANA -->

<HANA>
<REST>
<Host>https://host:port</Host>
<Authenticator>X.509</Authenticator>
<AuthPropertiesFile>config/auth.properties</AuthPropertiesFile>
<UseSSL>true</UseSSL>
<Truststore>config/truststore</Truststore>
<TruststorePass>Tv6TAazNTpXz95Ak</TruststorePass>
<Keystore>loglearnerKeystore.p12</Keystore>
<KeystorePass>VgnYOXAUPlm1f4urss=</KeystorePass>
<KeystoreAlias>loglearner</KeystoreAlias>
</REST>
</HANA>

<!-- Formatting configs -->

<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* ?CEF: ?0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.CEFFormatter</FormatterClassName>
</Formatter>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* ?LEEF: ?[1-2]\.0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.LEEFFormatter</FormatterClassName>
</Formatter>
</Formatting>

<!-- Processing configs -->

<Processing>
<!--ALL, WITH_TIMEZONE_AND_YEAR_ONLY -->
<TimestampFormatSupport>ALL</TimestampFormatSupport>
</Processing>

</LogLearnerConfiguration>

Cold Storage Writer


You can use the Cold Storage Writer to archive log data by writing it to the file system using GZIP compression. The data can
then be searched with a simple text-based search in the files, for example using the "grep" command. If more complex analysis
needs to be done, the data can also be restored into SAP HANA. The Cold Storage Writer reads the logs from the Kafka topic
specified in the Cold Storage Writer configuration.

For more information about restoring data, see Restoring Data from the Cold Storage.
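
Because the archive files are GZIP-compressed, they can be searched without unpacking them first. A minimal shell sketch, assuming the default archive location; the date directory and search term are illustrative only:

# Search all normalized archives of one day without unpacking them
zgrep "10.17.4.23" /opt/etd/coldstorage/archive/normalized/2022-03-30/*.gz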

Directory Structure
The Cold Storage Writer writes to the directories specified in its configuration file:

This is custom documentation. For more information, please visit the SAP Help Portal 93
6/26/2023
For unrecognized logs, the Cold Storage Writer writes into the directory specified by the WriteDirectory attribute in the Unrecognized
section.

For normalized or original logs, the Cold Storage Writer writes into the directories specified by the following attributes in the Normalized section:

WriteDirectoryNormalized

WriteDirectoryOriginal

The directory structure of the default configuration looks like this:

coldstorage

archive

normalized

2022-03-30

2022-03-31

original

2022-03-30

2022-03-31

unrecognized

2022-03-30

2022-03-31

On the lowest hierarchy level there are directories for individual days. Each log event is stored in the directory whose date
corresponds to the timestamp of the log event. The timestamp is determined from the log event field Timestamp. For
example, if the timestamp of the log event is March 30, the log event is stored in the folder for March 30 even if the log event
was delivered, for example, on March 31. If the date cannot be determined, the log event is written to a file in the directory
0000-01-01.

Temporary Files and GZIP-Compressed Files


The Cold Storage Writer writes log events to temporary files ending with .tmp. The file name consists of the type of log
(normalized, unrecognized, or original), the number of the thread that wrote the file, and the date of the log's timestamp.

Here's an example of two files created by two threads:

Normalized_0_2022-03-30.tmp

Normalized_1_2022-03-30.tmp

The temporary files are compressed using GZIP compression and are closed for writing if one of the following happens:

the Cold Storage application is stopped

the number of log events in the file has reached its maximum (as specified in the EventsPerFile attribute in the
configuration file, by default 1000000)

the time was reached to close the file (as specified in the FileRotateIntervalInHours attribute, by default 6 hours
after the last log event belonging to that particular date has been received).

When the temporary file is closed, it gets renamed to a .gz file. The file names of the .gz files start with the name of the
corresponding temporary files. In addition, these names include the date and time when the temporary file was closed. The .gz
files are located in the same date directory as the corresponding temporary files. Here are some examples of file names in the
directory "2022-03-30":

Normalized_0_2022-03-30_2022-03-30T02-34-37-563.gz

Normalized_1_2022-03-30_2022-03-30T06-00-02-381.gz

Normalized_0_2022-03-30_2022-03-30T05-59-04-283.gz

Normalized_1_2022-03-30_2022-03-30T07-59-44-850.gz

Normalized_0_2022-03-30_2022-03-30T07-58-16-943.gz

Normalized_1_2022-03-30_2022-03-30T02-35-02-525.gz

If new log events arrive for an older date where the temporary file had already been closed, the Cold Storage Writer creates a
new temporary file for that date.

 Note
If you decide to use multiple instances of Cold Storage Writers, ensure that they do not write into the same directories. The
same holds for the Cold Storage Readers: they must not use the same directories.

File Structure
The log events from the Kafka topic are converted before they are written to the file:

New lines \n are replaced by &newl&

Semicolons ; are replaced by &semi&

The files are written using CSV format without a header line and with semicolon as the value separator. For more information,
see:

Cold Storage File for Normalized Logs

Cold Storage File for Original Logs

Cold Storage File for Unrecognized Logs
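
To illustrate the conversion, a log message that itself contains a semicolon and a line break would end up as a single CSV field like this (an illustrative example, not taken from a real log):

Original message:   Login failed; reason:
                    password expired
Stored CSV field:   Login failed&semi& reason:&newl&password expired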

Retention
The data (that is, the .gz files and the folders) is deleted after it has reached the end of the retention period as defined in the
attribute RetentionDays. If data is deleted due to the retention policy, a corresponding log entry is written to the
logs/retention.log file.

Finalizing Installation for the Cold Storage Writer

Prerequisites
Checking Out Content from Delivery Unit

If you use manual installation, you have performed the steps under Installing SAP Enterprise Threat Detection Streaming
Manually

Procedure
1. Log in to the operating system as the root user.

2. Adapt the Kafka configuration.

To do so, go to /opt/etd/coldstorage/config and make the necessary configuration in the following file:

lpp.properties

a. If you want to use SSL, create a corresponding truststore with the CA certificate of your Kafka brokers.

b. If you don't use a password for the truststore, comment out the ssl.truststore.password property.

A sketch of the relevant properties is shown after this procedure.

3. If you want to use the Cold Storage Reader, perform the following steps:

a. Create the ETD_DATA_COMMITTER user according to the information in the following topic:

Creating Users and Assigning Authorizations

b. Check the parameters in jdbc.properties located in the /opt/etd/coldstorage/config/ folder.

This file contains the parameters needed to connect and write data to SAP HANA.

In this configuration file you use the user and password created as described under Creating Users and Assigning
Authorizations. The other parameters are described in the SAP HANA Security Guide under Client-Side TLS/SSL
Connection Properties (JDBC). A sketch of a possible jdbc.properties is shown after this procedure.

By default, TLS/SSL encryption is enabled. In this case you also need to perform additional configuration on the SAP
HANA server side, as described in the SAP HANA Security Guide under TLS/SSL Configuration on the SAP
HANA Server.

If you don't use SSL, set the properties encrypt and validateCertificate in the jdbc.properties file to
false.

4. If you have installed SAP Enterprise Threat Detection Streaming manually, create the etd-coldstorage systemd unit:

cp /opt/etd/coldstorage/systemd/etd-coldstorage.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etd-coldstorage

If you have used the installation script, this has already been done by the system.

5. If you have installed SAP Enterprise Threat Detection Streaming manually, add execute authorizations to the start script
of the application:

chmod +x /opt/etd/coldstorage/etd-coldstorage.sh

If you have used the installation script, this has already been done by the system.

6. Start the Cold Storage application.

systemctl start etd-coldstorage.service

7. Verify the installation:

a. Check the status of the systemd unit. The correct status is Running.

systemctl status etd-coldstorage.service

b. Check the logs for etd-coldstorage.service. The correct response is "-- No entries --".


journalctl -u etd-coldstorage.service

c. Check the application logs (default location is /opt/etd/coldstorage/logs). The correct result is that you
don’t get any entries.

grep ERROR /opt/etd/coldstorage/logs/etd-coldstorage.log
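
The following sketches show what the two configuration files from steps 2 and 3 might look like. The keys in lpp.properties are standard Apache Kafka client properties; all host names, paths, and password values are placeholders, and the exact set of properties needed in your installation may differ:

# lpp.properties - Kafka client connection (sketch)
bootstrap.servers=mykafkahost:9092
security.protocol=SSL
ssl.truststore.location=config/truststore
# Comment out the next line if the truststore has no password
ssl.truststore.password=changeit

A possible jdbc.properties for the Cold Storage Reader, using the ETD_DATA_COMMITTER user from step 3a; encrypt and validateCertificate are the properties mentioned in step 3b:

# jdbc.properties - SAP HANA JDBC connection (sketch)
user=ETD_DATA_COMMITTER
password=changeit
encrypt=true
validateCertificate=true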

Cold Storage File for Normalized Logs


The table shows the information contained in a cold storage file for normalized logs. The cold storage files are written using CSV
format without a header line and with semicolon as the value separator.

Field Number Attribute Name

1 Cold Storage Format Version

2 TechnicalLogEntryType

3 TechnicalNumber

4 TechnicalNumberRange

5 TechnicalGroupId

6 AttackName

7 AttackType

8 CorrelationId

9 CorrelationSubId

10 Event

11 EventLogType

12 EventMessage

13 EventScenarioRoleOfActor

14 EventScenarioRoleOfInitiator

15 EventSeverityCode

16 EventSourceId

17 EventSourceType

18 GenericAction

19 GenericCategory

20 GenericDeviceType

21 GenericExplanation

22 GenericGeolocationCodeActor

23 GenericGeolocationCodeTarget

24 GenericOrder

25 GenericOutcome

26 GenericOutcomeReason

27 GenericPath

28 GenericPathPrior

29 GenericPurpose

30 GenericRiskLevel

31 GenericScore

32 GenericSessionId

33 GenericURI

34 NetworkHostnameActor

35 NetworkHostnameInitiator

36 NetworkHostnameIntermediary

37 NetworkHostnameReporter

38 NetworkHostnameTarget

39 NetworkHostDomainActor

40 NetworkHostDomainInitiator

41 NetworkHostDomainIntermediary

42 NetworkHostDomainReporter

43 NetworkHostDomainTarget

44 NetworkInterfaceActor

45 NetworkInterfaceTarget

46 NetworkIPAddressActor

47 NetworkIPAddressInitiator

48 NetworkIPAddressIntermediary

49 NetworkIPAddressReporter

50 NetworkIPAddressTarget

51 NetworkIPBeforeNATActor

52 NetworkIPBeforeNATTarget

53 NetworkMACAddressActor

54 NetworkMACAddressInitiator

55 NetworkMACAddressIntermediary

56 NetworkMACAddressReporter

57 NetworkMACAddressTarget

58 NetworkNetworkPrefixActor

59 NetworkNetworkPrefixTarget

60 NetworkPortActor

61 NetworkPortInitiator

62 NetworkPortIntermediary

63 NetworkPortReporter

64 NetworkPortTarget

65 NetworkPortBeforeNATActor

66 NetworkPortBeforeNATTarget

67 NetworkProtocol

68 NetworkSessionId

69 NetworkZoneActor

70 NetworkZoneTarget

71 ParameterDirection

72 ParameterDirectionContext

73 ParameterName

74 ParameterNameContext

75 ParameterDataType

76 ParameterDataTypeContext

77 ParameterType

78 ParameterTypeContext

79 ParameterValueDouble

80 ParameterValueDoublePriorValue

81 ParameterValueNumber

82 ParameterValueNumberContext

83 ParameterValueNumberPriorValue

84 ParameterValueString

85 ParameterValueStringContext

86 ParameterValueStringPriorValue

87 ParameterValueTimestamp

88 ParameterValueTimestampPriorValue

89 PrivilegeIsGrantable

90 PrivilegeName

91 PrivilegeType

92 PrivilegeGranteeName

93 PrivilegeGranteeType

94 ResourceContainerName

95 ResourceContainerType

96 ResourceContent

97 ResourceContentType

98 ResourceCount

99 ResourceName

100 ResourceNamePrior

101 ResourceRequestSize

102 ResourceResponseSize

103 ResourceSize

104 ResourceType

105 ResourceSumCriteria

106 ResourceSumOverTime

107 ResourceUnitsOfMeasure

108 ServiceAccessName

109 ServiceFunctionName

110 ServiceReferrer

111 ServiceRequestLine

112 ServiceType

113 ServiceVersion

114 ServiceApplicationName

115 ServiceExecutableName

116 ServiceExecutableType

117 ServiceInstanceName

118 ServiceOutcome

119 ServicePartId

120 ServiceProcessId

121 ServiceProgramName

122 ServiceTransactionName

123 ServiceUserAgent

124 ServiceWorkflowName

125 SystemIdActor

126 SystemIdInitiator

127 SystemIdIntermediary

128 SystemIdReporter

129 SystemIdTarget

130 SystemTypeActor

131 SystemTypeInitiator

132 SystemTypeIntermediary

133 SystemTypeReporter

134 SystemTypeTarget

135 TimeDuration

136 TimestampOfEnd

137 TimestampOfStart

138 TriggerNameActing

139 TriggerNameTargeted

140 TriggerTypeActing

141 TriggerTypeTargeted

142 UserLogonMethod

143 UsernameActing

144 UsernameInitiating

145 UsernameTargeted

146 UsernameTargeting

147 UsernameDomainNameActing

148 UsernameDomainNameInitiating

149 UsernameDomainNameTargeted

150 UsernameDomainNameTargeting

151 UsernameDomainTypeActing

152 UsernameDomainTypeInitiating

153 UsernameDomainTypeTargeted

154 UsernameDomainTypeTargeting

155 Id

156 Timestamp

157 UserIdActing

158 UserIdInitiating

159 UserIdTargeted

160 UserIdTargeting

161 NetworkSubnetIdActor

162 NetworkSubnetIdInitiator

163 NetworkSubnetIdIntermediary

164 NetworkSubnetIdReporter

165 NetworkSubnetIdTarget

166 TechnicalLogCollectorName

167 TechnicalLogCollectorIPAddress

168 TechnicalLogCollectorPort

169 AccountNameHashActing

170 AccountNameHashInitiating

171 AccountNameHashTargeted

172 AccountNameHashTargeting

173 AccountIdActing

174 AccountIdInitiating

175 AccountIdTargeted

176 AccountIdTargeting

177 TechnicalTimestampOfInsertion

178 TechnicalTimestampInteger

Cold Storage File for Original Logs


The table shows the information contained in a cold storage file for original logs. The files are written using CSV format without
a header line and with semicolon as the value separator.

Field Number Attribute Name

1 Cold Storage Format Version

2 EventLogType

3 EventSourceId

4 EventSourceType

5 Id

6 Timestamp

7 OriginalData

8 TechnicalLogCollectorName

9 TechnicalLogCollectorIPAddress

10 TechnicalLogCollectorPort

11 TechnicalTimestampOfInsertion

12 TechnicalTimestampInteger

Cold Storage File for Unrecognized Logs


The table shows the information contained in a cold storage file for unrecognized logs. The files are written using CSV format
without a header line and with semicolon as the value separator.

Field Number Attribute Name

1 Cold Storage Format Version

2 OriginalData

3 Timestamp

4 ESPInstanceId

5 SourceIPAddress

6 TechnicalLogCollectorPort

7 TechnicalLogCollectorIPAddress

8 ReasonCode

9 TechnicalTimestampInteger

Configuration Description for the Cold Storage Writer


You can configure the various input and output channels of the Cold Storage Writer via an XML file, enabling you to tailor the
Cold Storage Writer to the specific needs of your landscape. You can configure which Kafka topics should be archived via the XML file.
You can also configure the Cold Storage Reader to restore data from disk and write it directly into the HANA database.

Cold Storage Writer Settings


Setting | Type | Value | Mandatory | Default
LogPreProcessor.PropertiesFile | String | Path to the Kafka consumer and producer properties file. The file must be readable by the application user. | x | config/lpp.properties
Normalized.EnabledNormalized | Boolean | True or false – enable or disable writing of normalized logs | x | true
Normalized.EnabledOriginal | Boolean | True or false – enable or disable writing of original files | x | true
Normalized.WriteDirectoryNormalized | String | Directory in which files with normalized logs will be temporarily stored during application work. The directory must be readable, writeable, and executable by the application user. | x | /opt/etd/coldstorage/archive/normalized
Normalized.WriteDirectoryOriginal | String | Directory in which files with original logs will be temporarily stored during application work. The directory must be readable, writeable, and executable by the application user. | x | /opt/etd/coldstorage/archive/original
Normalized.SourceTopicName | String | Name of the Kafka topic for original and normalized logs | x | SID-NormalizedDataOut
Normalized.ThreadCount | Integer | Number of threads which will process normalized and original logs | | 2
Normalized.EventsPerFile | Integer | Number of logs which will be stored per file | | 1000000
Normalized.FileRotateIntervalInHoursNormalized | Integer | Number of hours after which a file with normalized logs will be closed even if it doesn't contain enough logs | | 6
Normalized.FileRotateIntervalInHoursOriginal | Integer | Number of hours after which a file with original logs will be closed even if it doesn't contain enough logs | | 6
RetentionDaysNormalized | Integer | Files which are older than this value in days will be deleted automatically for normalized logs. -1 if there is no need to automatically delete files. | | -1
RetentionDaysOriginal | Integer | Files which are older than this value in days will be deleted automatically for original logs. -1 if there is no need to automatically delete files. | | -1
Unrecognized.Enabled | Boolean | True or false – enable or disable writing of unrecognized logs | x | true
Unrecognized.WriteDirectory | String | Directory in which files with unrecognized logs will be temporarily stored during application work. The directory must be readable, writeable, and executable by the application user. | x | /opt/etd/coldstorage/archive/unrecognized
Unrecognized.SourceTopicName | String | Name of the Kafka topic for unrecognized logs | x | SID-unrecognized
Unrecognized.ThreadCount | Integer | Number of threads which will process unrecognized logs | | 2
Unrecognized.EventsPerFile | Integer | Number of logs which will be stored per file | | 1000000
Unrecognized.FileRotateIntervalInHours | Integer | Number of hours after which a file with unrecognized logs will be closed even if it doesn't contain enough logs | | 6
RetentionDays | Integer | Files which are older than this value in days will be deleted automatically for unrecognized logs. -1 if there is no need to automatically delete files. | | -1

Cold Storage Reader Settings


Setting | Type | Value | Mandatory | Default
HANA.JDBCUrl | String | HANA JDBC string | x |
HANA.JDBCPropertiesFile | String | Path to the jdbc.properties file which is used to connect to HANA. The file must be readable by the application user. | x | config/jdbc.properties
HANA.MaxCommitInterval | Integer | How long a batch will wait until the commit is executed, in milliseconds | x | 1000
HANA.BatchSize | Integer | Number of logs per batch | x | 1000
Normalized.EnabledNormalized | Boolean | True or false – enable or disable restoring of normalized logs | x | false
Normalized.EnabledOriginal | Boolean | True or false – enable or disable restoring of original logs | x | false
Normalized.FileHandlingAfterInsertion | String | Only the predefined values Delete and Move are allowed. Delete means that the cold file is deleted when its logs have been completely restored. Move means that the file from which logs were restored is moved into another directory (provided in the configuration). | x | Delete
Normalized.ReadDirectoryNormalized | String | Directory in which the cold files with normalized logs are located. The directory must be readable, writeable, and executable by the application user. | x | /opt/etd/coldstorage/archive/normalized/
Normalized.ReadDirectoryOriginal | String | Directory in which the cold files with original logs are located. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/original/
Normalized.MoveDirectoryNormalized | String | Directory into which cold files with normalized logs will be moved after successful restoring. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/moved/normalized/
Normalized.MoveDirectoryOriginal | String | Directory into which cold files with original logs will be moved after successful restoring. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/moved/original/
Normalized.ErrorDirectoryNormalized | String | Directory into which cold files with normalized logs will be moved if the restoring process fails. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/errored/normalized/
Normalized.ErrorDirectoryOriginal | String | Directory into which cold files with original logs will be moved if the restoring process fails. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/errored/original/
Normalized.ThreadCount | Integer | Number of threads which will be used to process normalized and original logs | x | 2
Unrecognized.Enabled | Boolean | True or false – enable or disable restoring of unrecognized logs | x | false
Unrecognized.FileHandlingAfterInsertion | String | Only the predefined values Delete and Move are allowed. Delete means that the cold file is deleted when its logs have been completely restored. Move means that the file from which logs were restored is moved into another directory (provided in the configuration). | x | Move
Unrecognized.ReadDirectory | String | Directory in which the cold files with unrecognized logs are located. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/unrecognized/
Unrecognized.MoveDirectory | String | Directory into which cold files with unrecognized logs will be moved after successful restoring. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/moved/unrecognized/
Unrecognized.ErrorDirectory | String | Directory into which cold files with unrecognized logs will be moved if the restoring process fails. The directory must be readable, writeable, and executable by the application user. | x | /opt/coldstorage/archive/errored/unrecognized/
Unrecognized.ThreadCount | Integer | Number of threads which will be used to process unrecognized logs | x | 2

Reference Configuration for the Cold Storage Writer


The following example of a Cold Storage configuration includes all possible configuration fields with associated values. You can
adapt this configuration in line with your specific needs when configuring the Cold Storage application.

<?xml version="1.0" encoding="utf-8"?>

<ColdStorageConfiguration>

<!-- Monitoring configs -->

<Monitoring>
<!-- Logical name of instance used in monitoring metrics -->

<Name>SAP Enterprise Threat Detection Coldstorage</Name>

<!-- If prometheus monitoring is used (for Grafana dashboard integration),
which http port should be used to export the metrics -->
<Prometheus>
<Enabled>true</Enabled>
<ExporterPort>7006</ExporterPort>
</Prometheus>
</Monitoring>

<ColdStorageWriter>

<Kafka>
<LogPreProcessor>
<PropertiesFile>config/lpp.properties</PropertiesFile>
</LogPreProcessor>
</Kafka>

<Normalized>
<EnabledNormalized>true</EnabledNormalized>
<EnabledOriginal>true</EnabledOriginal>
<WriteDirectoryNormalized>/opt/etd/coldstorage/archive/normalized</WriteDirectoryNormalized>
<WriteDirectoryOriginal>/opt/etd/coldstorage/archive/original</WriteDirectoryOriginal>
<SourceTopicName>SID-NormalizedDataOut</SourceTopicName>
<ThreadCount>2</ThreadCount>
<EventsPerFile>1000000</EventsPerFile>
<FileRotateIntervalInHoursNormalized>5</FileRotateIntervalInHoursNormalized>
<FileRotateIntervalInHoursOriginal>4</FileRotateIntervalInHoursOriginal>
<RetentionDaysNormalized>-1</RetentionDaysNormalized>
<RetentionDaysOriginal>-1</RetentionDaysOriginal>
</Normalized>

<Unrecognized>
<Enabled>false</Enabled>
<WriteDirectory>/opt/etd/coldstorage/archive/unrecognized</WriteDirectory>
<SourceTopicName>SID-unrecognized</SourceTopicName>
<ThreadCount>2</ThreadCount>
<EventsPerFile>100000</EventsPerFile>
<FileRotateIntervalInHours>3</FileRotateIntervalInHours>
<RetentionDays>-1</RetentionDays>
</Unrecognized>

</ColdStorageWriter>

<ColdStorageReader>

<HANA>
<JDBCUrl>jdbc:sap://host:port</JDBCUrl>
<JDBCPropertiesFile>config/jdbc.properties</JDBCPropertiesFile>
<MaxCommitInterval>1000</MaxCommitInterval>
<BatchSize>1000</BatchSize>
</HANA>

<Normalized>

<EnabledNormalized>false</EnabledNormalized>
<EnabledOriginal>false</EnabledOriginal>
<FileHandlingAfterInsertion>Delete</FileHandlingAfterInsertion>
<ReadDirectoryNormalized>/opt/etd/coldstorage/archive/normalized/</ReadDirectoryNormalized>
<ReadDirectoryOriginal>/opt/etd/coldstorage/archive/original/</ReadDirectoryOriginal>
<MoveDirectoryNormalized>/opt/etd/coldstorage/archive/moved/normalized/</MoveDirectoryNormalized>
<MoveDirectoryOriginal>/opt/etd/coldstorage/archive/moved/original/</MoveDirectoryOriginal>
<ErrorDirectoryNormalized>/opt/etd/coldstorage/archive/errored/normalized/</ErrorDirectoryNormalized>
<ErrorDirectoryOriginal>/opt/etd/coldstorage/archive/errored/original/</ErrorDirectoryOriginal>
<ThreadCount>2</ThreadCount>
</Normalized>

<Unrecognized>
<Enabled>false</Enabled>
<FileHandlingAfterInsertion>Move</FileHandlingAfterInsertion>
<ReadDirectory>/opt/etd/coldstorage/archive/unrecognized/</ReadDirectory>
<MoveDirectory>/opt/etd/coldstorage/archive/moved/unrecognized/</MoveDirectory>
<ErrorDirectory>/opt/etd/coldstorage/archive/errored/unrecognized/</ErrorDirectory>
<ThreadCount>2</ThreadCount>
</Unrecognized>

</ColdStorageReader>

</ColdStorageConfiguration>

Restoring Data from the Cold Storage


In some scenarios, you might want to temporarily restore specific data which is outside the retention period of your hot or warm
storage.

Procedure
1. Temporarily turn off the sap.secmon.services.partitioning::clearData job to prevent it from deleting the data that you
want to restore.

If you don’t turn off the job, the data will only be available within your SAP HANA DB until the next job execution.

2. Check the archive directory of your cold storage application and identify the data that you want to restore.

3. (Recommended) Set up the data restoration using a new directory.

a. Check the archive directory of your cold storage application and identify the data that you need to restore.

b. Copy this data from the existing archive directory to a new directory.

c. Set up a second cold storage application reading from this directory (see the configuration sketch after this procedure).

The cold storage application will read all available data from the copied directory and write it directly into the SAP
HANA DB (without using Kafka).

d. Configure the Cold Storage Writer to delete all files that have been successfully restored.

The relevant parameter is FileHandlingAfterInsertion. For more information, see Cold Storage Reader
Settings.

4. As an alternative to the recommended approach using a new directory, you can set up the Cold Storage Reader to read
directly from the existing archive directory without identifying and copying any data.

This approach might be the better choice if your log data volume is very high.

Since the retention period for the cold storage application is usually much higher than it is for the hot and warm storage,
you will potentially restore a lot of data that will be deleted again from your SAP HANA DB very soon because the
retention period there is already over. Furthermore, note that you cannot have the Cold Storage Reader and the Cold
Storage Writer operating simultaneously on the same directory. So, you need to temporarily turn off your Cold Storage
Writer in this case. If you have multiple archive directories to restore data from, you need to start several instances of
the Cold Storage Reader since it is not possible to configure multiple archive directories within a single Cold Storage
Reader instance.

 Caution
Make sure to set the parameter FileHandlingAfterInsertion to "Move". If you don't do that, your original cold
storage files will be deleted.

5. When you are done with the analysis of the restored data in HANA, turn on the
sap.secmon.services.partitioning::clearData job again.

This will delete all the restored data from HANA again, because it lies outside the retention period.
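
The following sketch shows how the ColdStorageReader section of the configuration might look for the recommended approach from step 3. The directory /opt/etd/coldstorage/restore/... is an assumed name for the new directory holding the copied archives; adapt all values to your landscape:

<ColdStorageReader>
    <HANA>
        <JDBCUrl>jdbc:sap://host:port</JDBCUrl>
        <JDBCPropertiesFile>config/jdbc.properties</JDBCPropertiesFile>
        <MaxCommitInterval>1000</MaxCommitInterval>
        <BatchSize>1000</BatchSize>
    </HANA>

    <Normalized>
        <!-- Restore only normalized logs in this sketch -->
        <EnabledNormalized>true</EnabledNormalized>
        <EnabledOriginal>false</EnabledOriginal>
        <!-- Delete the copied files once their logs are restored (step 3d) -->
        <FileHandlingAfterInsertion>Delete</FileHandlingAfterInsertion>
        <ReadDirectoryNormalized>/opt/etd/coldstorage/restore/normalized/</ReadDirectoryNormalized>
        <MoveDirectoryNormalized>/opt/etd/coldstorage/restore/moved/normalized/</MoveDirectoryNormalized>
        <ErrorDirectoryNormalized>/opt/etd/coldstorage/restore/errored/normalized/</ErrorDirectoryNormalized>
        <ThreadCount>2</ThreadCount>
    </Normalized>
</ColdStorageReader>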

Proxy Settings
SAP Enterprise Threat Detection supports proxy settings. You can use a global proxy, dedicated proxies for the log collector, or a
combination of both.

Any application can use global proxy settings from the command line. For more information, see
https://docs.oracle.com/javase/8/docs/technotes/guides/net/proxies.html . As an example, the script
/opt/etd/logcollector/etd-logcollector.sh can include the following global settings:

-Dhttps.proxyHost=proxy.localDomain -Dhttps.proxyPort=3128

These settings will cause all HTTP requests to be routed through the proxy.

For the log collector, you can specify a dedicated proxy in the log collector con guration le for the following HTTP-based
connections:

HTTP Sender

OData Subscriber

SCP Audit Log Subscriber

The global setting http.nonProxyHosts affects both the global proxy settings and the dedicated proxy settings of the log
collector. Example: All proxies on localhost are disabled if you append the following snippet to the script etd-logcollector.sh:

-Dhttp.nonProxyHosts=127.0.0.1|localhost
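
Taken together, a Java command line that routes all HTTP requests through a global proxy except those to localhost might include the following options (an illustrative combination of the two snippets above):

-Dhttps.proxyHost=proxy.localDomain -Dhttps.proxyPort=3128 -Dhttp.nonProxyHosts=127.0.0.1|localhost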

Related Information
HTTP Sender Settings for the Log Collector
OData Subscriber Settings for the Log Collector
SCP Audit Log Subscriber Settings for the Log Collector

Formatters
Formatters can be used in the normalizer and log learner to format messages before they are processed. This can be used to
convert a log from one format to another and may enable you to process logs that otherwise cannot be processed.

To use a formatter in the normalizer or the log learner, the following settings are available:

Setting | Type | Value | Mandatory | Default
Formatter.Enabled | Boolean | True or false – enable or disable this formatter | x |
Formatter.Regex | String | Regex that should match the log line | x |
Formatter.FormatterClassname | String | Class name of the formatter that should be executed | x |

<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>.* CEF: ?0\|.*</Regex>
<FormatterClassName>com.sap.etd.commons.runtimeparser.format.CEFFormatter</FormatterClassName>
</Formatter>
</Formatting>

Any incoming log is matched against the specified regex. If the regex matches, the specified formatter is called and the log
message is replaced with the result of the formatter. The following formatters are available with the standard delivery of SAP
Enterprise Threat Detection:

com.sap.etd.commons.runtimeparser.format.CEFFormatter

Converts a log message in CEF into a key/value based log.

com.sap.etd.commons.runtimeparser.format.LEEFFormatter

Converts a log message in LEEF format into a key/value based log.

Creating your own Formatter


To create your own formatter, you need to add the etd_commons-*.jar to your classpath and create a class that implements
com.sap.etd.commons.runtimeparser.format.IFormatter.

The class contains only one method that you need to implement. It takes the original log as input and returns the modified log.

For example:

public class Reformatter implements IFormatter {

    public String format(String input) {
        // Modify input
        String output = input;
        return output;
    }
}

Compile your class and package it into a jar. The jar needs to be in the class path of the normalizer/log learner (usually
accomplished by copying it into the libs folder).
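
A minimal shell sketch of the compile-and-deploy steps; the jar version, package path, and the /opt/etd/normalizer/libs target folder are illustrative assumptions:

# Compile against the commons jar shipped with the streaming applications
javac -cp etd_commons-1.0.jar com/example/Reformatter.java

# Package the compiled class into a jar
jar cf reformatter.jar com/example/Reformatter.class

# Copy the jar into the libs folder of the normalizer
cp reformatter.jar /opt/etd/normalizer/libs/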

Enable your class in the configuration:

<Formatting>
<Formatter>
<Enabled>true</Enabled>
<Regex>special_log</Regex>
<FormatterClassName>com.example.Reformatter</FormatterClassName>
</Formatter>
</Formatting>

Monitoring Settings
All SAP Enterprise Threat Detection Streaming applications provide an HTTP endpoint that exports certain metrics that can be
consumed by Prometheus or any other compatible monitoring tools.

Prometheus is an open-source system monitoring and alerting toolkit that is easy to use, has a wide range of support, and is
capable of generating alerts.

Grafana or other observability tools can be used to visualize and aggregate data from various sources and provide monitoring
of your log infrastructure.
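
To check that the exporter of a streaming application is reachable, the metrics can be fetched with any HTTP client. A sketch, assuming an exporter port of 7000 and the common Prometheus /metrics path (the exact path may differ in your installation):

curl http://localhost:7000/metrics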

Monitoring Settings

Settings | Type | Value | Mandatory | Default
Prometheus.Enabled | Boolean | true or false | | true
Prometheus.ExporterPort | Integer | Port on which Prometheus statistics can be fetched via HTTP. Consumable with graphical dashboards like Grafana. Please ensure that the port is unique on the host. It is not possible to use the same port multiple times. | | 8090
Prometheus.ExporterBindAddress | String | Address of the network interface on which Prometheus statistics can be fetched via HTTP. For example, if you need to limit access to the Prometheus metrics to localhost only, specify the IP address of the loopback device "127.0.0.1" or use "localhost". The parameter is optional. If it is omitted, the default is to listen on all interfaces. | | 0.0.0.0

Reference Configuration for the Monitoring Settings


The following example shows a possible configuration for the Monitoring settings with the associated values. You can adapt this
configuration in line with your specific needs when configuring the streaming applications.

<ConfigRoot>

<Monitoring>
<Prometheus>
<ExporterPort>7000</ExporterPort>
<Enabled>true</Enabled>
<ExporterBindAddress>127.0.0.1</ExporterBindAddress>
</Prometheus>
</Monitoring>

</ConfigRoot>

The ConfigRoot differs between the various applications. Replace this with the respective configuration.

Placeholders
PlaceHolder | Description | Example Value | Used by Applications | Mandatory
{BaseDir} | Base directory under which the applications are installed | /opt/etd | All | Yes
{InstanceNumber} | Instance number which is used to build the port numbers for Prometheus monitoring. If you don't use Prometheus, simply use 00. | 00 | All | Only needed if you are going to use Prometheus monitoring
{LandscapeID} | Landscape ID which is used as a prefix for all Kafka topics to be able to separate the data when you want to run multiple SAP Enterprise Threat Detection landscapes on the same Kafka cluster. If unsure, simply use the System ID of your HANA system where SAP Enterprise Threat Detection is installed. | PROD | All except the Log Learner | Yes
{KafkaBootstrapServerLogPreProcessor} | Kafka bootstrap servers for the LPP Kafka | For a non-high-availability Kafka cluster: mykafkahost:9092; for a high-availability Kafka cluster with two brokers: mykafkahost1:9092,mykafkahost2:9092 | All except Log Learner | Yes
{KafkaBootstrapServerLogCollector} | Kafka bootstrap servers for the Log Collector | For a non-high-availability Kafka cluster: mykafkahost:9092; for a high-availability Kafka cluster with two brokers: mykafkahost1:9092,mykafkahost2:9092 | Log Collector, Normalizer, Transporter | Yes
{KafkaUser} | Username to access Kafka | n/a | All except Log Learner | KafkaUser and KafkaPassword are only mandatory when Kafka requires authentication, that is, when SASL is used in the properties file
{KafkaPassword} | Password for the {KafkaUser} | n/a | All except Log Learner | KafkaUser and KafkaPassword are only mandatory when Kafka requires authentication, that is, when SASL is used in the properties file
{KafkaTruststoreLocation} | Location of the Kafka truststore | config/truststore | All except Log Learner | Only needed if SSL is used for Kafka
{KafkaTruststorePassword} | Password for the Kafka truststore | n/a | All except Log Learner | Only needed if SSL is used for Kafka
{HANAjdbc} | HANA JDBC URL to the SQL port of your HANA tenant where SAP Enterprise Threat Detection is installed | jdbc:sap://myhanahost:30015 | HANA Writer, Cold Storage Reader | Yes
{HANAKeystore} | Keystore used to connect to HANA | config/keystore | HANA Writer | Only needed if SSL is used for the JDBC connection
{HANAKeystorePassword} | Password for {HANAKeystore} | n/a | HANA Writer | Only needed if SSL is used for the JDBC connection
{HANAKeystoreAlias} | Alias for {HANAKeystore} | n/a | HANA Writer | Only needed if SSL is used for the JDBC connection
{HANArest} | HTTP/HTTPS URL to HANA where SAP Enterprise Threat Detection is installed (the host and port you are also using to access the SAP Enterprise Threat Detection UIs) | http://myhanahost:8000 or https://myhanahost:4300 | HANA Writer | Yes
{HANAAuthenticator} | Type of authentication used for HANA REST, either x.509 or basic | basic | Normalizer, Log Learner | Yes
{HANATruststore} | Truststore used to connect to HANA | config/truststore | Normalizer, Log Learner | Only needed if HTTPS is used for HANA
{HANATruststorePassword} | Password for {HANATruststore} | n/a | Normalizer, Log Learner | Only needed if HTTPS is used for HANA
{HANAUserNormalizer} | HANA User used by the Normalizer | ETD_STREAMING_NORMALIZER | Normalizer | Yes
{HANAPasswordNormalizer} | HANA Password used by {HANAUserNormalizer} | n/a | Normalizer | Yes
{NormalizerRESTKeystore} | Keystore for accessing the REST service from the Normalizer | n/a | Normalizer | Only needed if X.509 is used as HANAAuthenticator in the Normalizer
{NormalizerRESTKeystorePassword} | Password for {NormalizerRESTKeystore} | n/a | Normalizer | Only needed if X.509 is used as HANAAuthenticator in the Normalizer
{NormalizerRESTKeystoreAlias} | Alias for {NormalizerRESTKeystore} | n/a | Normalizer | Only needed if X.509 is used as HANAAuthenticator in the Normalizer
{HANAUserLoglearning} | HANA User used by Loglearning | ETD_STREAMING_LOGLEARNER | Log Learner | Only for the Log Learner
{HANAPasswordLoglearning} | HANA Password used by {HANAUserLoglearning} | n/a | Log Learner | Only for the Log Learner
{LoglearnerRESTKeystore} | Keystore for accessing the REST service from the Log Learner | n/a | Log Learner | Only needed if X.509 is used as HANAAuthenticator in the Log Learner
{LoglearnerRESTKeystorePassword} | Password for {LoglearnerRESTKeystore} | n/a | Log Learner | Only needed if X.509 is used as HANAAuthenticator in the Log Learner
{LoglearnerRESTKeystoreAlias} | Alias for {LoglearnerRESTKeystore} | n/a | Log Learner | Only needed if X.509 is used as HANAAuthenticator in the Log Learner
{SysUserColdstorage} | Operating system user used to run the Cold Storage Writer | etdcoldstorage | Cold Storage Reader/Writer | Yes
{SysUserKafka2HANA} | Operating system user used to run the HANA Writer | etdkafka2hana | HANA Writer | Yes
{SysUserLogCollector} | Operating system user used to run the Log Collector | etdlogcollector | Log Collector | Yes
{SysUserLogLearner} | Operating system user used to run the Log Learner | etdloglearner | Log Learner | Only for the Log Learner
{SysUserNormalizer} | Operating system user used to run the Normalizer | etdnormalizer | Normalizer | Yes
{SysUserTransporter} | Operating system user used to run the Transporter | etdtransporter | Transporter | Yes
{SysGroupSecurityAdmin} | Operating system group to encrypt passwords | etdadmins | All | Yes
{SysGroupAdmin} | Operating system group to administer the applications | etdsecadmins | All | Yes
{HANAUserColdstorage} | HANA user used by the Cold Storage Writer to write data into HANA | ETD_DATA_COMMITER | Cold Storage Reader/Writer | Only for the Cold Storage Writer
{HANAPasswordColdstorage} | HANA Password used by the Cold Storage Writer | n/a | Cold Storage Reader/Writer | Only for the Cold Storage Writer
{ColdstorageRetentionDays} | Retention days from day 0 to day 365 | 365 | Cold Storage Reader/Writer | No
{HANAUserHotstorage} | HANA user used by the HANA Writer to write data into HANA | ETD_DATA_COMMITER | HANA Writer | Only for the HANA Writer
{HANAPasswordHotstorage} | HANA Password used by the HANA Writer | | HANA Writer | Only for the HANA Writer
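
The placeholders are normally substituted by the replacer script (see Installing SAP Enterprise Threat Detection Streaming Manually). If you ever need to substitute a single placeholder by hand, a simple shell sketch; the configuration file path is an illustrative assumption:

# Replace the {LandscapeID} placeholder in one configuration file
sed -i 's/{LandscapeID}/PROD/g' /opt/etd/coldstorage/config/coldstorage.xml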

Kafka Topics Used By SAP Enterprise Threat Detection Streaming

SAP Enterprise Threat Detection Streaming makes use of Kafka to store data in different categories of data called Kafka topics,
which are used to exchange data between the Kafka clusters and the SAP Enterprise Threat Detection Streaming components.

Normally, no manual configuration is needed here because the placeholders below will be automatically replaced by the replacer
script (see Installing SAP Enterprise Threat Detection Streaming Manually). However, for some advanced installation setups it's
necessary to change the source and target topics manually.

Kafka Topics Overview

Below you can find an overview of all Kafka topics that are used to read from (source topics) and to write to (target topics). The
configuration can be done in an XML file for each application. For more information, see Application-Specific Installation Steps.

Kafka Topics

Kafka: The Kafka cluster

Default Topic Name: The default value that is used if nothing is configured

Template: The value that is used in the delivered templates.

Written By / ID: The application(s) that write to the topic and the ID that is used in the configuration. The ID can either be an ID
within an <Id> tag, or the tag name itself. In the case of the transporter, the value is specified in brackets, because the topic is
specified by value. Multiple applications can write to the same topic.

Read By / ID: The application(s) that read from the topic and the ID that is used in the configuration. The ID can either be an ID
within an <Id> tag, or the tag name itself. In the case of the transporter, the value is specified in brackets, because the topic is
specified by value. Multiple applications can read from the same topic.

Volume: The volume of this topic in relation to the other topics. High volume topics might need more partitions than low volume
topics.

Description: A description of the contents of this topic.
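
If a topic has to be created or re-partitioned manually, the standard Kafka command line tools can be used. A sketch, assuming the {LandscapeID} PROD and a broker at mykafkahost:9092; the partition and replication-factor values are examples, not sizing recommendations:

# Create a high-volume topic with additional partitions
kafka-topics.sh --create --topic PROD-RTLogEventIn \
  --bootstrap-server mykafkahost:9092 \
  --partitions 6 --replication-factor 2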

KAFKA | Default Topic Name | Template | Written By / Id
LC | RTLogEventIn | {LandscapeID}-RTLogEventIn | LogCollector/RTLogEventIn
LC | UnrecognizedLogsOutForReplication | {LandscapeID}-UnrecognizedLogsOutForReplication | LogCollector/UnrecognizedLogsOutForReplication
LC | PingFromSystemIn | {LandscapeID}-PingFromSystemIn | LogCollector/PingFromSystemIn
LC | PingDetailFromSystemIn | {LandscapeID}-PingDetailFromSystemIn | LogCollector/PingDetailFromSystemIn
LC | JSONLogEvents | {LandscapeID}-JSONLogEvents | LogCollector/JSONLogEvents
LC | JSONMasterData | {LandscapeID}-JSONMasterData | LogCollector/JSONMasterData
LC | MD-proxy-PingFromSystemIn | {LandscapeID}-PingFromSystemIn | LogCollector/PingFromSystemIn
LC | MD-proxy-PingDetailFromSystemIn | {LandscapeID}-PingDetailFromSystemIn | LogCollector/PingDetailFromSystemIn
LC | MD-proxy-ApplicationServerIP | {LandscapeID}-ApplicationServerIP | LogCollector/ApplicationServerIP
LC | MD-proxy-ApplicationServer | {LandscapeID}-ApplicationServer | LogCollector/ApplicationServer
LC | MD-proxy-HRData | {LandscapeID}-HRData | LogCollector/HRData
LC | MD-proxy-MasterDataIn | {LandscapeID}-MasterDataIn | LogCollector/MasterDataIn
LC | MD-proxy-NoteImplementationIn | {LandscapeID}-NoteImplementationIn | LogCollector/NoteImplementationIn
LC | MD-proxy-NoteImplementationSacfIn | {LandscapeID}-NoteImplementationSacfIn | LogCollector/NoteImplementationSacfIn
LC | MD-proxy-ObjectAuthorizationIn | {LandscapeID}-ObjectAuthorizationIn | LogCollector/ObjectAuthorizationIn
LC | MD-proxy-ObjectDirectoryIn | {LandscapeID}-ObjectDirectoryIn | LogCollector/ObjectDirectoryIn
LC | MD-proxy-SystemApplicationServer | {LandscapeID}-SystemApplicationServer | LogCollector/SystemApplicationServer
LC | MD-proxy-SystemData | {LandscapeID}-SystemData | LogCollector/SystemData
LC | MD-proxy-SystemDetail | {LandscapeID}-SystemDetail | LogCollector/SystemDetail
LC | MD-proxy-SystemHeader | {LandscapeID}-SystemHeader | LogCollector/SystemHeader
LC | MD-proxy-UserHRDataIn | {LandscapeID}-UserHRDataIn | LogCollector/UserHRDataIn
LC | MD-proxy-UserSystemDataIn | {LandscapeID}-UserSystemDataIn | LogCollector/UserSystemDataIn
LPP | n/a | {LandscapeID}-NormalizedDataOut | Normalizer/LogPreProcessorNormalized
LPP | n/a | | Normalizer/LogPreProcessorOriginal
LPP | n/a | {LandscapeID}-unrecognized | Normalizer/LogPreProcessorUnrecognized
LPP | n/a | {LandscapeID}-Log4jHANAOut | LogLearner/LogPreProcessorLog4jHANAOut
LPP | n/a | {LandscapeID}-NewUserContextSystemData | Normalizer/LogPreProcessorNewUserSystemData
LPP | n/a | {LandscapeID}-DHCPIPAssignHANADBOut | Normalizer/LogPreProcessorDHCPIPAssignHANADB
LPP | n/a | {LandscapeID}-DHCPIPAssignDBHistory | Normalizer/LogPreProcessorDHCPIPAssignDBHistory
LPP | n/a | ContentReplication | HANA Writer/ContentReplication
LPP | n/a | {LandscapeID}-MasterDataInConverted | Transporter/[MasterDataIn]
LPP | n/a | {LandscapeID}-PingFromSystemInOutput | Transporter/[PingFromSystemIn]
LPP | n/a | {LandscapeID}-PingDetailFromSystemInOutput | Transporter/[PingDetailFromSystemIn]
LPP | n/a | {LandscapeID}-ApplicationServerConverted | Transporter/[ApplicationServer]
LPP | n/a | {LandscapeID}-ApplicationServerIPConverted | Transporter/[ApplicationServerIP]
LPP | n/a | {LandscapeID}-NoteImplementationConverted | Transporter/[NoteImplementationIn]
LPP | n/a | {LandscapeID}-NoteImplementationSacfConverted | Transporter/[NoteImplementationSacfIn]
LPP | n/a | {LandscapeID}-ObjectAuthorizationConverted | Transporter/[ObjectAuthorizationIn]
LPP | n/a | {LandscapeID}-ObjectDirectoryConverted | Transporter/[ObjectDirectoryIn]
LPP | n/a | {LandscapeID}-SystemApplicationServerConverted | Transporter/[SystemApplicationServer]
LPP | n/a | {LandscapeID}-SystemDetailConverted | Transporter/[SystemDetail]
LPP | n/a | {LandscapeID}-SystemHeaderConverted | Transporter/[SystemHeader]
LPP | n/a | {LandscapeID}-SystemHeaderExtConverted | Transporter/[SystemHeader]
LPP | n/a | {LandscapeID}-UserDataConverted | Transporter/[HRData], Transporter/[UserHRDataIn]
LPP | n/a | {LandscapeID}-UserSystemDataConverted | Transporter/[SystemData], Transporter/[UserSystemDataIn]
LPP | JSONMasterDataOutput | {LandscapeID}-JSONMasterDataOutput | Transporter/JSONMasterDataOutput
